KVM fixes for 6.14 part 1

Merge tag 'kvm-x86-fixes-6.14-rcN' of https://github.com/kvm-x86/linux into HEAD

- Reject Hyper-V SEND_IPI hypercalls if the local APIC isn't being emulated
  by KVM to fix a NULL pointer dereference.

- Enter guest mode (L2) from KVM's perspective before initializing the vCPU's
  nested NPT MMU so that the MMU is properly tagged for L2, not L1.

- Load the guest's DR6 outside of the innermost .vcpu_run() loop, as the
  guest's value may be stale if a VM-Exit is handled in the fastpath.
This commit is contained in: commit d3d0b8dfe0

234 files changed, 2004 insertions(+), 951 deletions(-)
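Of the three fixes, the DR6 one is the subtlest. A minimal sketch of the ordering problem (all names hypothetical, heavily simplified from KVM's real vcpu_enter_guest()/.vcpu_run() split; the actual hunks are in the KVM sections below):

/*
 * Hypothetical sketch, not KVM's actual code: when the guest owns the debug
 * registers (KVM_DEBUGREG_WONT_EXIT), hardware DR6 holds the guest's live
 * value across fastpath VM-Exits.  Reloading KVM's cached copy on every
 * iteration of the innermost run loop could clobber a value the guest just
 * wrote, so the load happens once, before the loop.
 */
static void run_vcpu_sketch(struct vcpu *vcpu)
{
	if (guest_owns_debug_regs(vcpu))
		hw_set_dr6(vcpu->cached_dr6);	/* once, outside the loop */

	do {
		exit = hw_enter_guest(vcpu);	/* innermost .vcpu_run() body */
		/* a fastpath exit loops here without refreshing DR6 */
	} while (handle_exit_fastpath(vcpu, exit) == RE_ENTER_GUEST);
}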
CREDITS (6 changed lines)

@@ -2515,11 +2515,9 @@ D: SLS distribution
 D: Initial implementation of VC's, pty's and select()
 
 N: Pavel Machek
-E: pavel@ucw.cz
+E: pavel@kernel.org
 P: 4096R/92DFCE96 4FA7 9EEF FCD4 C44F C585 B8C7 C060 2241 92DF CE96
-D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd,
-D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB,
-D: work on suspend-to-ram/disk, killing duplicates from ioctl32,
+D: NBD, Sun4/330 port, USB, work on suspend-to-ram/disk,
 D: Altera SoCFPGA and Nokia N900 support.
 S: Czech Republic
 
[Microchip LAN966x OIC devicetree binding]

@@ -14,9 +14,8 @@ allOf:
 
 description: |
   The Microchip LAN966x outband interrupt controller (OIC) maps the internal
-  interrupt sources of the LAN966x device to an external interrupt.
-  When the LAN966x device is used as a PCI device, the external interrupt is
-  routed to the PCI interrupt.
+  interrupt sources of the LAN966x device to a PCI interrupt when the LAN966x
+  device is used as a PCI device.
 
 properties:
   compatible:
Documentation/filesystems/bcachefs/SubmittingPatches.rst (new file, 98 lines)

Submitting patches to bcachefs:
===============================

Patches must be tested before being submitted, either with the xfstests suite
[0], or the full bcachefs test suite in ktest [1], depending on what's being
touched. Note that ktest wraps xfstests and will be an easier method of running
it for most users; it includes single-command wrappers for all the mainstream
in-kernel local filesystems.

Patches will undergo more testing after being merged (including
lockdep/kasan/preempt/etc. variants); these are not generally required to be
run by the submitter - but do put some thought into what you're changing and
which tests might be relevant, e.g. are you dealing with tricky memory layout
work? then kasan; are you doing locking work? then lockdep; and ktest includes
single-command variants for the debug build types you'll most likely need.

The exception to this rule is incomplete WIP/RFC patches: if you're working on
something nontrivial, it's encouraged to send out a WIP patch to let people
know what you're doing and make sure you're on the right track. Just make sure
it includes a brief note as to what's done and what's incomplete, to avoid
confusion.

Rigorous checkpatch.pl adherence is not required (many of its warnings are
considered out of date), but try not to deviate too much without reason.

Focus on writing code that reads well and is organized well; code should be
aesthetically pleasing.

CI:
===

Instead of running your tests locally, when running the full test suite it's
preferable to let a server farm do it in parallel, and then have the results
in a nice test dashboard (which can tell you which failures are new, and
presents results in a git log view, avoiding the need for most bisecting).

That exists [2], and community members may request an account. If you work for
a big tech company, you'll need to help out with server costs to get access -
but the CI is not restricted to running bcachefs tests: it runs any ktest test
(which generally makes it easy to wrap other tests that can run in qemu).

Other things to think about:
============================

- How will we debug this code? Is there sufficient introspection to diagnose
  when something starts acting wonky on a user machine?

  We don't necessarily need every single field of every data structure visible
  with introspection, but having the important fields of all the core data
  types wired up makes debugging drastically easier - a bit of thoughtful
  foresight greatly reduces the need to have people build custom kernels with
  debug patches.

  More broadly, think about all the debug tooling that might be needed.

- Does it make the codebase more or less of a mess? Can we also try to do some
  organizing, too?

- Do new tests need to be written? New assertions? How do we know and verify
  that the code is correct, and what happens if something goes wrong?

  We don't yet have automated code coverage analysis or easy fault injection -
  but for now, pretend we did and ask what they might tell us.

  Assertions are hugely important, given that we don't yet have a systems
  language that can do ergonomic embedded correctness proofs. Hitting an assert
  in testing is much better than wandering off into undefined behaviour la-la
  land - use them. Use them judiciously, and not as a replacement for proper
  error handling, but use them.

- Does it need to be performance tested? Should we add new performance counters?

  bcachefs has a set of persistent runtime counters which can be viewed with
  the 'bcachefs fs top' command; this should give users a basic idea of what
  their filesystem is currently doing. If you're doing a new feature or looking
  at old code, think if anything should be added.

- If it's a new on-disk format feature - have upgrades and downgrades been
  tested? (Automated tests exist but aren't in the CI, due to the hassle of
  disk image management; coordinate to have them run.)

Mailing list, IRC:
==================

Patches should hit the list [3], but much discussion and code review happens on
IRC as well [4]; many people appreciate the more conversational approach and
quicker feedback.

Additionally, we have a lively user community doing excellent QA work, which
exists primarily on IRC. Please make use of that resource; user feedback is
important for any nontrivial feature, and documenting it in commit messages
would be a good idea.

[0]: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
[1]: https://evilpiepirate.org/git/ktest.git/
[2]: https://evilpiepirate.org/~testdashboard/ci/
[3]: linux-bcachefs@vger.kernel.org
[4]: irc.oftc.net#bcache, #bcachefs-dev
[bcachefs Documentation index]

@@ -9,4 +9,5 @@ bcachefs Documentation
    :numbered:
 
    CodingStyle
+   SubmittingPatches
    errorcodes
MAINTAINERS (57 changed lines)

@@ -2209,7 +2209,6 @@ F: sound/soc/codecs/cs42l84.*
 F: sound/soc/codecs/ssm3515.c
 
 ARM/APPLE MACHINE SUPPORT
-M: Hector Martin <marcan@marcan.st>
 M: Sven Peter <sven@svenpeter.dev>
 R: Alyssa Rosenzweig <alyssa@rosenzweig.io>
 L: asahi@lists.linux.dev

@@ -3955,6 +3954,7 @@ M: Kent Overstreet <kent.overstreet@linux.dev>
 L: linux-bcachefs@vger.kernel.org
 S: Supported
 C: irc://irc.oftc.net/bcache
+P: Documentation/filesystems/bcachefs/SubmittingPatches.rst
 T: git https://evilpiepirate.org/git/bcachefs.git
 F: fs/bcachefs/
 F: Documentation/filesystems/bcachefs/

@@ -9418,7 +9418,7 @@ F: fs/freevxfs/
 
 FREEZER
 M: "Rafael J. Wysocki" <rafael@kernel.org>
-M: Pavel Machek <pavel@ucw.cz>
+M: Pavel Machek <pavel@kernel.org>
 L: linux-pm@vger.kernel.org
 S: Supported
 F: Documentation/power/freezing-of-tasks.rst

@@ -9878,7 +9878,7 @@ S: Maintained
 F: drivers/staging/gpib/
 
 GPIO ACPI SUPPORT
-M: Mika Westerberg <mika.westerberg@linux.intel.com>
+M: Mika Westerberg <westeri@kernel.org>
 M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 L: linux-gpio@vger.kernel.org
 L: linux-acpi@vger.kernel.org

@@ -10253,7 +10253,7 @@ F: drivers/video/fbdev/hgafb.c
 
 HIBERNATION (aka Software Suspend, aka swsusp)
 M: "Rafael J. Wysocki" <rafael@kernel.org>
-M: Pavel Machek <pavel@ucw.cz>
+M: Pavel Machek <pavel@kernel.org>
 L: linux-pm@vger.kernel.org
 S: Supported
 B: https://bugzilla.kernel.org

@@ -13124,8 +13124,8 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/har
 F: scripts/leaking_addresses.pl
 
 LED SUBSYSTEM
-M: Pavel Machek <pavel@ucw.cz>
 M: Lee Jones <lee@kernel.org>
+M: Pavel Machek <pavel@kernel.org>
 L: linux-leds@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/leds.git

@@ -16462,6 +16462,22 @@ F: include/net/dsa.h
 F: net/dsa/
 F: tools/testing/selftests/drivers/net/dsa/
 
+NETWORKING [ETHTOOL]
+M: Andrew Lunn <andrew@lunn.ch>
+M: Jakub Kicinski <kuba@kernel.org>
+F: Documentation/netlink/specs/ethtool.yaml
+F: Documentation/networking/ethtool-netlink.rst
+F: include/linux/ethtool*
+F: include/uapi/linux/ethtool*
+F: net/ethtool/
+F: tools/testing/selftests/drivers/net/*/ethtool*
+
+NETWORKING [ETHTOOL CABLE TEST]
+M: Andrew Lunn <andrew@lunn.ch>
+F: net/ethtool/cabletest.c
+F: tools/testing/selftests/drivers/net/*/ethtool*
+K: cable_test
+
 NETWORKING [GENERAL]
 M: "David S. Miller" <davem@davemloft.net>
 M: Eric Dumazet <edumazet@google.com>

@@ -16621,6 +16637,7 @@ F: tools/testing/selftests/net/mptcp/
 NETWORKING [TCP]
 M: Eric Dumazet <edumazet@google.com>
 M: Neal Cardwell <ncardwell@google.com>
+R: Kuniyuki Iwashima <kuniyu@amazon.com>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/networking/net_cachelines/tcp_sock.rst

@@ -16648,6 +16665,31 @@ F: include/net/tls.h
 F: include/uapi/linux/tls.h
 F: net/tls/*
 
+NETWORKING [SOCKETS]
+M: Eric Dumazet <edumazet@google.com>
+M: Kuniyuki Iwashima <kuniyu@amazon.com>
+M: Paolo Abeni <pabeni@redhat.com>
+M: Willem de Bruijn <willemb@google.com>
+S: Maintained
+F: include/linux/sock_diag.h
+F: include/linux/socket.h
+F: include/linux/sockptr.h
+F: include/net/sock.h
+F: include/net/sock_reuseport.h
+F: include/uapi/linux/socket.h
+F: net/core/*sock*
+F: net/core/scm.c
+F: net/socket.c
+
+NETWORKING [UNIX SOCKETS]
+M: Kuniyuki Iwashima <kuniyu@amazon.com>
+S: Maintained
+F: include/net/af_unix.h
+F: include/net/netns/unix.h
+F: include/uapi/linux/unix_diag.h
+F: net/unix/
+F: tools/testing/selftests/net/af_unix/
+
 NETXEN (1/10) GbE SUPPORT
 M: Manish Chopra <manishc@marvell.com>
 M: Rahul Verma <rahulv@marvell.com>

@@ -16781,7 +16823,7 @@ F: include/linux/tick.h
 F: kernel/time/tick*.*
 
 NOKIA N900 CAMERA SUPPORT (ET8EK8 SENSOR, AD5820 FOCUS)
-M: Pavel Machek <pavel@ucw.cz>
+M: Pavel Machek <pavel@kernel.org>
 M: Sakari Ailus <sakari.ailus@iki.fi>
 L: linux-media@vger.kernel.org
 S: Maintained

@@ -17713,6 +17755,7 @@ L: netdev@vger.kernel.org
 L: dev@openvswitch.org
 S: Maintained
 W: http://openvswitch.org
+F: Documentation/networking/openvswitch.rst
 F: include/uapi/linux/openvswitch.h
 F: net/openvswitch/
 F: tools/testing/selftests/net/openvswitch/

@@ -22806,7 +22849,7 @@ F: drivers/sh/
 SUSPEND TO RAM
 M: "Rafael J. Wysocki" <rafael@kernel.org>
 M: Len Brown <len.brown@intel.com>
-M: Pavel Machek <pavel@ucw.cz>
+M: Pavel Machek <pavel@kernel.org>
 L: linux-pm@vger.kernel.org
 S: Supported
 B: https://bugzilla.kernel.org
Makefile (2 changed lines)

@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
[alpha: ELF support]

@@ -74,7 +74,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
 /*
  * This is used to ensure we don't load something for the wrong architecture.
  */
-#define elf_check_arch(x) ((x)->e_machine == EM_ALPHA)
+#define elf_check_arch(x) (((x)->e_machine == EM_ALPHA) && !((x)->e_flags & EF_ALPHA_32BIT))
 
 /*
  * These are used to set parameters in the core dumps.

@@ -137,10 +137,6 @@ extern int dump_elf_task(elf_greg_t *dest, struct task_struct *task);
 	 : amask (AMASK_CIX) ? "ev6" : "ev67");	\
 })
 
-#define SET_PERSONALITY(EX) \
-	set_personality(((EX).e_flags & EF_ALPHA_32BIT) \
-		? PER_LINUX_32BIT : PER_LINUX)
-
 extern int alpha_l1i_cacheshape;
 extern int alpha_l1d_cacheshape;
 extern int alpha_l2_cacheshape;
[alpha: page tables]

@@ -360,7 +360,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 extern void paging_init(void);
 
-/* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */
+/* We have our own get_unmapped_area */
 #define HAVE_ARCH_UNMAPPED_AREA
 
 #endif /* _ALPHA_PGTABLE_H */
[alpha: asm/processor.h]

@@ -8,23 +8,19 @@
 #ifndef __ASM_ALPHA_PROCESSOR_H
 #define __ASM_ALPHA_PROCESSOR_H
 
-#include <linux/personality.h>	/* for ADDR_LIMIT_32BIT */
-
 /*
  * We have a 42-bit user address space: 4TB user VM...
  */
 #define TASK_SIZE (0x40000000000UL)
 
-#define STACK_TOP \
-  (current->personality & ADDR_LIMIT_32BIT ? 0x80000000 : 0x00120000000UL)
+#define STACK_TOP (0x00120000000UL)
 
 #define STACK_TOP_MAX	0x00120000000UL
 
 /* This decides where the kernel will search for a free chunk of vm
  * space during mmap's.
  */
-#define TASK_UNMAPPED_BASE \
-  ((current->personality & ADDR_LIMIT_32BIT) ? 0x40000000 : TASK_SIZE / 2)
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 2)
 
 /* This is dead.  Everything has been moved to thread_info.  */
 struct thread_struct { };
[alpha: get_unmapped_area]

@@ -1210,8 +1210,7 @@ SYSCALL_DEFINE1(old_adjtimex, struct timex32 __user *, txc_p)
 	return ret;
 }
 
-/* Get an address range which is currently unmapped.  Similar to the
-   generic version except that we know how to honor ADDR_LIMIT_32BIT.  */
+/* Get an address range which is currently unmapped. */
 
 static unsigned long
 arch_get_unmapped_area_1(unsigned long addr, unsigned long len,

@@ -1230,13 +1229,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		       unsigned long len, unsigned long pgoff,
 		       unsigned long flags, vm_flags_t vm_flags)
 {
-	unsigned long limit;
-
-	/* "32 bit" actually means 31 bit, since pointers sign extend.  */
-	if (current->personality & ADDR_LIMIT_32BIT)
-		limit = 0x80000000;
-	else
-		limit = TASK_SIZE;
+	unsigned long limit = TASK_SIZE;
 
 	if (len > limit)
 		return -ENOMEM;
[powerpc: fsl_msi]

@@ -75,7 +75,7 @@ static void fsl_msi_print_chip(struct irq_data *irqd, struct seq_file *p)
 	srs = (hwirq >> msi_data->srs_shift) & MSI_SRS_MASK;
 	cascade_virq = msi_data->cascade_array[srs]->virq;
 
-	seq_printf(p, " fsl-msi-%d", cascade_virq);
+	seq_printf(p, "fsl-msi-%d", cascade_virq);
 }
[x86 boot Makefile]

@@ -25,6 +25,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 # avoid errors with '-march=i386', and future flags may depend on the target to
 # be valid.
 KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
+KBUILD_CFLAGS += -std=gnu11
 KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
 KBUILD_CFLAGS += -Wundef
 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
[KVM x86: vendor op table]

@@ -48,6 +48,7 @@ KVM_X86_OP(set_idt)
 KVM_X86_OP(get_gdt)
 KVM_X86_OP(set_gdt)
 KVM_X86_OP(sync_dirty_debug_regs)
+KVM_X86_OP(set_dr6)
 KVM_X86_OP(set_dr7)
 KVM_X86_OP(cache_reg)
 KVM_X86_OP(get_rflags)
[KVM x86: kvm_x86_ops]

@@ -1696,6 +1696,7 @@ struct kvm_x86_ops {
 	void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
+	void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value);
 	void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
 	void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
[KVM x86: Hyper-V]

@@ -2226,6 +2226,9 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 	u32 vector;
 	bool all_cpus;
 
+	if (!lapic_in_kernel(vcpu))
+		return HV_STATUS_INVALID_HYPERCALL_INPUT;
+
 	if (hc->code == HVCALL_SEND_IPI) {
 		if (!hc->fast) {
 			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,

@@ -2852,7 +2855,8 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 			ent->eax |= HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED;
 			ent->eax |= HV_X64_APIC_ACCESS_RECOMMENDED;
 			ent->eax |= HV_X64_RELAXED_TIMING_RECOMMENDED;
-			ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;
+			if (!vcpu || lapic_in_kernel(vcpu))
+				ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;
 			ent->eax |= HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED;
 			if (evmcs_ver)
 				ent->eax |= HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
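A hedged illustration of the failure mode being fixed here: when the local APIC is emulated in userspace (no in-kernel irqchip), the per-vCPU kvm_lapic object is never allocated, and the IPI path would dereference it. The sketch below is not the literal KVM call chain, and deliver_fixed_irq() is a hypothetical stand-in for the real IRQ-routing helpers:

/*
 * Hedged sketch of the pre-fix crash, not the literal KVM call chain: with a
 * userspace-emulated APIC there is no in-kernel kvm_lapic, so anything the
 * Hyper-V IPI path does with vcpu->arch.apic dereferences NULL.  Hence the
 * lapic_in_kernel() guard at the top of kvm_hv_send_ipi() above.
 */
static u64 hv_send_ipi_sketch(struct kvm_vcpu *vcpu, u32 vector)
{
	struct kvm_lapic *apic = vcpu->arch.apic; /* NULL without in-kernel APIC */

	/* deliver_fixed_irq() is hypothetical, standing in for IRQ routing */
	return deliver_fixed_irq(apic, vector);   /* would dereference 'apic' */
}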
[KVM x86: MMU]

@@ -5540,7 +5540,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
 	union kvm_mmu_page_role root_role;
 
 	/* NPT requires CR0.PG=1. */
-	WARN_ON_ONCE(cpu_role.base.direct);
+	WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
 
 	root_role = cpu_role.base;
 	root_role.level = kvm_mmu_get_tdp_level(vcpu);
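Why the strengthened WARN matters, sketched with a hypothetical, simplified role layout (not KVM's real kvm_mmu_page_role): shadow pages are keyed by the full role word, so the nested NPT MMU must be initialized after enter_guest_mode() for its pages to carry guest_mode=1 and stay distinct from L1's.

/* Hypothetical, simplified role layout.  Two MMUs whose roles differ only in
 * guest_mode can never share pages; initializing the nested NPT MMU before
 * enter_guest_mode() would stamp its role with guest_mode=0, making its pages
 * indistinguishable from L1's. */
union page_role_sketch {
	struct {
		unsigned int level:4;
		unsigned int direct:1;
		unsigned int guest_mode:1;	/* must be 1 for the L2 MMU */
	};
	unsigned int word;			/* hash/compare key */
};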
[KVM SVM: nested]

@@ -646,6 +646,11 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	u32 pause_count12;
 	u32 pause_thresh12;
 
+	nested_svm_transition_tlb_flush(vcpu);
+
+	/* Enter Guest-Mode */
+	enter_guest_mode(vcpu);
+
 	/*
 	 * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
 	 * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes.

@@ -762,11 +767,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 		}
 	}
 
-	nested_svm_transition_tlb_flush(vcpu);
-
-	/* Enter Guest-Mode */
-	enter_guest_mode(vcpu);
-
 	/*
 	 * Merge guest and host intercepts - must be called with vcpu in
 	 * guest-mode to take effect.
[KVM SVM]

@@ -1991,11 +1991,11 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 	svm->asid = sd->next_asid++;
 }
 
-static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value)
+static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
 {
-	struct vmcb *vmcb = svm->vmcb;
+	struct vmcb *vmcb = to_svm(vcpu)->vmcb;
 
-	if (svm->vcpu.arch.guest_state_protected)
+	if (vcpu->arch.guest_state_protected)
 		return;
 
 	if (unlikely(value != vmcb->save.dr6)) {

@@ -4247,10 +4247,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
 	 * Run with all-zero DR6 unless needed, so that we can get the exact cause
 	 * of a #DB.
 	 */
-	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
-		svm_set_dr6(svm, vcpu->arch.dr6);
-	else
-		svm_set_dr6(svm, DR6_ACTIVE_LOW);
+	if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
+		svm_set_dr6(vcpu, DR6_ACTIVE_LOW);
 
 	clgi();
 	kvm_load_guest_xsave_state(vcpu);

@@ -5043,6 +5041,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.set_idt = svm_set_idt,
 	.get_gdt = svm_get_gdt,
 	.set_gdt = svm_set_gdt,
+	.set_dr6 = svm_set_dr6,
 	.set_dr7 = svm_set_dr7,
 	.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
 	.cache_reg = svm_cache_reg,
[KVM VMX: op table]

@@ -61,6 +61,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_idt = vmx_set_idt,
 	.get_gdt = vmx_get_gdt,
 	.set_gdt = vmx_set_gdt,
+	.set_dr6 = vmx_set_dr6,
 	.set_dr7 = vmx_set_dr7,
 	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
 	.cache_reg = vmx_cache_reg,
[KVM VMX]

@@ -5648,6 +5648,12 @@ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 	set_debugreg(DR6_RESERVED, 6);
 }
 
+void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	lockdep_assert_irqs_disabled();
+	set_debugreg(vcpu->arch.dr6, 6);
+}
+
 void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
 {
 	vmcs_writel(GUEST_DR7, val);

@@ -7417,10 +7423,6 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 		vmx->loaded_vmcs->host_state.cr4 = cr4;
 	}
 
-	/* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
-	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
-		set_debugreg(vcpu->arch.dr6, 6);
-
 	/* When single-stepping over STI and MOV SS, we must clear the
 	 * corresponding interruptibility bits in the guest state. Otherwise
 	 * vmentry fails as it then expects bit 14 (BS) in pending debug
[KVM VMX: declarations]

@@ -73,6 +73,7 @@ void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val);
 void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val);
 void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu);
 void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
[KVM x86: vcpu_enter_guest]

@@ -10961,6 +10961,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		set_debugreg(vcpu->arch.eff_db[1], 1);
 		set_debugreg(vcpu->arch.eff_db[2], 2);
 		set_debugreg(vcpu->arch.eff_db[3], 3);
+		/* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
+		if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+			kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);
 	} else if (unlikely(hw_breakpoint_active())) {
 		set_debugreg(0, 7);
 	}
[x86/Xen: xen_hypercall_hvm]

@@ -100,9 +100,6 @@ SYM_FUNC_START(xen_hypercall_hvm)
 	push %r10
 	push %r9
 	push %r8
-#ifdef CONFIG_FRAME_POINTER
-	pushq $0	/* Dummy push for stack alignment. */
-#endif
 #endif
 	/* Set the vendor specific function. */
 	call __xen_hypercall_setfunc

@@ -117,11 +114,8 @@ SYM_FUNC_START(xen_hypercall_hvm)
 	pop %ebx
 	pop %eax
 #else
-	lea xen_hypercall_amd(%rip), %rbx
-	cmp %rax, %rbx
-#ifdef CONFIG_FRAME_POINTER
-	pop %rax	/* Dummy pop. */
-#endif
+	lea xen_hypercall_amd(%rip), %rcx
+	cmp %rax, %rcx
 	pop %r8
 	pop %r9
 	pop %r10

@@ -132,6 +126,7 @@ SYM_FUNC_START(xen_hypercall_hvm)
 	pop %rcx
 	pop %rax
 #endif
+	FRAME_END
 	/* Use correct hypercall function. */
 	jz xen_hypercall_amd
 	jmp xen_hypercall_intel
[accel/amdxdna]

@@ -21,6 +21,11 @@
 
 #define AMDXDNA_AUTOSUSPEND_DELAY	5000 /* milliseconds */
 
+MODULE_FIRMWARE("amdnpu/1502_00/npu.sbin");
+MODULE_FIRMWARE("amdnpu/17f0_10/npu.sbin");
+MODULE_FIRMWARE("amdnpu/17f0_11/npu.sbin");
+MODULE_FIRMWARE("amdnpu/17f0_20/npu.sbin");
+
 /*
  * Bind the driver base on (vendor_id, device_id) pair and later use the
  * (device_id, rev_id) pair as a key to select the devices. The devices with
[accel/ivpu: boot]

@@ -397,15 +397,19 @@ int ivpu_boot(struct ivpu_device *vdev)
 	if (ivpu_fw_is_cold_boot(vdev)) {
 		ret = ivpu_pm_dct_init(vdev);
 		if (ret)
-			goto err_diagnose_failure;
+			goto err_disable_ipc;
 
 		ret = ivpu_hw_sched_init(vdev);
 		if (ret)
-			goto err_diagnose_failure;
+			goto err_disable_ipc;
 	}
 
 	return 0;
 
+err_disable_ipc:
+	ivpu_ipc_disable(vdev);
+	ivpu_hw_irq_disable(vdev);
+	disable_irq(vdev->irq);
 err_diagnose_failure:
 	ivpu_hw_diagnose_failure(vdev);
 	ivpu_mmu_evtq_dump(vdev);
[accel/ivpu: PM/recovery]

@@ -115,41 +115,57 @@ err_power_down:
 	return ret;
 }
 
-static void ivpu_pm_recovery_work(struct work_struct *work)
+static void ivpu_pm_reset_begin(struct ivpu_device *vdev)
 {
-	struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, recovery_work);
-	struct ivpu_device *vdev = pm->vdev;
-	char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
-	int ret;
-
-	ivpu_err(vdev, "Recovering the NPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
-
-	ret = pm_runtime_resume_and_get(vdev->drm.dev);
-	if (ret)
-		ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
-
-	ivpu_jsm_state_dump(vdev);
-	ivpu_dev_coredump(vdev);
+	pm_runtime_disable(vdev->drm.dev);
 
 	atomic_inc(&vdev->pm->reset_counter);
 	atomic_set(&vdev->pm->reset_pending, 1);
 	down_write(&vdev->pm->reset_lock);
+}
+
+static void ivpu_pm_reset_complete(struct ivpu_device *vdev)
+{
+	int ret;
 
-	ivpu_suspend(vdev);
 	ivpu_pm_prepare_cold_boot(vdev);
 	ivpu_jobs_abort_all(vdev);
 	ivpu_ms_cleanup_all(vdev);
 
 	ret = ivpu_resume(vdev);
-	if (ret)
+	if (ret) {
 		ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
+		pm_runtime_set_suspended(vdev->drm.dev);
+	} else {
+		pm_runtime_set_active(vdev->drm.dev);
+	}
 
 	up_write(&vdev->pm->reset_lock);
 	atomic_set(&vdev->pm->reset_pending, 0);
 
-	kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
 	pm_runtime_mark_last_busy(vdev->drm.dev);
-	pm_runtime_put_autosuspend(vdev->drm.dev);
+	pm_runtime_enable(vdev->drm.dev);
+}
+
+static void ivpu_pm_recovery_work(struct work_struct *work)
+{
+	struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, recovery_work);
+	struct ivpu_device *vdev = pm->vdev;
+	char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
+
+	ivpu_err(vdev, "Recovering the NPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
+
+	ivpu_pm_reset_begin(vdev);
+
+	if (!pm_runtime_status_suspended(vdev->drm.dev)) {
+		ivpu_jsm_state_dump(vdev);
+		ivpu_dev_coredump(vdev);
+		ivpu_suspend(vdev);
+	}
+
+	ivpu_pm_reset_complete(vdev);
+
+	kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
 }
 
 void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason)

@@ -309,7 +325,10 @@ int ivpu_rpm_get(struct ivpu_device *vdev)
 	int ret;
 
 	ret = pm_runtime_resume_and_get(vdev->drm.dev);
-	drm_WARN_ON(&vdev->drm, ret < 0);
+	if (ret < 0) {
+		ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
+		pm_runtime_set_suspended(vdev->drm.dev);
+	}
 
 	return ret;
 }

@@ -325,16 +344,13 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
 	struct ivpu_device *vdev = pci_get_drvdata(pdev);
 
 	ivpu_dbg(vdev, PM, "Pre-reset..\n");
-	atomic_inc(&vdev->pm->reset_counter);
-	atomic_set(&vdev->pm->reset_pending, 1);
 
-	pm_runtime_get_sync(vdev->drm.dev);
-	down_write(&vdev->pm->reset_lock);
-	ivpu_prepare_for_reset(vdev);
-	ivpu_hw_reset(vdev);
-	ivpu_pm_prepare_cold_boot(vdev);
-	ivpu_jobs_abort_all(vdev);
-	ivpu_ms_cleanup_all(vdev);
+	ivpu_pm_reset_begin(vdev);
+
+	if (!pm_runtime_status_suspended(vdev->drm.dev)) {
+		ivpu_prepare_for_reset(vdev);
+		ivpu_hw_reset(vdev);
+	}
 
 	ivpu_dbg(vdev, PM, "Pre-reset done.\n");
 }

@@ -342,18 +358,12 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
 void ivpu_pm_reset_done_cb(struct pci_dev *pdev)
 {
 	struct ivpu_device *vdev = pci_get_drvdata(pdev);
-	int ret;
 
 	ivpu_dbg(vdev, PM, "Post-reset..\n");
-	ret = ivpu_resume(vdev);
-	if (ret)
-		ivpu_err(vdev, "Failed to set RESUME state: %d\n", ret);
-	up_write(&vdev->pm->reset_lock);
-	atomic_set(&vdev->pm->reset_pending, 0);
-	ivpu_dbg(vdev, PM, "Post-reset done.\n");
 
-	pm_runtime_mark_last_busy(vdev->drm.dev);
-	pm_runtime_put_autosuspend(vdev->drm.dev);
+	ivpu_pm_reset_complete(vdev);
+
+	ivpu_dbg(vdev, PM, "Post-reset done.\n");
 }
 
 void ivpu_pm_init(struct ivpu_device *vdev)
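The recovery worker and the PCI reset callbacks now share a common begin/complete pair. A condensed sketch of the pattern (ivpu_pm_reset_begin/complete and pm_runtime_status_suspended are taken from the diff above; do_hw_specific_teardown is a hypothetical placeholder for each caller's own work):

/* Pattern extracted from the hunks above: every reset path brackets its
 * device-specific work with the shared begin/complete pair, and only touches
 * hardware if the device isn't already runtime-suspended. */
static void reset_path_sketch(struct ivpu_device *vdev)
{
	ivpu_pm_reset_begin(vdev);		/* disable RPM, take reset_lock */

	if (!pm_runtime_status_suspended(vdev->drm.dev))
		do_hw_specific_teardown(vdev);	/* dump+suspend, or hw reset */

	ivpu_pm_reset_complete(vdev);		/* cold-boot prep, resume, unlock */
}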
[ACPI: PRM]

@@ -287,9 +287,7 @@ static acpi_status acpi_platformrt_space_handler(u32 function,
 		if (!handler || !module)
 			goto invalid_guid;
 
-		if (!handler->handler_addr ||
-		    !handler->static_data_buffer_addr ||
-		    !handler->acpi_param_buffer_addr) {
+		if (!handler->handler_addr) {
 			buffer->prm_status = PRM_HANDLER_ERROR;
 			return AE_OK;
 		}
[ACPI: device properties]

@@ -1187,8 +1187,6 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
 		}
 		break;
 	}
-	if (nval == 0)
-		return -EINVAL;
 
 	if (obj->type == ACPI_TYPE_BUFFER) {
 		if (proptype != DEV_PROP_U8)

@@ -1212,9 +1210,11 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
 		ret = acpi_copy_property_array_uint(items, (u64 *)val, nval);
 		break;
 	case DEV_PROP_STRING:
-		ret = acpi_copy_property_array_string(
-			items, (char **)val,
-			min_t(u32, nval, obj->package.count));
+		nval = min_t(u32, nval, obj->package.count);
+		if (nval == 0)
+			return -ENODATA;
+
+		ret = acpi_copy_property_array_string(items, (char **)val, nval);
 		break;
 	default:
 		ret = -EINVAL;
[ACPI: resource quirks]

@@ -563,6 +563,12 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
 			DMI_MATCH(DMI_BOARD_NAME, "RP-15"),
 		},
 	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Eluktronics Inc."),
+			DMI_MATCH(DMI_BOARD_NAME, "MECH-17"),
+		},
+	},
 	{
 		/* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */
 		.matches = {
[PM core: device suspend/resume]

@@ -1191,24 +1191,18 @@ static pm_message_t resume_event(pm_message_t sleep_state)
 	return PMSG_ON;
 }
 
-static void dpm_superior_set_must_resume(struct device *dev, bool set_active)
+static void dpm_superior_set_must_resume(struct device *dev)
 {
 	struct device_link *link;
 	int idx;
 
-	if (dev->parent) {
+	if (dev->parent)
 		dev->parent->power.must_resume = true;
-		if (set_active)
-			dev->parent->power.set_active = true;
-	}
 
 	idx = device_links_read_lock();
 
-	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) {
+	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node)
 		link->supplier->power.must_resume = true;
-		if (set_active)
-			link->supplier->power.set_active = true;
-	}
 
 	device_links_read_unlock(idx);
 }

@@ -1287,9 +1281,12 @@ Skip:
 		dev->power.must_resume = true;
 
 	if (dev->power.must_resume) {
-		dev->power.set_active = dev->power.set_active ||
-			dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND);
-		dpm_superior_set_must_resume(dev, dev->power.set_active);
+		if (dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND)) {
+			dev->power.set_active = true;
+			if (dev->parent && !dev->parent->power.ignore_children)
+				dev->parent->power.set_active = true;
+		}
+		dpm_superior_set_must_resume(dev);
 	}
 
 Complete:
[sunvdc block driver]

@@ -1127,8 +1127,8 @@ static void vdc_queue_drain(struct vdc_port *port)
 
 	spin_lock_irq(&port->vio.lock);
 	port->drain = 0;
-	blk_mq_unquiesce_queue(q, memflags);
-	blk_mq_unfreeze_queue(q);
+	blk_mq_unquiesce_queue(q);
+	blk_mq_unfreeze_queue(q, memflags);
 }
 
 static void vdc_ldc_reset_timer_work(struct work_struct *work)
[moxtet bus]

@@ -657,7 +657,7 @@ static void moxtet_irq_print_chip(struct irq_data *d, struct seq_file *p)
 
 	id = moxtet->modules[pos->idx];
 
-	seq_printf(p, " moxtet-%s.%i#%i", mox_module_name(id), pos->idx,
+	seq_printf(p, "moxtet-%s.%i#%i", mox_module_name(id), pos->idx,
 		   pos->bit);
 }
[cpufreq: Kconfig.arm]

@@ -17,7 +17,8 @@ config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
 
 config ARM_AIROHA_SOC_CPUFREQ
 	tristate "Airoha EN7581 SoC CPUFreq support"
-	depends on (ARCH_AIROHA && OF) || COMPILE_TEST
+	depends on ARCH_AIROHA || COMPILE_TEST
+	depends on OF
 	select PM_OPP
 	default ARCH_AIROHA
 	help
[cpufreq: amd-pstate]

@@ -699,7 +699,7 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
 	if (min_perf < lowest_nonlinear_perf)
 		min_perf = lowest_nonlinear_perf;
 
-	max_perf = cap_perf;
+	max_perf = cpudata->max_limit_perf;
 	if (max_perf < min_perf)
 		max_perf = min_perf;

@@ -747,7 +747,6 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
 	guard(mutex)(&amd_pstate_driver_lock);
 
 	ret = amd_pstate_cpu_boost_update(policy, state);
-	policy->boost_enabled = !ret ? state : false;
 	refresh_frequency_limits(policy);
 
 	return ret;

@@ -822,25 +821,28 @@ static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
 
 static void amd_pstate_update_limits(unsigned int cpu)
 {
-	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	struct cpufreq_policy *policy = NULL;
 	struct amd_cpudata *cpudata;
 	u32 prev_high = 0, cur_high = 0;
 	int ret;
 	bool highest_perf_changed = false;
 
+	if (!amd_pstate_prefcore)
+		return;
+
+	policy = cpufreq_cpu_get(cpu);
 	if (!policy)
 		return;
 
 	cpudata = policy->driver_data;
 
-	if (!amd_pstate_prefcore)
-		return;
-
 	guard(mutex)(&amd_pstate_driver_lock);
 
 	ret = amd_get_highest_perf(cpu, &cur_high);
-	if (ret)
-		goto free_cpufreq_put;
+	if (ret) {
+		cpufreq_cpu_put(policy);
+		return;
+	}
 
 	prev_high = READ_ONCE(cpudata->prefcore_ranking);
 	highest_perf_changed = (prev_high != cur_high);

@@ -850,8 +852,6 @@ static void amd_pstate_update_limits(unsigned int cpu)
 		if (cur_high < CPPC_MAX_PERF)
 			sched_set_itmt_core_prio((int)cur_high, cpu);
 	}
-
-free_cpufreq_put:
 	cpufreq_cpu_put(policy);
 
 	if (!highest_perf_changed)
[cpufreq core]

@@ -1571,7 +1571,8 @@ static int cpufreq_online(unsigned int cpu)
 		policy->cdev = of_cpufreq_cooling_register(policy);
 
 	/* Let the per-policy boost flag mirror the cpufreq_driver boost during init */
-	if (policy->boost_enabled != cpufreq_boost_enabled()) {
+	if (cpufreq_driver->set_boost &&
+	    policy->boost_enabled != cpufreq_boost_enabled()) {
 		policy->boost_enabled = cpufreq_boost_enabled();
 		ret = cpufreq_driver->set_boost(policy, policy->boost_enabled);
 		if (ret) {
[iscsi_ibft: Kconfig]

@@ -106,7 +106,7 @@ config ISCSI_IBFT
 	select ISCSI_BOOT_SYSFS
 	select ISCSI_IBFT_FIND if X86
 	depends on ACPI && SCSI && SCSI_LOWLEVEL
 	default n
 	help
 	  This option enables support for detection and exposing of iSCSI
 	  Boot Firmware Table (iBFT) via sysfs to userspace. If you wish to
[iscsi_ibft]

@@ -310,7 +310,10 @@ static ssize_t ibft_attr_show_nic(void *data, int type, char *buf)
 		str += sprintf_ipaddr(str, nic->ip_addr);
 		break;
 	case ISCSI_BOOT_ETH_SUBNET_MASK:
-		val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1));
+		if (nic->subnet_mask_prefix > 32)
+			val = cpu_to_be32(~0);
+		else
+			val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1));
 		str += sprintf(str, "%pI4", &val);
 		break;
 	case ISCSI_BOOT_ETH_PREFIX_LEN:
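The guard matters because a prefix longer than 32 makes the shift count negative, which is undefined behaviour in C. The same arithmetic as a standalone sketch (hypothetical user-space helper, not the kernel function; mask returned in host byte order):

#include <assert.h>
#include <stdint.h>

/* Prefix length -> IPv4 netmask, clamping out-of-range prefixes. */
static uint32_t prefix_to_mask(unsigned int prefix)
{
	if (prefix == 0)
		return 0;		/* 1u << 32 would be undefined too */
	if (prefix >= 32)
		return 0xffffffffu;	/* 33+ would shift by a negative count */
	return ~((1u << (32 - prefix)) - 1);
}

int main(void)
{
	assert(prefix_to_mask(24) == 0xffffff00u);	/* 255.255.255.0 */
	assert(prefix_to_mask(16) == 0xffff0000u);	/* 255.255.0.0 */
	assert(prefix_to_mask(33) == 0xffffffffu);	/* clamped, no UB */
	return 0;
}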
[GPIO: Kconfig]

@@ -338,6 +338,7 @@ config GPIO_GRANITERAPIDS
 
 config GPIO_GRGPIO
 	tristate "Aeroflex Gaisler GRGPIO support"
+	depends on OF || COMPILE_TEST
 	select GPIO_GENERIC
 	select IRQ_DOMAIN
 	help
[GPIO: pca953x]

@@ -841,25 +841,6 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, unsigned long *pending)
 	DECLARE_BITMAP(trigger, MAX_LINE);
 	int ret;
 
-	if (chip->driver_data & PCA_PCAL) {
-		/* Read the current interrupt status from the device */
-		ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, trigger);
-		if (ret)
-			return false;
-
-		/* Check latched inputs and clear interrupt status */
-		ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
-		if (ret)
-			return false;
-
-		/* Apply filter for rising/falling edge selection */
-		bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise, cur_stat, gc->ngpio);
-
-		bitmap_and(pending, new_stat, trigger, gc->ngpio);
-
-		return !bitmap_empty(pending, gc->ngpio);
-	}
-
 	ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
 	if (ret)
 		return false;
[GPIO: gpio-sim]

@@ -1028,20 +1028,23 @@ gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
 	struct configfs_subsystem *subsys = dev->group.cg_subsys;
 	struct gpio_sim_bank *bank;
 	struct gpio_sim_line *line;
+	struct config_item *item;
 
 	/*
-	 * The device only needs to depend on leaf line entries. This is
+	 * The device only needs to depend on leaf entries. This is
	 * sufficient to lock up all the configfs entries that the
	 * instantiated, alive device depends on.
	 */
 	list_for_each_entry(bank, &dev->bank_list, siblings) {
 		list_for_each_entry(line, &bank->line_list, siblings) {
+			item = line->hog ? &line->hog->item
+					 : &line->group.cg_item;
+
 			if (lock)
-				WARN_ON(configfs_depend_item_unlocked(
-						subsys, &line->group.cg_item));
+				WARN_ON(configfs_depend_item_unlocked(subsys,
+								      item));
 			else
-				configfs_undepend_item_unlocked(
-						&line->group.cg_item);
+				configfs_undepend_item_unlocked(item);
 		}
 	}
 }
[amdgpu: driver version]

@@ -119,9 +119,10 @@
  * - 3.57.0 - Compute tunneling on GFX10+
  * - 3.58.0 - Add GFX12 DCC support
  * - 3.59.0 - Cleared VRAM
+ * - 3.60.0 - Add AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE (Vulkan requirement)
  */
 #define KMS_DRIVER_MAJOR	3
-#define KMS_DRIVER_MINOR	59
+#define KMS_DRIVER_MINOR	60
 #define KMS_DRIVER_PATCHLEVEL	0
 
 /*
[amdgpu: TTM copy]

@@ -309,7 +309,7 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 	mutex_lock(&adev->mman.gtt_window_lock);
 	while (src_mm.remaining) {
 		uint64_t from, to, cur_size, tiling_flags;
-		uint32_t num_type, data_format, max_com;
+		uint32_t num_type, data_format, max_com, write_compress_disable;
 		struct dma_fence *next;
 
 		/* Never copy more than 256MiB at once to avoid a timeout */

@@ -340,9 +340,13 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 			max_com = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_MAX_COMPRESSED_BLOCK);
 			num_type = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_NUMBER_TYPE);
 			data_format = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_DATA_FORMAT);
+			write_compress_disable =
+				AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_WRITE_COMPRESS_DISABLE);
 			copy_flags |= (AMDGPU_COPY_FLAGS_SET(MAX_COMPRESSED, max_com) |
 				       AMDGPU_COPY_FLAGS_SET(NUMBER_TYPE, num_type) |
-				       AMDGPU_COPY_FLAGS_SET(DATA_FORMAT, data_format));
+				       AMDGPU_COPY_FLAGS_SET(DATA_FORMAT, data_format) |
+				       AMDGPU_COPY_FLAGS_SET(WRITE_COMPRESS_DISABLE,
+							     write_compress_disable));
 		}
 
 		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
@@ -119,6 +119,8 @@ struct amdgpu_copy_mem {
 #define AMDGPU_COPY_FLAGS_NUMBER_TYPE_MASK		0x07
 #define AMDGPU_COPY_FLAGS_DATA_FORMAT_SHIFT		8
 #define AMDGPU_COPY_FLAGS_DATA_FORMAT_MASK		0x3f
+#define AMDGPU_COPY_FLAGS_WRITE_COMPRESS_DISABLE_SHIFT	14
+#define AMDGPU_COPY_FLAGS_WRITE_COMPRESS_DISABLE_MASK	0x1
 
 #define AMDGPU_COPY_FLAGS_SET(field, value) \
 	(((__u32)(value) & AMDGPU_COPY_FLAGS_##field##_MASK) << AMDGPU_COPY_FLAGS_##field##_SHIFT)
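
The SET/GET pair above is a generic shift-and-mask field packer: each field is defined by a SHIFT/MASK constant pair and the macros paste the field name into those constant names. A minimal standalone sketch of the same pattern (the names here are illustrative, not the driver's):

#include <stdint.h>
#include <stdio.h>

/* Each field is described by a shift and a mask, as in the header above. */
#define FLAGS_DATA_FORMAT_SHIFT 8
#define FLAGS_DATA_FORMAT_MASK  0x3f

#define FLAGS_SET(field, value) \
    (((uint32_t)(value) & FLAGS_##field##_MASK) << FLAGS_##field##_SHIFT)
#define FLAGS_GET(flags, field) \
    (((uint32_t)(flags) >> FLAGS_##field##_SHIFT) & FLAGS_##field##_MASK)

int main(void)
{
    uint32_t flags = FLAGS_SET(DATA_FORMAT, 0x2a);

    /* Prints 2a: the value survives the pack/unpack round trip. */
    printf("%x\n", FLAGS_GET(flags, DATA_FORMAT));
    return 0;
}

The sdma code below uses exactly this GET side to unpack the flags it was handed.
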
@@ -1741,11 +1741,12 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
 				       uint32_t byte_count,
 				       uint32_t copy_flags)
 {
-	uint32_t num_type, data_format, max_com;
+	uint32_t num_type, data_format, max_com, write_cm;
 
 	max_com = AMDGPU_COPY_FLAGS_GET(copy_flags, MAX_COMPRESSED);
 	data_format = AMDGPU_COPY_FLAGS_GET(copy_flags, DATA_FORMAT);
 	num_type = AMDGPU_COPY_FLAGS_GET(copy_flags, NUMBER_TYPE);
+	write_cm = AMDGPU_COPY_FLAGS_GET(copy_flags, WRITE_COMPRESS_DISABLE) ? 2 : 1;
 
 	ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) |
 		SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR) |
@@ -1762,7 +1763,7 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
 	if ((copy_flags & (AMDGPU_COPY_FLAGS_READ_DECOMPRESSED | AMDGPU_COPY_FLAGS_WRITE_COMPRESSED)))
 		ib->ptr[ib->length_dw++] = SDMA_DCC_DATA_FORMAT(data_format) | SDMA_DCC_NUM_TYPE(num_type) |
 			((copy_flags & AMDGPU_COPY_FLAGS_READ_DECOMPRESSED) ? SDMA_DCC_READ_CM(2) : 0) |
-			((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(1) : 0) |
+			((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(write_cm) : 0) |
 			SDMA_DCC_MAX_COM(max_com) | SDMA_DCC_MAX_UCOM(1);
 	else
 		ib->ptr[ib->length_dw++] = 0;
@@ -2133,7 +2133,7 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
 
 	dc_enable_stereo(dc, context, dc_streams, context->stream_count);
 
-	if (context->stream_count > get_seamless_boot_stream_count(context) ||
+	if (get_seamless_boot_stream_count(context) == 0 ||
 	    context->stream_count == 0) {
 		/* Must wait for no flips to be pending before doing optimize bw */
 		hwss_wait_for_no_pipes_pending(dc, context);
@@ -63,8 +63,7 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
 
 bool should_use_dmub_lock(struct dc_link *link)
 {
-	if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 ||
-	    link->psr_settings.psr_version == DC_PSR_VERSION_1)
+	if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
 		return true;
 
 	if (link->replay_settings.replay_feature_enabled)
@@ -29,11 +29,15 @@ dml_ccflags := $(CC_FLAGS_FPU)
 dml_rcflags := $(CC_FLAGS_NO_FPU)
 
 ifneq ($(CONFIG_FRAME_WARN),0)
 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
-frame_warn_flag := -Wframe-larger-than=3072
+frame_warn_limit := 3072
 else
-frame_warn_flag := -Wframe-larger-than=2048
+frame_warn_limit := 2048
 endif
+
+ifeq ($(call test-lt, $(CONFIG_FRAME_WARN), $(frame_warn_limit)),y)
+frame_warn_flag := -Wframe-larger-than=$(frame_warn_limit)
+endif
 endif
 
 CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_ccflags)
@@ -28,15 +28,19 @@ dml2_ccflags := $(CC_FLAGS_FPU)
 dml2_rcflags := $(CC_FLAGS_NO_FPU)
 
 ifneq ($(CONFIG_FRAME_WARN),0)
 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
 ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy)
-frame_warn_flag := -Wframe-larger-than=4096
+frame_warn_limit := 4096
 else
-frame_warn_flag := -Wframe-larger-than=3072
+frame_warn_limit := 3072
 endif
 else
-frame_warn_flag := -Wframe-larger-than=2048
+frame_warn_limit := 2048
 endif
+
+ifeq ($(call test-lt, $(CONFIG_FRAME_WARN), $(frame_warn_limit)),y)
+frame_warn_flag := -Wframe-larger-than=$(frame_warn_limit)
+endif
 endif
 
 subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/dc/dml2
@@ -1017,7 +1017,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s
 		if (disp_cfg_stream_location < 0)
 			disp_cfg_stream_location = dml_dispcfg->num_streams++;
 
-		ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+		ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
 		populate_dml21_timing_config_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].timing, context->streams[stream_index], dml_ctx);
 		adjust_dml21_hblank_timing_config_from_pipe_ctx(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].timing, &context->res_ctx.pipe_ctx[stream_index]);
 		populate_dml21_output_config_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].output, context->streams[stream_index], &context->res_ctx.pipe_ctx[stream_index]);
@@ -1042,7 +1042,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s
 			if (disp_cfg_plane_location < 0)
 				disp_cfg_plane_location = dml_dispcfg->num_planes++;
 
-			ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+			ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
 
 			populate_dml21_surface_config_from_plane_state(in_dc, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location].surface, context->stream_status[stream_index].plane_states[plane_index]);
 			populate_dml21_plane_config_from_plane_state(dml_ctx, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location], context->stream_status[stream_index].plane_states[plane_index], context, stream_index);
@@ -786,7 +786,7 @@ static void populate_dml_output_cfg_from_stream_state(struct dml_output_cfg_st *
 	case SIGNAL_TYPE_DISPLAY_PORT_MST:
 	case SIGNAL_TYPE_DISPLAY_PORT:
 		out->OutputEncoder[location] = dml_dp;
-		if (dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location] != -1)
+		if (location < MAX_HPO_DP2_ENCODERS && dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location] != -1)
 			out->OutputEncoder[dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location]] = dml_dp2p0;
 		break;
 	case SIGNAL_TYPE_EDP:
@@ -1343,7 +1343,7 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
 		if (disp_cfg_stream_location < 0)
 			disp_cfg_stream_location = dml_dispcfg->num_timings++;
 
-		ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+		ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
 
 		populate_dml_timing_cfg_from_stream_state(&dml_dispcfg->timing, disp_cfg_stream_location, context->streams[i]);
 		populate_dml_output_cfg_from_stream_state(&dml_dispcfg->output, disp_cfg_stream_location, context->streams[i], current_pipe_context, dml2);
@@ -1383,7 +1383,7 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
 		if (disp_cfg_plane_location < 0)
 			disp_cfg_plane_location = dml_dispcfg->num_surfaces++;
 
-		ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+		ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
 
 		populate_dml_surface_cfg_from_plane_state(dml2->v20.dml_core_ctx.project, &dml_dispcfg->surface, disp_cfg_plane_location, context->stream_status[i].plane_states[j]);
 		populate_dml_plane_cfg_from_plane_state(
@@ -129,7 +129,8 @@ bool hubbub3_program_watermarks(
 	REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
 			DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);
 
-	hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+	if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
+		hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
 
 	return wm_pending;
 }
@@ -750,7 +750,8 @@ static bool hubbub31_program_watermarks(
 	REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
 			DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
 
-	hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+	if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
+		hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
 	return wm_pending;
 }
 
@@ -786,7 +786,8 @@ static bool hubbub32_program_watermarks(
 	REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
 			DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
 
-	hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+	if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
+		hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
 
 	hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
 
@@ -326,7 +326,8 @@ static bool hubbub35_program_watermarks(
 			DCHUBBUB_ARB_MIN_REQ_OUTSTAND_COMMIT_THRESHOLD, 0xA);/*hw delta*/
 	REG_UPDATE(DCHUBBUB_ARB_HOSTVM_CNTL, DCHUBBUB_ARB_MAX_QOS_COMMIT_THRESHOLD, 0xF);
 
-	hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+	if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
+		hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
 
 	hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
 
@@ -500,6 +500,8 @@ void hubp3_init(struct hubp *hubp)
 	//hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
 	REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
 
+	REG_UPDATE(DCHUBP_CNTL, HUBP_TTU_DISABLE, 0);
+
 	hubp_reset(hubp);
 }
 
@@ -168,6 +168,8 @@ void hubp32_init(struct hubp *hubp)
 {
 	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
 	REG_WRITE(HUBPREQ_DEBUG_DB, 1 << 8);
+
+	REG_UPDATE(DCHUBP_CNTL, HUBP_TTU_DISABLE, 0);
 }
 static struct hubp_funcs dcn32_hubp_funcs = {
 	.hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
@@ -236,7 +236,8 @@ void dcn35_init_hw(struct dc *dc)
 	}
 
 	hws->funcs.init_pipes(dc, dc->current_state);
-	if (dc->res_pool->hubbub->funcs->allow_self_refresh_control)
+	if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
+	    !dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
 		dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
 				!dc->res_pool->hubbub->ctx->dc->debug.disable_stutter);
 }
@@ -160,6 +160,10 @@ static int komeda_wb_connector_add(struct komeda_kms_dev *kms,
 	formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
 					       kwb_conn->wb_layer->layer_type,
 					       &n_formats);
+	if (!formats) {
+		kfree(kwb_conn);
+		return -ENOMEM;
+	}
 
 	err = drm_writeback_connector_init(&kms->base, wb_conn,
 					   &komeda_wb_connector_funcs,
@@ -195,7 +195,7 @@ static bool __ast_dp_wait_enable(struct ast_device *ast, bool enabled)
 	if (enabled)
 		vgacrdf_test |= AST_IO_VGACRDF_DP_VIDEO_ENABLE;
 
-	for (i = 0; i < 200; ++i) {
+	for (i = 0; i < 1000; ++i) {
 		if (i)
 			mdelay(1);
 		vgacrdf = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xdf,
@@ -311,16 +311,6 @@ void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
 	if (!aux->transfer)
 		return;
 
-#ifndef CONFIG_MEDIA_CEC_RC
-	/*
-	 * CEC_CAP_RC is part of CEC_CAP_DEFAULTS, but it is stripped by
-	 * cec_allocate_adapter() if CONFIG_MEDIA_CEC_RC is undefined.
-	 *
-	 * Do this here as well to ensure the tests against cec_caps are
-	 * correct.
-	 */
-	cec_caps &= ~CEC_CAP_RC;
-#endif
 	cancel_delayed_work_sync(&aux->cec.unregister_work);
 
 	mutex_lock(&aux->cec.lock);
@@ -337,7 +327,9 @@ void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
 		num_las = CEC_MAX_LOG_ADDRS;
 
 	if (aux->cec.adap) {
-		if (aux->cec.adap->capabilities == cec_caps &&
+		/* Check if the adapter properties have changed */
+		if ((aux->cec.adap->capabilities & CEC_CAP_MONITOR_ALL) ==
+		    (cec_caps & CEC_CAP_MONITOR_ALL) &&
 		    aux->cec.adap->available_log_addrs == num_las) {
 			/* Unchanged, so just set the phys addr */
 			cec_s_phys_addr(aux->cec.adap, source_physical_address, false);
@@ -41,8 +41,9 @@ static u32 scale(u32 source_val,
 {
 	u64 target_val;
 
-	WARN_ON(source_min > source_max);
-	WARN_ON(target_min > target_max);
+	if (WARN_ON(source_min >= source_max) ||
+	    WARN_ON(target_min > target_max))
+		return target_min;
 
 	/* defensive */
 	source_val = clamp(source_val, source_min, source_max);
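
For reference, scale() is a linear remap of source_val from [source_min, source_max] onto [target_min, target_max]; the new guard exists because a zero-width source range would otherwise divide by zero. A standalone sketch of the arithmetic with the same degenerate-range bail-out (illustrative names, not the i915 code):

#include <stdint.h>

/* Remap val from [smin, smax] onto [tmin, tmax], rounding to nearest. */
static uint32_t scale_range(uint32_t val, uint32_t smin, uint32_t smax,
                            uint32_t tmin, uint32_t tmax)
{
    uint64_t num;

    /* A zero-width source range would divide by zero below. */
    if (smin >= smax || tmin > tmax)
        return tmin;

    if (val < smin)
        val = smin;
    if (val > smax)
        val = smax;

    num = (uint64_t)(val - smin) * (tmax - tmin);
    return tmin + (uint32_t)((num + (smax - smin) / 2) / (smax - smin));
}
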
@@ -1791,7 +1791,7 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
 	if (DISPLAY_VER(display) == 11)
 		return 10;
 
-	return 0;
+	return intel_dp_dsc_min_src_input_bpc();
 }
 
 int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
@@ -2072,11 +2072,10 @@ icl_dsc_compute_link_config(struct intel_dp *intel_dp,
 	/* Compressed BPP should be less than the Input DSC bpp */
 	dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);
 
-	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
-		if (valid_dsc_bpp[i] < dsc_min_bpp)
+	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
+		if (valid_dsc_bpp[i] < dsc_min_bpp ||
+		    valid_dsc_bpp[i] > dsc_max_bpp)
 			continue;
-		if (valid_dsc_bpp[i] > dsc_max_bpp)
-			break;
 
 		ret = dsc_compute_link_config(intel_dp,
 					      pipe_config,
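
The loop direction flip above matters: walking an ascending table from the top means the first in-range entry is the largest usable one. A hedged standalone sketch of that selection pattern (the array contents are made up):

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    /* Ascending table, as with valid_dsc_bpp[]. */
    static const int valid_bpp[] = { 6, 8, 10, 12, 15 };
    int min_bpp = 8, max_bpp = 13, i;

    /* Walk downwards so the first hit is the largest valid value. */
    for (i = ARRAY_SIZE(valid_bpp) - 1; i >= 0; i--) {
        if (valid_bpp[i] < min_bpp || valid_bpp[i] > max_bpp)
            continue;
        printf("picked %d bpp\n", valid_bpp[i]); /* prints 12 */
        break;
    }
    return 0;
}
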
@@ -2829,7 +2828,6 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
 
 	crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_ADAPTIVE_SYNC);
 
-	/* Currently only DP_AS_SDP_AVT_FIXED_VTOTAL mode supported */
 	as_sdp->sdp_type = DP_SDP_ADAPTIVE_SYNC;
 	as_sdp->length = 0x9;
 	as_sdp->duration_incr_ms = 0;
@@ -2840,7 +2838,7 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
 		as_sdp->target_rr = drm_mode_vrefresh(adjusted_mode);
 		as_sdp->target_rr_divider = true;
 	} else {
-		as_sdp->mode = DP_AS_SDP_AVT_FIXED_VTOTAL;
+		as_sdp->mode = DP_AS_SDP_AVT_DYNAMIC_VTOTAL;
 		as_sdp->vtotal = adjusted_mode->vtotal;
 		as_sdp->target_rr = 0;
 	}
@@ -341,6 +341,10 @@ int intel_dp_mtp_tu_compute_config(struct intel_dp *intel_dp,
 
 			break;
 		}
+
+		/* Allow using zero step to indicate one try */
+		if (!step)
+			break;
 	}
 
 	if (slots < 0) {
@@ -41,7 +41,7 @@ intel_hdcp_adjust_hdcp_line_rekeying(struct intel_encoder *encoder,
 	u32 rekey_bit = 0;
 
 	/* Here we assume HDMI is in TMDS mode of operation */
-	if (encoder->type != INTEL_OUTPUT_HDMI)
+	if (!intel_encoder_is_hdmi(encoder))
 		return;
 
 	if (DISPLAY_VER(display) >= 30) {
@@ -2188,6 +2188,19 @@ static int intel_hdcp2_check_link(struct intel_connector *connector)
 
 		drm_dbg_kms(display->drm,
 			    "HDCP2.2 Downstream topology change\n");
+
+		ret = hdcp2_authenticate_repeater_topology(connector);
+		if (!ret) {
+			intel_hdcp_update_value(connector,
+						DRM_MODE_CONTENT_PROTECTION_ENABLED,
+						true);
+			goto out;
+		}
+
+		drm_dbg_kms(display->drm,
+			    "[CONNECTOR:%d:%s] Repeater topology auth failed.(%d)\n",
+			    connector->base.base.id, connector->base.name,
+			    ret);
 	} else {
 		drm_dbg_kms(display->drm,
 			    "[CONNECTOR:%d:%s] HDCP2.2 link failed, retrying auth\n",
@@ -106,8 +106,6 @@ static const u32 icl_sdr_y_plane_formats[] = {
 	DRM_FORMAT_Y216,
 	DRM_FORMAT_XYUV8888,
 	DRM_FORMAT_XVYU2101010,
-	DRM_FORMAT_XVYU12_16161616,
-	DRM_FORMAT_XVYU16161616,
 };
 
 static const u32 icl_sdr_uv_plane_formats[] = {
@@ -134,8 +132,6 @@ static const u32 icl_sdr_uv_plane_formats[] = {
 	DRM_FORMAT_Y216,
 	DRM_FORMAT_XYUV8888,
 	DRM_FORMAT_XVYU2101010,
-	DRM_FORMAT_XVYU12_16161616,
-	DRM_FORMAT_XVYU16161616,
 };
 
 static const u32 icl_hdr_plane_formats[] = {
@@ -209,8 +209,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
 	struct address_space *mapping = obj->base.filp->f_mapping;
 	unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
 	struct sg_table *st;
-	struct sgt_iter sgt_iter;
-	struct page *page;
 	int ret;
 
 	/*
@@ -239,9 +237,7 @@ rebuild_st:
 	 * for PAGE_SIZE chunks instead may be helpful.
 	 */
 	if (max_segment > PAGE_SIZE) {
-		for_each_sgt_page(page, sgt_iter, st)
-			put_page(page);
-		sg_free_table(st);
+		shmem_sg_free_table(st, mapping, false, false);
 		kfree(st);
 
 		max_segment = PAGE_SIZE;
@@ -1469,6 +1469,19 @@ static void __reset_guc_busyness_stats(struct intel_guc *guc)
 	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
 }
 
+static void __update_guc_busyness_running_state(struct intel_guc *guc)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	unsigned long flags;
+
+	spin_lock_irqsave(&guc->timestamp.lock, flags);
+	for_each_engine(engine, gt, id)
+		engine->stats.guc.running = false;
+	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
+}
+
 static void __update_guc_busyness_stats(struct intel_guc *guc)
 {
 	struct intel_gt *gt = guc_to_gt(guc);
@@ -1619,6 +1632,9 @@ void intel_guc_busyness_park(struct intel_gt *gt)
 	if (!guc_submission_initialized(guc))
 		return;
 
+	/* Assume no engines are running and set running state to false */
+	__update_guc_busyness_running_state(guc);
+
 	/*
 	 * There is a race with suspend flow where the worker runs after suspend
 	 * and causes an unclaimed register access warning. Cancel the worker
@@ -5519,12 +5535,20 @@ static inline void guc_log_context(struct drm_printer *p,
 {
 	drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id.id);
 	drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca);
-	drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
-		   ce->ring->head,
-		   ce->lrc_reg_state[CTX_RING_HEAD]);
-	drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
-		   ce->ring->tail,
-		   ce->lrc_reg_state[CTX_RING_TAIL]);
+	if (intel_context_pin_if_active(ce)) {
+		drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
+			   ce->ring->head,
+			   ce->lrc_reg_state[CTX_RING_HEAD]);
+		drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
+			   ce->ring->tail,
+			   ce->lrc_reg_state[CTX_RING_TAIL]);
+		intel_context_unpin(ce);
+	} else {
+		drm_printf(p, "\t\tLRC Head: Internal %u, Memory not pinned\n",
+			   ce->ring->head);
+		drm_printf(p, "\t\tLRC Tail: Internal %u, Memory not pinned\n",
+			   ce->ring->tail);
+	}
 	drm_printf(p, "\t\tContext Pin Count: %u\n",
 		   atomic_read(&ce->pin_count));
 	drm_printf(p, "\t\tGuC ID Ref Count: %u\n",
@@ -51,6 +51,10 @@
 /* Common to all OA units */
 #define  OA_OACONTROL_REPORT_BC_MASK		REG_GENMASK(9, 9)
 #define  OA_OACONTROL_COUNTER_SIZE_MASK		REG_GENMASK(8, 8)
+#define OAG_OACONTROL_USED_BITS \
+	(OAG_OACONTROL_OA_PES_DISAG_EN | OAG_OACONTROL_OA_CCS_SELECT_MASK | \
+	 OAG_OACONTROL_OA_COUNTER_SEL_MASK | OAG_OACONTROL_OA_COUNTER_ENABLE | \
+	 OA_OACONTROL_REPORT_BC_MASK | OA_OACONTROL_COUNTER_SIZE_MASK)
 
 #define OAG_OA_DEBUG XE_REG(0xdaf8, XE_REG_OPTION_MASKED)
 #define  OAG_OA_DEBUG_DISABLE_MMIO_TRG		REG_BIT(14)
@@ -78,6 +82,8 @@
 #define OAM_CONTEXT_CONTROL_OFFSET		(0x1bc)
 #define OAM_CONTROL_OFFSET			(0x194)
 #define  OAM_CONTROL_COUNTER_SEL_MASK		REG_GENMASK(3, 1)
+#define OAM_OACONTROL_USED_BITS \
+	(OAM_CONTROL_COUNTER_SEL_MASK | OAG_OACONTROL_OA_COUNTER_ENABLE)
 #define OAM_DEBUG_OFFSET			(0x198)
 #define OAM_STATUS_OFFSET			(0x19c)
 #define OAM_MMIO_TRG_OFFSET			(0x1d0)
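
These USED_BITS masks feed a read-modify-write further down: only the bits the OA unit owns get touched, and everything else in the shared register is preserved. A minimal standalone sketch of that rmw semantic (plain C, not the xe_mmio API):

#include <stdint.h>
#include <assert.h>

/* new_reg = (old_reg & ~mask) | (val & mask): update only the owned bits. */
static uint32_t rmw32(uint32_t old_reg, uint32_t mask, uint32_t val)
{
    return (old_reg & ~mask) | (val & mask);
}

int main(void)
{
    uint32_t reg = 0xffff0000;          /* bits another agent may own */
    uint32_t used = 0x000003ff;         /* bits this driver owns */

    reg = rmw32(reg, used, 0x00000155); /* write through the mask */
    assert(reg == 0xffff0155);          /* foreign bits untouched */

    reg = rmw32(reg, used, 0);          /* "disable": clear only our bits */
    assert(reg == 0xffff0000);
    return 0;
}
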
@@ -119,11 +119,7 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
 	drm_puts(&p, "\n**** GuC CT ****\n");
 	xe_guc_ct_snapshot_print(ss->guc.ct, &p);
 
-	/*
-	 * Don't add a new section header here because the mesa debug decoder
-	 * tool expects the context information to be in the 'GuC CT' section.
-	 */
-	/* drm_puts(&p, "\n**** Contexts ****\n"); */
+	drm_puts(&p, "\n**** Contexts ****\n");
 	xe_guc_exec_queue_snapshot_print(ss->ge, &p);
 
 	drm_puts(&p, "\n**** Job ****\n");
@ -395,42 +391,34 @@ int xe_devcoredump_init(struct xe_device *xe)
|
||||||
/**
|
/**
|
||||||
* xe_print_blob_ascii85 - print a BLOB to some useful location in ASCII85
|
* xe_print_blob_ascii85 - print a BLOB to some useful location in ASCII85
|
||||||
*
|
*
|
||||||
* The output is split to multiple lines because some print targets, e.g. dmesg
|
* The output is split into multiple calls to drm_puts() because some print
|
||||||
* cannot handle arbitrarily long lines. Note also that printing to dmesg in
|
* targets, e.g. dmesg, cannot handle arbitrarily long lines. These targets may
|
||||||
* piece-meal fashion is not possible, each separate call to drm_puts() has a
|
* add newlines, as is the case with dmesg: each drm_puts() call creates a
|
||||||
* line-feed automatically added! Therefore, the entire output line must be
|
* separate line.
|
||||||
* constructed in a local buffer first, then printed in one atomic output call.
|
|
||||||
*
|
*
|
||||||
* There is also a scheduler yield call to prevent the 'task has been stuck for
|
* There is also a scheduler yield call to prevent the 'task has been stuck for
|
||||||
* 120s' kernel hang check feature from firing when printing to a slow target
|
* 120s' kernel hang check feature from firing when printing to a slow target
|
||||||
* such as dmesg over a serial port.
|
* such as dmesg over a serial port.
|
||||||
*
|
*
|
||||||
* TODO: Add compression prior to the ASCII85 encoding to shrink huge buffers down.
|
|
||||||
*
|
|
||||||
* @p: the printer object to output to
|
* @p: the printer object to output to
|
||||||
* @prefix: optional prefix to add to output string
|
* @prefix: optional prefix to add to output string
|
||||||
|
* @suffix: optional suffix to add at the end. 0 disables it and is
|
||||||
|
* not added to the output, which is useful when using multiple calls
|
||||||
|
* to dump data to @p
|
||||||
* @blob: the Binary Large OBject to dump out
|
* @blob: the Binary Large OBject to dump out
|
||||||
* @offset: offset in bytes to skip from the front of the BLOB, must be a multiple of sizeof(u32)
|
* @offset: offset in bytes to skip from the front of the BLOB, must be a multiple of sizeof(u32)
|
||||||
* @size: the size in bytes of the BLOB, must be a multiple of sizeof(u32)
|
* @size: the size in bytes of the BLOB, must be a multiple of sizeof(u32)
|
||||||
*/
|
*/
|
||||||
void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
|
void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix, char suffix,
|
||||||
const void *blob, size_t offset, size_t size)
|
const void *blob, size_t offset, size_t size)
|
||||||
{
|
{
|
||||||
const u32 *blob32 = (const u32 *)blob;
|
const u32 *blob32 = (const u32 *)blob;
|
||||||
char buff[ASCII85_BUFSZ], *line_buff;
|
char buff[ASCII85_BUFSZ], *line_buff;
|
||||||
size_t line_pos = 0;
|
size_t line_pos = 0;
|
||||||
|
|
||||||
/*
|
|
||||||
* Splitting blobs across multiple lines is not compatible with the mesa
|
|
||||||
* debug decoder tool. Note that even dropping the explicit '\n' below
|
|
||||||
* doesn't help because the GuC log is so big some underlying implementation
|
|
||||||
* still splits the lines at 512K characters. So just bail completely for
|
|
||||||
* the moment.
|
|
||||||
*/
|
|
||||||
return;
|
|
||||||
|
|
||||||
#define DMESG_MAX_LINE_LEN 800
|
#define DMESG_MAX_LINE_LEN 800
|
||||||
#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
|
/* Always leave space for the suffix char and the \0 */
|
||||||
|
#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "<suffix>\0" */
|
||||||
|
|
||||||
if (size & 3)
|
if (size & 3)
|
||||||
drm_printf(p, "Size not word aligned: %zu", size);
|
drm_printf(p, "Size not word aligned: %zu", size);
|
||||||
|
@@ -462,7 +450,6 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
 		line_pos += strlen(line_buff + line_pos);
 
 		if ((line_pos + MIN_SPACE) >= DMESG_MAX_LINE_LEN) {
-			line_buff[line_pos++] = '\n';
 			line_buff[line_pos++] = 0;
 
 			drm_puts(p, line_buff);
@@ -474,10 +461,11 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
 		}
 	}
 
-	if (line_pos) {
-		line_buff[line_pos++] = '\n';
-		line_buff[line_pos++] = 0;
+	if (suffix)
+		line_buff[line_pos++] = suffix;
 
+	if (line_pos) {
+		line_buff[line_pos++] = 0;
 		drm_puts(p, line_buff);
 	}
 
@@ -29,7 +29,7 @@ static inline int xe_devcoredump_init(struct xe_device *xe)
 }
 #endif
 
-void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix, char suffix,
 			   const void *blob, size_t offset, size_t size);
 
 #endif
@@ -532,8 +532,10 @@ static int all_fw_domain_init(struct xe_gt *gt)
 	if (IS_SRIOV_PF(gt_to_xe(gt)) && !xe_gt_is_media_type(gt))
 		xe_lmtt_init_hw(&gt_to_tile(gt)->sriov.pf.lmtt);
 
-	if (IS_SRIOV_PF(gt_to_xe(gt)))
+	if (IS_SRIOV_PF(gt_to_xe(gt))) {
+		xe_gt_sriov_pf_init(gt);
 		xe_gt_sriov_pf_init_hw(gt);
+	}
 
 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 
@@ -68,6 +68,19 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
 	return 0;
 }
 
+/**
+ * xe_gt_sriov_pf_init - Prepare SR-IOV PF data structures on PF.
+ * @gt: the &xe_gt to initialize
+ *
+ * Late one-time initialization of the PF data.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_sriov_pf_init(struct xe_gt *gt)
+{
+	return xe_gt_sriov_pf_migration_init(gt);
+}
+
 static bool pf_needs_enable_ggtt_guest_update(struct xe_device *xe)
 {
 	return GRAPHICS_VERx100(xe) == 1200;
@@ -90,7 +103,6 @@ void xe_gt_sriov_pf_init_hw(struct xe_gt *gt)
 		pf_enable_ggtt_guest_update(gt);
 
 	xe_gt_sriov_pf_service_update(gt);
-	xe_gt_sriov_pf_migration_init(gt);
 }
 
 static u32 pf_get_vf_regs_stride(struct xe_device *xe)
@@ -10,6 +10,7 @@ struct xe_gt;
 
 #ifdef CONFIG_PCI_IOV
 int xe_gt_sriov_pf_init_early(struct xe_gt *gt);
+int xe_gt_sriov_pf_init(struct xe_gt *gt);
 void xe_gt_sriov_pf_init_hw(struct xe_gt *gt);
 void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid);
 void xe_gt_sriov_pf_restart(struct xe_gt *gt);
@@ -19,6 +20,11 @@ static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
 	return 0;
 }
 
+static inline int xe_gt_sriov_pf_init(struct xe_gt *gt)
+{
+	return 0;
+}
+
 static inline void xe_gt_sriov_pf_init_hw(struct xe_gt *gt)
 {
 }
@@ -1724,7 +1724,8 @@ void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot,
 			   snapshot->g2h_outstanding);
 
 		if (snapshot->ctb)
-			xe_print_blob_ascii85(p, "CTB data", snapshot->ctb, 0, snapshot->ctb_size);
+			xe_print_blob_ascii85(p, "CTB data", '\n',
+					      snapshot->ctb, 0, snapshot->ctb_size);
 	} else {
 		drm_puts(p, "CT disabled\n");
 	}
@@ -211,8 +211,10 @@ void xe_guc_log_snapshot_print(struct xe_guc_log_snapshot *snapshot, struct drm_
 	remain = snapshot->size;
 	for (i = 0; i < snapshot->num_chunks; i++) {
 		size_t size = min(GUC_LOG_CHUNK_SIZE, remain);
+		const char *prefix = i ? NULL : "Log data";
+		char suffix = i == snapshot->num_chunks - 1 ? '\n' : 0;
 
-		xe_print_blob_ascii85(p, i ? NULL : "Log data", snapshot->copy[i], 0, size);
+		xe_print_blob_ascii85(p, prefix, suffix, snapshot->copy[i], 0, size);
 		remain -= size;
 	}
 }
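
The prefix/suffix dance above makes N chunks come out as one logical record: the prefix only on the first chunk, the terminating newline only after the last. A standalone sketch of the same pattern using plain stdio instead of a drm_printer:

#include <stdio.h>

/* Emit chunks as one logical record: prefix on the first, '\n' on the last. */
static void dump_chunks(const char *const *chunk, int num_chunks)
{
    for (int i = 0; i < num_chunks; i++) {
        const char *prefix = i ? NULL : "Log data: ";
        char suffix = (i == num_chunks - 1) ? '\n' : 0;

        if (prefix)
            fputs(prefix, stdout);
        fputs(chunk[i], stdout);
        if (suffix)
            fputc(suffix, stdout);
    }
}

int main(void)
{
    const char *chunks[] = { "abcd", "efgh", "ijkl" };

    dump_chunks(chunks, 3); /* prints "Log data: abcdefghijkl\n" */
    return 0;
}
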
@@ -237,7 +237,6 @@ static bool xe_oa_buffer_check_unlocked(struct xe_oa_stream *stream)
 	u32 tail, hw_tail, partial_report_size, available;
 	int report_size = stream->oa_buffer.format->size;
 	unsigned long flags;
-	bool pollin;
 
 	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);
 
@@ -282,11 +281,11 @@ static bool xe_oa_buffer_check_unlocked(struct xe_oa_stream *stream)
 	stream->oa_buffer.tail = tail;
 
 	available = xe_oa_circ_diff(stream, stream->oa_buffer.tail, stream->oa_buffer.head);
-	pollin = available >= stream->wait_num_reports * report_size;
+	stream->pollin = available >= stream->wait_num_reports * report_size;
 
 	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
 
-	return pollin;
+	return stream->pollin;
 }
 
 static enum hrtimer_restart xe_oa_poll_check_timer_cb(struct hrtimer *hrtimer)
@@ -294,10 +293,8 @@ static enum hrtimer_restart xe_oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 	struct xe_oa_stream *stream =
 		container_of(hrtimer, typeof(*stream), poll_check_timer);
 
-	if (xe_oa_buffer_check_unlocked(stream)) {
-		stream->pollin = true;
+	if (xe_oa_buffer_check_unlocked(stream))
 		wake_up(&stream->poll_wq);
-	}
 
 	hrtimer_forward_now(hrtimer, ns_to_ktime(stream->poll_period_ns));
 
@@ -452,6 +449,12 @@ static u32 __oa_ccs_select(struct xe_oa_stream *stream)
 	return val;
 }
 
+static u32 __oactrl_used_bits(struct xe_oa_stream *stream)
+{
+	return stream->hwe->oa_unit->type == DRM_XE_OA_UNIT_TYPE_OAG ?
+		OAG_OACONTROL_USED_BITS : OAM_OACONTROL_USED_BITS;
+}
+
 static void xe_oa_enable(struct xe_oa_stream *stream)
 {
 	const struct xe_oa_format *format = stream->oa_buffer.format;
@@ -472,14 +475,14 @@ static void xe_oa_enable(struct xe_oa_stream *stream)
 	    stream->hwe->oa_unit->type == DRM_XE_OA_UNIT_TYPE_OAG)
 		val |= OAG_OACONTROL_OA_PES_DISAG_EN;
 
-	xe_mmio_write32(&stream->gt->mmio, regs->oa_ctrl, val);
+	xe_mmio_rmw32(&stream->gt->mmio, regs->oa_ctrl, __oactrl_used_bits(stream), val);
 }
 
 static void xe_oa_disable(struct xe_oa_stream *stream)
 {
 	struct xe_mmio *mmio = &stream->gt->mmio;
 
-	xe_mmio_write32(mmio, __oa_regs(stream)->oa_ctrl, 0);
+	xe_mmio_rmw32(mmio, __oa_regs(stream)->oa_ctrl, __oactrl_used_bits(stream), 0);
 	if (xe_mmio_wait32(mmio, __oa_regs(stream)->oa_ctrl,
 			   OAG_OACONTROL_OA_COUNTER_ENABLE, 0, 50000, NULL, false))
 		drm_err(&stream->oa->xe->drm,
@@ -2534,6 +2537,8 @@ static void __xe_oa_init_oa_units(struct xe_gt *gt)
 			u->type = DRM_XE_OA_UNIT_TYPE_OAM;
 		}
 
+		xe_mmio_write32(&gt->mmio, u->regs.oa_ctrl, 0);
+
 		/* Ensure MMIO trigger remains disabled till there is a stream */
 		xe_mmio_write32(&gt->mmio, u->regs.oa_debug,
 				oag_configure_mmio_trigger(NULL, false));
@@ -1300,12 +1300,14 @@ new_device_store(struct device *dev, struct device_attribute *attr,
 		info.flags |= I2C_CLIENT_SLAVE;
 	}
 
-	info.flags |= I2C_CLIENT_USER;
-
 	client = i2c_new_client_device(adap, &info);
 	if (IS_ERR(client))
 		return PTR_ERR(client);
 
+	/* Keep track of the added device */
+	mutex_lock(&adap->userspace_clients_lock);
+	list_add_tail(&client->detected, &adap->userspace_clients);
+	mutex_unlock(&adap->userspace_clients_lock);
 	dev_info(dev, "%s: Instantiated device %s at 0x%02hx\n", "new_device",
 		 info.type, info.addr);
 
@@ -1313,15 +1315,6 @@ new_device_store(struct device *dev, struct device_attribute *attr,
 }
 static DEVICE_ATTR_WO(new_device);
 
-static int __i2c_find_user_addr(struct device *dev, const void *addrp)
-{
-	struct i2c_client *client = i2c_verify_client(dev);
-	unsigned short addr = *(unsigned short *)addrp;
-
-	return client && client->flags & I2C_CLIENT_USER &&
-	       i2c_encode_flags_to_addr(client) == addr;
-}
-
 /*
  * And of course let the users delete the devices they instantiated, if
  * they got it wrong. This interface can only be used to delete devices
@@ -1336,7 +1329,7 @@ delete_device_store(struct device *dev, struct device_attribute *attr,
 		    const char *buf, size_t count)
 {
 	struct i2c_adapter *adap = to_i2c_adapter(dev);
-	struct device *child_dev;
+	struct i2c_client *client, *next;
 	unsigned short addr;
 	char end;
 	int res;
@@ -1352,19 +1345,28 @@ delete_device_store(struct device *dev, struct device_attribute *attr,
 		return -EINVAL;
 	}
 
-	mutex_lock(&core_lock);
 	/* Make sure the device was added through sysfs */
-	child_dev = device_find_child(&adap->dev, &addr, __i2c_find_user_addr);
-	if (child_dev) {
-		i2c_unregister_device(i2c_verify_client(child_dev));
-		put_device(child_dev);
-	} else {
-		dev_err(dev, "Can't find userspace-created device at %#x\n", addr);
-		count = -ENOENT;
-	}
-	mutex_unlock(&core_lock);
+	res = -ENOENT;
+	mutex_lock_nested(&adap->userspace_clients_lock,
+			  i2c_adapter_depth(adap));
+	list_for_each_entry_safe(client, next, &adap->userspace_clients,
+				 detected) {
+		if (i2c_encode_flags_to_addr(client) == addr) {
+			dev_info(dev, "%s: Deleting device %s at 0x%02hx\n",
+				 "delete_device", client->name, client->addr);
 
-	return count;
+			list_del(&client->detected);
+			i2c_unregister_device(client);
+			res = count;
+			break;
+		}
+	}
+	mutex_unlock(&adap->userspace_clients_lock);
+
+	if (res < 0)
+		dev_err(dev, "%s: Can't find device in list\n",
+			"delete_device");
+	return res;
 }
 static DEVICE_ATTR_IGNORE_LOCKDEP(delete_device, S_IWUSR, NULL,
 				  delete_device_store);
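
The loop above deletes an element while iterating, which is why it uses the _safe list iterator: the next pointer is captured before the current node is unlinked. A standalone sketch of the same idea on a plain singly linked list (illustrative code, not the kernel list API):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int addr;
    struct node *next;
};

/* Delete the first node matching addr. The pointer-to-pointer keeps the
 * traversal valid while we unlink, for the same reason the kernel loop
 * uses list_for_each_entry_safe(). Returns 0 on success, -1 if absent. */
static int delete_addr(struct node **pp, int addr)
{
    for (; *pp; pp = &(*pp)->next) {
        if ((*pp)->addr == addr) {
            struct node *victim = *pp;

            *pp = victim->next; /* unlink before freeing */
            free(victim);
            return 0;
        }
    }
    return -1;
}
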
@@ -1535,6 +1537,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
 	adap->locked_flags = 0;
 	rt_mutex_init(&adap->bus_lock);
 	rt_mutex_init(&adap->mux_lock);
+	mutex_init(&adap->userspace_clients_lock);
+	INIT_LIST_HEAD(&adap->userspace_clients);
 
 	/* Set default timeout to 1 second if not already set */
 	if (adap->timeout == 0)
@@ -1700,6 +1704,23 @@ int i2c_add_numbered_adapter(struct i2c_adapter *adap)
 }
 EXPORT_SYMBOL_GPL(i2c_add_numbered_adapter);
 
+static void i2c_do_del_adapter(struct i2c_driver *driver,
+			       struct i2c_adapter *adapter)
+{
+	struct i2c_client *client, *_n;
+
+	/* Remove the devices we created ourselves as the result of hardware
+	 * probing (using a driver's detect method) */
+	list_for_each_entry_safe(client, _n, &driver->clients, detected) {
+		if (client->adapter == adapter) {
+			dev_dbg(&adapter->dev, "Removing %s at 0x%x\n",
+				client->name, client->addr);
+			list_del(&client->detected);
+			i2c_unregister_device(client);
+		}
+	}
+}
+
 static int __unregister_client(struct device *dev, void *dummy)
 {
 	struct i2c_client *client = i2c_verify_client(dev);
@@ -1715,6 +1736,12 @@ static int __unregister_dummy(struct device *dev, void *dummy)
 	return 0;
 }
 
+static int __process_removed_adapter(struct device_driver *d, void *data)
+{
+	i2c_do_del_adapter(to_i2c_driver(d), data);
+	return 0;
+}
+
 /**
  * i2c_del_adapter - unregister I2C adapter
 * @adap: the adapter being unregistered
@@ -1726,6 +1753,7 @@ static int __unregister_dummy(struct device *dev, void *dummy)
 void i2c_del_adapter(struct i2c_adapter *adap)
 {
 	struct i2c_adapter *found;
+	struct i2c_client *client, *next;
 
 	/* First make sure that this adapter was ever added */
 	mutex_lock(&core_lock);
@@ -1737,16 +1765,31 @@ void i2c_del_adapter(struct i2c_adapter *adap)
 	}
 
 	i2c_acpi_remove_space_handler(adap);
+	/* Tell drivers about this removal */
+	mutex_lock(&core_lock);
+	bus_for_each_drv(&i2c_bus_type, NULL, adap,
+			 __process_removed_adapter);
+	mutex_unlock(&core_lock);
+
+	/* Remove devices instantiated from sysfs */
+	mutex_lock_nested(&adap->userspace_clients_lock,
+			  i2c_adapter_depth(adap));
+	list_for_each_entry_safe(client, next, &adap->userspace_clients,
+				 detected) {
+		dev_dbg(&adap->dev, "Removing %s at 0x%x\n", client->name,
+			client->addr);
+		list_del(&client->detected);
+		i2c_unregister_device(client);
+	}
+	mutex_unlock(&adap->userspace_clients_lock);
 
 	/* Detach any active clients. This can't fail, thus we do not
 	 * check the returned value. This is a two-pass process, because
 	 * we can't remove the dummy devices during the first pass: they
 	 * could have been instantiated by real devices wishing to clean
 	 * them up properly, so we give them a chance to do that first. */
-	mutex_lock(&core_lock);
 	device_for_each_child(&adap->dev, NULL, __unregister_client);
 	device_for_each_child(&adap->dev, NULL, __unregister_dummy);
-	mutex_unlock(&core_lock);
 
 	/* device name is gone after device_unregister */
 	dev_dbg(&adap->dev, "adapter [%s] unregistered\n", adap->name);
@@ -1966,6 +2009,7 @@ int i2c_register_driver(struct module *owner, struct i2c_driver *driver)
 	/* add the driver to the list of i2c drivers in the driver core */
 	driver->driver.owner = owner;
 	driver->driver.bus = &i2c_bus_type;
+	INIT_LIST_HEAD(&driver->clients);
 
 	/* When registration returns, the driver core
 	 * will have called probe() for all matching-but-unbound devices.
@@ -1983,13 +2027,10 @@ int i2c_register_driver(struct module *owner, struct i2c_driver *driver)
 }
 EXPORT_SYMBOL(i2c_register_driver);
 
-static int __i2c_unregister_detected_client(struct device *dev, void *argp)
+static int __process_removed_driver(struct device *dev, void *data)
 {
-	struct i2c_client *client = i2c_verify_client(dev);
-
-	if (client && client->flags & I2C_CLIENT_AUTO)
-		i2c_unregister_device(client);
-
+	if (dev->type == &i2c_adapter_type)
+		i2c_do_del_adapter(data, to_i2c_adapter(dev));
 	return 0;
 }
 
@@ -2000,12 +2041,7 @@ static int __i2c_unregister_detected_client(struct device *dev, void *argp)
  */
 void i2c_del_driver(struct i2c_driver *driver)
 {
-	mutex_lock(&core_lock);
-	/* Satisfy __must_check, function can't fail */
-	if (driver_for_each_device(&driver->driver, NULL, NULL,
-				   __i2c_unregister_detected_client)) {
-	}
-	mutex_unlock(&core_lock);
+	i2c_for_each_dev(driver, __process_removed_driver);
 
 	driver_unregister(&driver->driver);
 	pr_debug("driver [%s] unregistered\n", driver->driver.name);
@ -2432,7 +2468,6 @@ static int i2c_detect_address(struct i2c_client *temp_client,
|
||||||
/* Finally call the custom detection function */
|
/* Finally call the custom detection function */
|
||||||
memset(&info, 0, sizeof(struct i2c_board_info));
|
memset(&info, 0, sizeof(struct i2c_board_info));
|
||||||
info.addr = addr;
|
info.addr = addr;
|
||||||
info.flags = I2C_CLIENT_AUTO;
|
|
||||||
err = driver->detect(temp_client, &info);
|
err = driver->detect(temp_client, &info);
|
||||||
if (err) {
|
if (err) {
|
||||||
/* -ENODEV is returned if the detection fails. We catch it
|
/* -ENODEV is returned if the detection fails. We catch it
|
||||||
|
@ -2459,7 +2494,9 @@ static int i2c_detect_address(struct i2c_client *temp_client,
|
||||||
dev_dbg(&adapter->dev, "Creating %s at 0x%02x\n",
|
dev_dbg(&adapter->dev, "Creating %s at 0x%02x\n",
|
||||||
info.type, info.addr);
|
info.type, info.addr);
|
||||||
client = i2c_new_client_device(adapter, &info);
|
client = i2c_new_client_device(adapter, &info);
|
||||||
if (IS_ERR(client))
|
if (!IS_ERR(client))
|
||||||
|
list_add_tail(&client->detected, &driver->clients);
|
||||||
|
else
|
||||||
dev_err(&adapter->dev, "Failed creating %s at 0x%02x\n",
|
dev_err(&adapter->dev, "Failed creating %s at 0x%02x\n",
|
||||||
info.type, info.addr);
|
info.type, info.addr);
|
||||||
}
|
}
|
||||||
|
|
|
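Illustrative aside (not part of the diff): the "two-pass process" the comment above describes is a general teardown pattern — real clients are unregistered first so they can clean up any dummy clients they created, and leftover dummies are reaped afterwards. A minimal userspace sketch of the idea; all names here are hypothetical stand-ins, not the i2c core API:

#include <stdbool.h>
#include <stdio.h>

struct fake_client {
	const char *name;
	bool is_dummy;	/* created by another client, e.g. a mux channel */
	bool gone;
};

/* One pass over the table, unregistering only the requested kind. */
static void unregister_kind(struct fake_client *c, int n, bool dummies)
{
	for (int i = 0; i < n; i++) {
		if (!c[i].gone && c[i].is_dummy == dummies) {
			printf("unregister %s\n", c[i].name);
			c[i].gone = true;
		}
	}
}

int main(void)
{
	struct fake_client clients[] = {
		{ "sensor", false, false },
		{ "sensor-page1", true, false },	/* dummy owned by "sensor" */
	};

	unregister_kind(clients, 2, false);	/* pass 1: real clients first */
	unregister_kind(clients, 2, true);	/* pass 2: reap remaining dummies */
	return 0;
}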
@@ -169,6 +169,7 @@ config IXP4XX_IRQ
 
 config LAN966X_OIC
 	tristate "Microchip LAN966x OIC Support"
+	depends on MCHP_LAN966X_PCI || COMPILE_TEST
 	select GENERIC_IRQ_CHIP
 	select IRQ_DOMAIN
 	help
@@ -577,7 +577,8 @@ static void __exception_irq_entry aic_handle_fiq(struct pt_regs *regs)
 			  AIC_FIQ_HWIRQ(AIC_TMR_EL02_VIRT));
 	}
 
-	if (read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & PMCR0_IACT) {
+	if ((read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & (PMCR0_IMODE | PMCR0_IACT)) ==
+			(FIELD_PREP(PMCR0_IMODE, PMCR0_IMODE_FIQ) | PMCR0_IACT)) {
 		int irq;
 		if (cpumask_test_cpu(smp_processor_id(),
 				     &aic_irqc->fiq_aff[AIC_CPU_PMU_P]->aff))
@@ -68,7 +68,8 @@ static int mvebu_icu_translate(struct irq_domain *d, struct irq_fwspec *fwspec,
 				unsigned long *hwirq, unsigned int *type)
 {
 	unsigned int param_count = static_branch_unlikely(&legacy_bindings) ? 3 : 2;
-	struct mvebu_icu_msi_data *msi_data = d->host_data;
+	struct msi_domain_info *info = d->host_data;
+	struct mvebu_icu_msi_data *msi_data = info->chip_data;
 	struct mvebu_icu *icu = msi_data->icu;
 
 	/* Check the count of the parameters in dt */
@@ -98,7 +98,7 @@ static void partition_irq_print_chip(struct irq_data *d, struct seq_file *p)
 	struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
 	struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
 
-	seq_printf(p, " %5s-%lu", chip->name, data->hwirq);
+	seq_printf(p, "%5s-%lu", chip->name, data->hwirq);
 }
 
 static struct irq_chip partition_irq_chip = {
@@ -27,7 +27,7 @@ static void imsic_ipi_send(unsigned int cpu)
 {
 	struct imsic_local_config *local = per_cpu_ptr(imsic->global.local, cpu);
 
-	writel_relaxed(IMSIC_IPI_ID, local->msi_va);
+	writel(IMSIC_IPI_ID, local->msi_va);
 }
 
 static void imsic_ipi_starting_cpu(void)
@@ -31,7 +31,7 @@ static DEFINE_PER_CPU(void __iomem *, sswi_cpu_regs);
 
 static void thead_aclint_sswi_ipi_send(unsigned int cpu)
 {
-	writel_relaxed(0x1, per_cpu(sswi_cpu_regs, cpu));
+	writel(0x1, per_cpu(sswi_cpu_regs, cpu));
 }
 
 static void thead_aclint_sswi_ipi_clear(void)
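Illustrative aside (not part of the diff): both IPI hunks above make the same correctness change. writel_relaxed() carries no ordering guarantee against earlier normal-memory stores, while writel() issues a write barrier before the MMIO store, so data prepared for the target CPU is visible before the interrupt fires there. A hedged sketch of the difference — the barrier is mapped to a GCC builtin here and is not the kernel's actual implementation:

#include <stdint.h>

/* Hypothetical stand-in for the kernel's write barrier before MMIO. */
#define io_wmb()	__sync_synchronize()

static inline void ipi_send_relaxed(volatile uint32_t *doorbell, uint32_t id)
{
	*doorbell = id;		/* may be reordered ahead of prior stores */
}

static inline void ipi_send_ordered(volatile uint32_t *doorbell, uint32_t id)
{
	io_wmb();		/* order earlier stores first ...          */
	*doorbell = id;		/* ... then ring the doorbell (raise IPI)  */
}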
@@ -76,10 +76,8 @@ static int linear_set_limits(struct mddev *mddev)
 	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
 	lim.io_min = mddev->chunk_sectors << 9;
 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
-	if (err) {
-		queue_limits_cancel_update(mddev->gendisk->queue);
+	if (err)
 		return err;
-	}
 
 	return queue_limits_set(mddev->gendisk->queue, &lim);
 }
@@ -1441,7 +1441,9 @@ void aq_nic_deinit(struct aq_nic_s *self, bool link_down)
 	aq_ptp_ring_free(self);
 	aq_ptp_free(self);
 
-	if (likely(self->aq_fw_ops->deinit) && link_down) {
+	/* May be invoked during hot unplug. */
+	if (pci_device_is_present(self->pdev) &&
+	    likely(self->aq_fw_ops->deinit) && link_down) {
 		mutex_lock(&self->fwreq_mutex);
 		self->aq_fw_ops->deinit(self->aq_hw);
 		mutex_unlock(&self->fwreq_mutex);
@@ -41,9 +41,12 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 {
 	struct bcmgenet_priv *priv = netdev_priv(dev);
 	struct device *kdev = &priv->pdev->dev;
+	u32 phy_wolopts = 0;
 
-	if (dev->phydev)
+	if (dev->phydev) {
 		phy_ethtool_get_wol(dev->phydev, wol);
+		phy_wolopts = wol->wolopts;
+	}
 
 	/* MAC is not wake-up capable, return what the PHY does */
 	if (!device_can_wakeup(kdev))

@@ -51,9 +54,14 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 
 	/* Overlay MAC capabilities with that of the PHY queried before */
 	wol->supported |= WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
-	wol->wolopts = priv->wolopts;
-	memset(wol->sopass, 0, sizeof(wol->sopass));
+	wol->wolopts |= priv->wolopts;
 
+	/* Return the PHY configured magic password */
+	if (phy_wolopts & WAKE_MAGICSECURE)
+		return;
+
+	/* Otherwise the MAC one */
+	memset(wol->sopass, 0, sizeof(wol->sopass));
 	if (wol->wolopts & WAKE_MAGICSECURE)
 		memcpy(wol->sopass, priv->sopass, sizeof(priv->sopass));
 }

@@ -70,7 +78,7 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 	/* Try Wake-on-LAN from the PHY first */
 	if (dev->phydev) {
 		ret = phy_ethtool_set_wol(dev->phydev, wol);
-		if (ret != -EOPNOTSUPP)
+		if (ret != -EOPNOTSUPP && wol->wolopts)
 			return ret;
 	}
 
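Illustrative aside (not part of the diff): the get_wol change boils down to a precedence rule — merge the MAC's Wake-on-LAN options into whatever the PHY reported, and keep the PHY's SecureOn password when the PHY itself claims WAKE_MAGICSECURE. A simplified sketch of that rule; the struct type is a hypothetical stand-in, though the WAKE_MAGICSECURE value matches linux/ethtool.h:

#include <stdint.h>
#include <string.h>

#define WAKE_MAGICSECURE	0x40	/* value from linux/ethtool.h */

struct wol_state {
	uint32_t wolopts;
	uint8_t sopass[6];
};

/* phy holds what the PHY reported; mac holds the MAC's configuration. */
static void merge_wol(struct wol_state *out, const struct wol_state *phy,
		      const struct wol_state *mac)
{
	*out = *phy;
	out->wolopts |= mac->wolopts;	/* merge, don't clobber */

	/* A PHY-configured password wins; otherwise report the MAC's. */
	if (!(phy->wolopts & WAKE_MAGICSECURE))
		memcpy(out->sopass, mac->sopass, sizeof(out->sopass));
}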
@@ -55,6 +55,7 @@
 #include <linux/hwmon.h>
 #include <linux/hwmon-sysfs.h>
 #include <linux/crc32poly.h>
+#include <linux/dmi.h>
 
 #include <net/checksum.h>
 #include <net/gso.h>

@@ -18212,6 +18213,50 @@ unlock:
 
 static SIMPLE_DEV_PM_OPS(tg3_pm_ops, tg3_suspend, tg3_resume);
 
+/* Systems where ACPI _PTS (Prepare To Sleep) S5 will result in a fatal
+ * PCIe AER event on the tg3 device if the tg3 device is not, or cannot
+ * be, powered down.
+ */
+static const struct dmi_system_id tg3_restart_aer_quirk_table[] = {
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R440"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R540"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R640"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R650"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R740"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R750"),
+		},
+	},
+	{}
+};
+
 static void tg3_shutdown(struct pci_dev *pdev)
 {
 	struct net_device *dev = pci_get_drvdata(pdev);

@@ -18228,6 +18273,19 @@ static void tg3_shutdown(struct pci_dev *pdev)
 
 	if (system_state == SYSTEM_POWER_OFF)
 		tg3_power_down(tp);
+	else if (system_state == SYSTEM_RESTART &&
+		 dmi_first_match(tg3_restart_aer_quirk_table) &&
+		 pdev->current_state != PCI_D3cold &&
+		 pdev->current_state != PCI_UNKNOWN) {
+		/* Disable PCIe AER on the tg3 to avoid a fatal
+		 * error during this system restart.
+		 */
+		pcie_capability_clear_word(pdev, PCI_EXP_DEVCTL,
+					   PCI_EXP_DEVCTL_CERE |
+					   PCI_EXP_DEVCTL_NFERE |
+					   PCI_EXP_DEVCTL_FERE |
+					   PCI_EXP_DEVCTL_URRE);
+	}
 
 	rtnl_unlock();
 
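Illustrative aside (not part of the diff): the quirk silences AER at the device level by clearing the four error-reporting enable bits (correctable, non-fatal, fatal, unsupported-request) in the PCIe Device Control register, which is exactly what the pcie_capability_clear_word() call above does. A minimal sketch of the bit arithmetic; the register is faked as a plain variable and its starting value is hypothetical, but the bit definitions match include/uapi/linux/pci_regs.h:

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_DEVCTL_CERE	0x0001	/* correctable error reporting */
#define PCI_EXP_DEVCTL_NFERE	0x0002	/* non-fatal error reporting */
#define PCI_EXP_DEVCTL_FERE	0x0004	/* fatal error reporting */
#define PCI_EXP_DEVCTL_URRE	0x0008	/* unsupported request reporting */

int main(void)
{
	uint16_t devctl = 0x2810 | 0x000f;	/* hypothetical current value */
	uint16_t mask = PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE |
			PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE;

	devctl &= ~mask;	/* effect of pcie_capability_clear_word() */
	printf("devctl after clear: 0x%04x\n", devctl);
	return 0;
}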
@@ -981,6 +981,9 @@ static int ice_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv
 
 	/* preallocate memory for ice_sched_node */
 	node = devm_kzalloc(ice_hw_to_dev(pi->hw), sizeof(*node), GFP_KERNEL);
+	if (!node)
+		return -ENOMEM;
+
 	*priv = node;
 
 	return 0;
@@ -527,15 +527,14 @@ err:
  * @xdp: xdp_buff used as input to the XDP program
  * @xdp_prog: XDP program to run
  * @xdp_ring: ring to be used for XDP_TX action
- * @rx_buf: Rx buffer to store the XDP action
  * @eop_desc: Last descriptor in packet to read metadata from
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */
-static void
+static u32
 ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	    struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
-	    struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *eop_desc)
+	    union ice_32b_rx_flex_desc *eop_desc)
 {
 	unsigned int ret = ICE_XDP_PASS;
 	u32 act;

@@ -574,7 +573,7 @@ out_failure:
 		ret = ICE_XDP_CONSUMED;
 	}
 exit:
-	ice_set_rx_bufs_act(xdp, rx_ring, ret);
+	return ret;
 }
 
 /**

@@ -860,10 +859,8 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		xdp_buff_set_frags_flag(xdp);
 	}
 
-	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
-		ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
+	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS))
 		return -ENOMEM;
-	}
 
 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
 				   rx_buf->page_offset, size);

@@ -924,7 +921,6 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 	struct ice_rx_buf *rx_buf;
 
 	rx_buf = &rx_ring->rx_buf[ntc];
-	rx_buf->pgcnt = page_count(rx_buf->page);
 	prefetchw(rx_buf->page);
 
 	if (!size)

@@ -940,6 +936,31 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 	return rx_buf;
 }
 
+/**
+ * ice_get_pgcnts - grab page_count() for gathered fragments
+ * @rx_ring: Rx descriptor ring to store the page counts on
+ *
+ * This function is intended to be called right before running XDP
+ * program so that the page recycling mechanism will be able to take
+ * a correct decision regarding underlying pages; this is done in such
+ * way as XDP program can change the refcount of page
+ */
+static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
+{
+	u32 nr_frags = rx_ring->nr_frags + 1;
+	u32 idx = rx_ring->first_desc;
+	struct ice_rx_buf *rx_buf;
+	u32 cnt = rx_ring->count;
+
+	for (int i = 0; i < nr_frags; i++) {
+		rx_buf = &rx_ring->rx_buf[idx];
+		rx_buf->pgcnt = page_count(rx_buf->page);
+
+		if (++idx == cnt)
+			idx = 0;
+	}
+}
+
 /**
  * ice_build_skb - Build skb around an existing buffer
  * @rx_ring: Rx descriptor ring to transact packets on
@@ -1051,12 +1072,12 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
 				rx_buf->page_offset + headlen, size,
 				xdp->frame_sz);
 	} else {
-		/* buffer is unused, change the act that should be taken later
-		 * on; data was copied onto skb's linear part so there's no
+		/* buffer is unused, restore biased page count in Rx buffer;
+		 * data was copied onto skb's linear part so there's no
 		 * need for adjusting page offset and we can reuse this buffer
 		 * as-is
 		 */
-		rx_buf->act = ICE_SKB_CONSUMED;
+		rx_buf->pagecnt_bias++;
 	}
 
 	if (unlikely(xdp_buff_has_frags(xdp))) {

@@ -1103,6 +1124,65 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
 	rx_buf->page = NULL;
 }
 
+/**
+ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
+ * @rx_ring: Rx ring with all the auxiliary data
+ * @xdp: XDP buffer carrying linear + frags part
+ * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
+ * @ntc: a current next_to_clean value to be stored at rx_ring
+ * @verdict: return code from XDP program execution
+ *
+ * Walk through gathered fragments and satisfy internal page
+ * recycle mechanism; we take here an action related to verdict
+ * returned by XDP program;
+ */
+static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+			    u32 *xdp_xmit, u32 ntc, u32 verdict)
+{
+	u32 nr_frags = rx_ring->nr_frags + 1;
+	u32 idx = rx_ring->first_desc;
+	u32 cnt = rx_ring->count;
+	u32 post_xdp_frags = 1;
+	struct ice_rx_buf *buf;
+	int i;
+
+	if (unlikely(xdp_buff_has_frags(xdp)))
+		post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
+
+	for (i = 0; i < post_xdp_frags; i++) {
+		buf = &rx_ring->rx_buf[idx];
+
+		if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+			*xdp_xmit |= verdict;
+		} else if (verdict & ICE_XDP_CONSUMED) {
+			buf->pagecnt_bias++;
+		} else if (verdict == ICE_XDP_PASS) {
+			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+		}
+
+		ice_put_rx_buf(rx_ring, buf);
+
+		if (++idx == cnt)
+			idx = 0;
+	}
+	/* handle buffers that represented frags released by XDP prog;
+	 * for these we keep pagecnt_bias as-is; refcount from struct page
+	 * has been decremented within XDP prog and we do not have to increase
+	 * the biased refcnt
+	 */
+	for (; i < nr_frags; i++) {
+		buf = &rx_ring->rx_buf[idx];
+		ice_put_rx_buf(rx_ring, buf);
+		if (++idx == cnt)
+			idx = 0;
+	}
+
+	xdp->data = NULL;
+	rx_ring->first_desc = ntc;
+	rx_ring->nr_frags = 0;
+}
+
 /**
  * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
  * @rx_ring: Rx descriptor ring to transact packets on
@@ -1120,15 +1200,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
 	unsigned int offset = rx_ring->rx_offset;
 	struct xdp_buff *xdp = &rx_ring->xdp;
-	u32 cached_ntc = rx_ring->first_desc;
 	struct ice_tx_ring *xdp_ring = NULL;
 	struct bpf_prog *xdp_prog = NULL;
 	u32 ntc = rx_ring->next_to_clean;
+	u32 cached_ntu, xdp_verdict;
 	u32 cnt = rx_ring->count;
 	u32 xdp_xmit = 0;
-	u32 cached_ntu;
 	bool failure;
-	u32 first;
 
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
 	if (xdp_prog) {

@@ -1190,6 +1268,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
 			xdp_buff_clear_frags_flag(xdp);
 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
+			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
 			break;
 		}
 		if (++ntc == cnt)

@@ -1199,15 +1278,15 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
-		ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);
-		if (rx_buf->act == ICE_XDP_PASS)
+		ice_get_pgcnts(rx_ring);
+		xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
+		if (xdp_verdict == ICE_XDP_PASS)
 			goto construct_skb;
 		total_rx_bytes += xdp_get_buff_len(xdp);
 		total_rx_pkts++;
 
-		xdp->data = NULL;
-		rx_ring->first_desc = ntc;
-		rx_ring->nr_frags = 0;
+		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
 		continue;
 construct_skb:
 		if (likely(ice_ring_uses_build_skb(rx_ring)))

@@ -1217,18 +1296,12 @@ construct_skb:
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			rx_ring->ring_stats->rx_stats.alloc_page_failed++;
-			rx_buf->act = ICE_XDP_CONSUMED;
-			if (unlikely(xdp_buff_has_frags(xdp)))
-				ice_set_rx_bufs_act(xdp, rx_ring,
-						    ICE_XDP_CONSUMED);
-			xdp->data = NULL;
-			rx_ring->first_desc = ntc;
-			rx_ring->nr_frags = 0;
-			break;
+			xdp_verdict = ICE_XDP_CONSUMED;
 		}
-		xdp->data = NULL;
-		rx_ring->first_desc = ntc;
-		rx_ring->nr_frags = 0;
+		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+		if (!skb)
+			break;
 
 		stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
 		if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,

@@ -1257,23 +1330,6 @@ construct_skb:
 		total_rx_pkts++;
 	}
 
-	first = rx_ring->first_desc;
-	while (cached_ntc != first) {
-		struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc];
-
-		if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) {
-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-			xdp_xmit |= buf->act;
-		} else if (buf->act & ICE_XDP_CONSUMED) {
-			buf->pagecnt_bias++;
-		} else if (buf->act == ICE_XDP_PASS) {
-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-		}
-
-		ice_put_rx_buf(rx_ring, buf);
-		if (++cached_ntc >= cnt)
-			cached_ntc = 0;
-	}
 	rx_ring->next_to_clean = ntc;
 	/* return up to cleaned_count buffers to hardware */
 	failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring));
@@ -201,7 +201,6 @@ struct ice_rx_buf {
 	struct page *page;
 	unsigned int page_offset;
 	unsigned int pgcnt;
-	unsigned int act;
 	unsigned int pagecnt_bias;
 };
 

@@ -5,49 +5,6 @@
 #define _ICE_TXRX_LIB_H_
 #include "ice.h"
 
-/**
- * ice_set_rx_bufs_act - propagate Rx buffer action to frags
- * @xdp: XDP buffer representing frame (linear and frags part)
- * @rx_ring: Rx ring struct
- * act: action to store onto Rx buffers related to XDP buffer parts
- *
- * Set action that should be taken before putting Rx buffer from first frag
- * to the last.
- */
-static inline void
-ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
-		    const unsigned int act)
-{
-	u32 sinfo_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
-	u32 nr_frags = rx_ring->nr_frags + 1;
-	u32 idx = rx_ring->first_desc;
-	u32 cnt = rx_ring->count;
-	struct ice_rx_buf *buf;
-
-	for (int i = 0; i < nr_frags; i++) {
-		buf = &rx_ring->rx_buf[idx];
-		buf->act = act;
-
-		if (++idx == cnt)
-			idx = 0;
-	}
-
-	/* adjust pagecnt_bias on frags freed by XDP prog */
-	if (sinfo_frags < rx_ring->nr_frags && act == ICE_XDP_CONSUMED) {
-		u32 delta = rx_ring->nr_frags - sinfo_frags;
-
-		while (delta) {
-			if (idx == 0)
-				idx = cnt - 1;
-			else
-				idx--;
-			buf = &rx_ring->rx_buf[idx];
-			buf->pagecnt_bias--;
-			delta--;
-		}
-	}
-}
-
 /**
  * ice_test_staterr - tests bits in Rx descriptor status and error fields
  * @status_err_n: Rx descriptor status_error0 or status_error1 bits
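Illustrative aside (not part of the diff): the ice hunks above replace scattered per-buffer "act" bookkeeping with a single XDP verdict passed around explicitly, on top of the driver's biased page-refcount recycling. The core recycle test reduces to comparing a page_count() snapshot against the driver-owned bias — which is exactly why ice_get_pgcnts() snapshots the counts before the XDP program can drop references. A userspace sketch of that test only; these are hypothetical stand-ins, not the ice helpers:

#include <stdbool.h>
#include <stdint.h>

struct rx_buf_sketch {
	uint32_t page_refcount;	/* snapshot of page_count(), cf. pgcnt */
	uint32_t pagecnt_bias;	/* references the driver still owns */
};

/* The page can be reused only while nobody outside the driver holds it:
 * any external reference shows up as refcount not covered by the bias. */
static bool can_recycle(const struct rx_buf_sketch *b)
{
	return b->page_refcount - b->pagecnt_bias == 1;
}

/* An ICE_XDP_CONSUMED verdict hands one reference back to the driver. */
static void consume(struct rx_buf_sketch *b)
{
	b->pagecnt_bias++;
}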
@@ -2424,6 +2424,11 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv)
 	u32 chan = 0;
 	u8 qmode = 0;
 
+	if (rxfifosz == 0)
+		rxfifosz = priv->dma_cap.rx_fifo_size;
+	if (txfifosz == 0)
+		txfifosz = priv->dma_cap.tx_fifo_size;
+
 	/* Split up the shared Tx/Rx FIFO memory on DW QoS Eth and DW XGMAC */
 	if (priv->plat->has_gmac4 || priv->plat->has_xgmac) {
 		rxfifosz /= rx_channels_count;

@@ -2892,6 +2897,11 @@ static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,
 	int rxfifosz = priv->plat->rx_fifo_size;
 	int txfifosz = priv->plat->tx_fifo_size;
 
+	if (rxfifosz == 0)
+		rxfifosz = priv->dma_cap.rx_fifo_size;
+	if (txfifosz == 0)
+		txfifosz = priv->dma_cap.tx_fifo_size;
+
 	/* Adjust for real per queue fifo size */
 	rxfifosz /= rx_channels_count;
 	txfifosz /= tx_channels_count;

@@ -5868,6 +5878,9 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
 	const int mtu = new_mtu;
 	int ret;
 
+	if (txfifosz == 0)
+		txfifosz = priv->dma_cap.tx_fifo_size;
+
 	txfifosz /= priv->plat->tx_queues_to_use;
 
 	if (stmmac_xdp_is_enabled(priv) && new_mtu > ETH_DATA_LEN) {

@@ -7219,29 +7232,15 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
 			priv->plat->tx_queues_to_use = priv->dma_cap.number_tx_queues;
 	}
 
-	if (!priv->plat->rx_fifo_size) {
-		if (priv->dma_cap.rx_fifo_size) {
-			priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
-		} else {
-			dev_err(priv->device, "Can't specify Rx FIFO size\n");
-			return -ENODEV;
-		}
-	} else if (priv->dma_cap.rx_fifo_size &&
-		   priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
+	if (priv->dma_cap.rx_fifo_size &&
+	    priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
 		dev_warn(priv->device,
 			 "Rx FIFO size (%u) exceeds dma capability\n",
 			 priv->plat->rx_fifo_size);
 		priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
 	}
-	if (!priv->plat->tx_fifo_size) {
-		if (priv->dma_cap.tx_fifo_size) {
-			priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size;
-		} else {
-			dev_err(priv->device, "Can't specify Tx FIFO size\n");
-			return -ENODEV;
-		}
-	} else if (priv->dma_cap.tx_fifo_size &&
-		   priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
+	if (priv->dma_cap.tx_fifo_size &&
+	    priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
 		dev_warn(priv->device,
 			 "Tx FIFO size (%u) exceeds dma capability\n",
 			 priv->plat->tx_fifo_size);
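Illustrative aside (not part of the diff): taken together, the stmmac hunks implement one rule — a platform FIFO size of 0 means "use the size read from the DMA capability register", applied at each point of use, with oversized platform values clamped (and warned about) at init. A minimal sketch of that rule under hypothetical names:

#include <stdio.h>

static unsigned int effective_fifo_size(unsigned int plat, unsigned int cap)
{
	if (plat == 0)
		return cap;	/* fall back to the hardware capability */
	if (cap && plat > cap)
		return cap;	/* clamp; the driver also warns here */
	return plat;
}

int main(void)
{
	printf("%u\n", effective_fifo_size(0, 16384));	   /* -> 16384 */
	printf("%u\n", effective_fifo_size(65536, 16384)); /* -> 16384 */
	printf("%u\n", effective_fifo_size(8192, 16384));  /* -> 8192  */
	return 0;
}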
@@ -574,18 +574,14 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
 	return ret;
 }
 
-static inline bool tun_capable(struct tun_struct *tun)
+static inline bool tun_not_capable(struct tun_struct *tun)
 {
 	const struct cred *cred = current_cred();
 	struct net *net = dev_net(tun->dev);
 
-	if (ns_capable(net->user_ns, CAP_NET_ADMIN))
-		return 1;
-	if (uid_valid(tun->owner) && uid_eq(cred->euid, tun->owner))
-		return 1;
-	if (gid_valid(tun->group) && in_egroup_p(tun->group))
-		return 1;
-	return 0;
+	return ((uid_valid(tun->owner) && !uid_eq(cred->euid, tun->owner)) ||
+		(gid_valid(tun->group) && !in_egroup_p(tun->group))) &&
+		!ns_capable(net->user_ns, CAP_NET_ADMIN);
 }
 
 static void tun_set_real_num_queues(struct tun_struct *tun)

@@ -2782,7 +2778,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 					    !!(tun->flags & IFF_MULTI_QUEUE))
 			return -EINVAL;
 
-		if (!tun_capable(tun))
+		if (tun_not_capable(tun))
 			return -EPERM;
 		err = security_tun_dev_open(tun->security);
 		if (err < 0)
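Illustrative aside (not part of the diff): the restored tun_not_capable() denies access only when an owner or group restriction exists and fails AND the caller lacks CAP_NET_ADMIN in the device's namespace. A userspace sketch of the same predicate with the kernel's cred helpers replaced by hypothetical booleans:

#include <stdbool.h>
#include <stdio.h>

struct tun_perm {
	bool owner_set, owner_matches;
	bool group_set, group_matches;
	bool has_net_admin;	/* ns_capable(net->user_ns, CAP_NET_ADMIN) */
};

static bool tun_not_capable_sketch(const struct tun_perm *p)
{
	return ((p->owner_set && !p->owner_matches) ||
		(p->group_set && !p->group_matches)) &&
		!p->has_net_admin;
}

int main(void)
{
	struct tun_perm p = { true, false, false, false, false };

	printf("denied: %d\n", tun_not_capable_sketch(&p));	/* 1 */
	p.has_net_admin = true;
	printf("denied: %d\n", tun_not_capable_sketch(&p));	/* 0 */
	return 0;
}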
@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
 	if (likely(cpu < tq_number))
 		tq = &adapter->tx_queue[cpu];
 	else
-		tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
+		tq = &adapter->tx_queue[cpu % tq_number];
 
 	return tq;
 }

@@ -124,6 +124,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 	u32 buf_size;
 	u32 dw2;
 
+	spin_lock_irq(&tq->tx_lock);
 	dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
 	dw2 |= xdpf->len;
 	ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;

@@ -134,6 +135,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 
 	if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
 		tq->stats.tx_ring_full++;
+		spin_unlock_irq(&tq->tx_lock);
 		return -ENOSPC;
 	}
 

@@ -142,8 +144,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 		tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
 					       xdpf->data, buf_size,
 					       DMA_TO_DEVICE);
-		if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
+		if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
+			spin_unlock_irq(&tq->tx_lock);
 			return -EFAULT;
+		}
 		tbi->map_type |= VMXNET3_MAP_SINGLE;
 	} else { /* XDP buffer from page pool */
 		page = virt_to_page(xdpf->data);

@@ -182,6 +186,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 	dma_wmb();
 	gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
 						  VMXNET3_TXD_GEN);
+	spin_unlock_irq(&tq->tx_lock);
 
 	/* No need to handle the case when tx_num_deferred doesn't reach
 	 * threshold. Backend driver at hypervisor side will poll and reset

@@ -225,6 +230,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
 {
 	struct vmxnet3_adapter *adapter = netdev_priv(dev);
 	struct vmxnet3_tx_queue *tq;
+	struct netdev_queue *nq;
 	int i;
 
 	if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))

@@ -236,6 +242,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
 	if (tq->stopped)
 		return -ENETDOWN;
 
+	nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
+
+	__netif_tx_lock(nq, smp_processor_id());
 	for (i = 0; i < n; i++) {
 		if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
 			tq->stats.xdp_xmit_err++;

@@ -243,6 +252,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
 		}
 	}
 	tq->stats.xdp_xmit += i;
+	__netif_tx_unlock(nq);
 
 	return i;
 }
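Illustrative aside (not part of the diff): the invariant these vmxnet3 hunks establish is that tq->tx_lock is taken at entry to the xmit-frame path and dropped on every exit — including the ring-full and DMA-mapping-error returns, which is where the new spin_unlock_irq() calls land. The driver unlocks explicitly at each return; an equivalent single-exit shape, sketched with a pthread mutex and hypothetical ring stubs:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-ins for the ring operations. */
static bool ring_full(void) { return false; }
static bool map_fails(void) { return false; }
static void queue_descriptor(void) { }

static int xmit_frame_sketch(void)
{
	int ret = 0;

	pthread_mutex_lock(&tx_lock);
	if (ring_full()) {
		ret = -ENOSPC;		/* error path still hits the unlock */
		goto out;
	}
	if (map_fails()) {
		ret = -EFAULT;
		goto out;
	}
	queue_descriptor();
out:
	pthread_mutex_unlock(&tx_lock);
	return ret;
}

int main(void)
{
	return xmit_frame_sketch() ? 1 : 0;
}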
@@ -1700,7 +1700,13 @@ int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count)
 
 	status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count, NULL, 0,
 			&result);
-	if (status < 0)
+
+	/*
+	 * It's either a kernel error or the host observed a connection
+	 * lost. In either case it's not possible communicate with the
+	 * controller and thus enter the error code path.
+	 */
+	if (status < 0 || status == NVME_SC_HOST_PATH_ERROR)
 		return status;
 
 	/*
@@ -781,11 +781,19 @@ restart:
 static void
 nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
 {
+	enum nvme_ctrl_state state;
+	unsigned long flags;
+
 	dev_info(ctrl->ctrl.device,
 		"NVME-FC{%d}: controller connectivity lost. Awaiting "
 		"Reconnect", ctrl->cnum);
 
-	switch (nvme_ctrl_state(&ctrl->ctrl)) {
+	spin_lock_irqsave(&ctrl->lock, flags);
+	set_bit(ASSOC_FAILED, &ctrl->flags);
+	state = nvme_ctrl_state(&ctrl->ctrl);
+	spin_unlock_irqrestore(&ctrl->lock, flags);
+
+	switch (state) {
 	case NVME_CTRL_NEW:
 	case NVME_CTRL_LIVE:
 		/*

@@ -2079,7 +2087,8 @@ done:
 	nvme_fc_complete_rq(rq);
 
 check_error:
-	if (terminate_assoc && ctrl->ctrl.state != NVME_CTRL_RESETTING)
+	if (terminate_assoc &&
+	    nvme_ctrl_state(&ctrl->ctrl) != NVME_CTRL_RESETTING)
 		queue_work(nvme_reset_wq, &ctrl->ioerr_work);
 }
 

@@ -2533,6 +2542,8 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
 static void
 nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
 {
+	enum nvme_ctrl_state state = nvme_ctrl_state(&ctrl->ctrl);
+
 	/*
 	 * if an error (io timeout, etc) while (re)connecting, the remote
 	 * port requested terminating of the association (disconnect_ls)

@@ -2540,9 +2551,8 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
 	 * the controller. Abort any ios on the association and let the
 	 * create_association error path resolve things.
 	 */
-	if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
+	if (state == NVME_CTRL_CONNECTING) {
 		__nvme_fc_abort_outstanding_ios(ctrl, true);
-		set_bit(ASSOC_FAILED, &ctrl->flags);
 		dev_warn(ctrl->ctrl.device,
 			"NVME-FC{%d}: transport error during (re)connect\n",
 			ctrl->cnum);

@@ -2550,7 +2560,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
 	}
 
 	/* Otherwise, only proceed if in LIVE state - e.g. on first error */
-	if (ctrl->ctrl.state != NVME_CTRL_LIVE)
+	if (state != NVME_CTRL_LIVE)
 		return;
 
 	dev_warn(ctrl->ctrl.device,

@@ -3167,12 +3177,18 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 		else
 			ret = nvme_fc_recreate_io_queues(ctrl);
 	}
-	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
-		ret = -EIO;
 	if (ret)
 		goto out_term_aen_ops;
 
-	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+	spin_lock_irqsave(&ctrl->lock, flags);
+	if (!test_bit(ASSOC_FAILED, &ctrl->flags))
+		changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+	else
+		ret = -EIO;
+	spin_unlock_irqrestore(&ctrl->lock, flags);
+
+	if (ret)
+		goto out_term_aen_ops;
 
 	ctrl->ctrl.nr_reconnects = 0;
 

@@ -3578,8 +3594,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list);
 	spin_unlock_irqrestore(&rport->lock, flags);
 
-	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING) ||
-	    !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
 		dev_err(ctrl->ctrl.device,
 			"NVME-FC{%d}: failed to init ctrl state\n", ctrl->cnum);
 		goto fail_ctrl;
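Illustrative aside (not part of the diff): the fc.c hunks close a race by making ASSOC_FAILED and the controller state a single atomic observation — the connectivity-loss path sets the flag and samples the state under ctrl->lock, and create_association decides whether to go LIVE under the same lock. A sketch of that pattern with a pthread mutex standing in for the kernel spinlock; all names are hypothetical:

#include <pthread.h>
#include <stdbool.h>

enum ctrl_state { CTRL_CONNECTING, CTRL_LIVE };

static pthread_mutex_t ctrl_lock = PTHREAD_MUTEX_INITIALIZER;
static bool assoc_failed;
static enum ctrl_state ctrl_state = CTRL_CONNECTING;

/* Connectivity-loss side: flag the failure and sample the state in one
 * critical section, so no LIVE transition can slip in between. */
static enum ctrl_state mark_failed_and_get_state(void)
{
	enum ctrl_state state;

	pthread_mutex_lock(&ctrl_lock);
	assoc_failed = true;
	state = ctrl_state;
	pthread_mutex_unlock(&ctrl_lock);
	return state;
}

/* create_association side: only go LIVE if no failure was flagged,
 * decided under the same lock. */
static int try_go_live(void)
{
	int ret = 0;

	pthread_mutex_lock(&ctrl_lock);
	if (!assoc_failed)
		ctrl_state = CTRL_LIVE;
	else
		ret = -1;	/* -EIO in the driver */
	pthread_mutex_unlock(&ctrl_lock);
	return ret;
}

int main(void)
{
	mark_failed_and_get_state();
	return try_go_live() ? 1 : 0;	/* 1: association already failed */
}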
@@ -2153,14 +2153,6 @@ static int nvme_alloc_host_mem_multi(struct nvme_dev *dev, u64 preferred,
 	return 0;
 
 out_free_bufs:
-	while (--i >= 0) {
-		size_t size = le32_to_cpu(descs[i].size) * NVME_CTRL_PAGE_SIZE;
-
-		dma_free_attrs(dev->dev, size, bufs[i],
-			       le64_to_cpu(descs[i].addr),
-			       DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN);
-	}
-
 	kfree(bufs);
 out_free_descs:
 	dma_free_coherent(dev->dev, descs_size, descs, descs_dma);

@@ -3147,7 +3139,9 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
 		 * because of high power consumption (> 2 Watt) in s2idle
 		 * sleep. Only some boards with Intel CPU are affected.
 		 */
-		if (dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
+		if (dmi_match(DMI_BOARD_NAME, "DN50Z-140HC-YD") ||
+		    dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
+		    dmi_match(DMI_BOARD_NAME, "GXxMRXx") ||
 		    dmi_match(DMI_BOARD_NAME, "PH4PG31") ||
 		    dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1") ||
 		    dmi_match(DMI_BOARD_NAME, "PH6PG01_PH6PG71"))
@@ -792,7 +792,7 @@ static umode_t nvme_tls_attrs_are_visible(struct kobject *kobj,
 	return a->mode;
 }
 
-const struct attribute_group nvme_tls_attrs_group = {
+static const struct attribute_group nvme_tls_attrs_group = {
 	.attrs = nvme_tls_attrs,
 	.is_visible = nvme_tls_attrs_are_visible,
 };
@@ -1068,6 +1068,7 @@ static void nvme_execute_identify_ns_nvm(struct nvmet_req *req)
 		goto out;
 	}
 	status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+	kfree(id);
 out:
 	nvmet_req_complete(req, status);
 }
@@ -287,7 +287,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 	args.subsysnqn = d->subsysnqn;
 	args.hostnqn = d->hostnqn;
 	args.hostid = &d->hostid;
-	args.kato = c->kato;
+	args.kato = le32_to_cpu(c->kato);
 
 	ctrl = nvmet_alloc_ctrl(&args);
 	if (!ctrl)
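Illustrative aside (not part of the diff): c->kato arrives as a little-endian 32-bit value on the wire, so using it without le32_to_cpu() would scramble the keep-alive timeout on big-endian hosts. A portable decode equivalent in effect; the sample value is hypothetical:

#include <stdint.h>
#include <stdio.h>

/* Byte-wise little-endian decode, endianness-independent. */
static uint32_t le32_decode(const uint8_t b[4])
{
	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(void)
{
	const uint8_t kato_wire[4] = { 0x88, 0x13, 0x00, 0x00 };

	/* 0x1388 == 5000 ms keep-alive, regardless of host endianness. */
	printf("kato = %u ms\n", le32_decode(kato_wire));
	return 0;
}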