The following commit has been merged in the master branch:

commit dbe69e43372212527abf48609aba7fc39a6daa27
Merge: a6eaf3850cb171c328a8b0db6d3c79286a1eba9d b6df00789e2831fff7a2c65aa7164b2a4dcbe599
Author: Linus Torvalds torvalds@linux-foundation.org
Date: Wed Jun 30 15:51:09 2021 -0700
Merge tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski: "Core:
- BPF:
  - add syscall program type and libbpf support for generating instructions and bindings for in-kernel BPF loaders (BPF loaders for BPF), this is a stepping stone for signed BPF programs
  - infrastructure to migrate TCP child sockets from one listener to another in the same reuseport group/map to improve flexibility of service hand-off/restart
  - add broadcast support to XDP redirect
- allow bypass of the lockless qdisc to improve performance (for pktgen: +23% with one thread, +44% with 2 threads)
- add a simpler version of "DO_ONCE()" which does not require jump labels, intended for slow-path usage
- virtio/vsock: introduce SOCK_SEQPACKET support
- add getsockopt to retrieve netns cookie
- ip: treat lowest address of an IPv4 subnet as ordinary unicast address, allowing reclaiming of precious IPv4 addresses
- ipv6: use prandom_u32() for ID generation
- ip: add support for more flexible field selection for hashing across multi-path routes (w/ offload to mlxsw)
- icmp: add support for extended RFC 8335 PROBE (ping)
- seg6: add support for SRv6 End.DT46 behavior
- mptcp:
  - DSS checksum support (RFC 8684) to detect middlebox meddling
  - support Connection-time 'C' flag
  - time stamping support
- sctp: Packetization Layer Path MTU Discovery (RFC 8899)
- xfrm: speed up state addition with seq set
- WiFi:
  - hidden AP discovery on 6 GHz and other HE 6 GHz improvements
  - aggregation handling improvements for some drivers
  - minstrel improvements for no-ack frames
  - deferred rate control for TXQs to improve reaction times
  - switch from round robin to virtual time-based airtime scheduler
- add trace points:
  - tcp checksum errors
  - openvswitch - action execution, upcalls
  - socket errors via sk_error_report
Device APIs:
- devlink: add rate API for hierarchical control of max egress rate of virtual devices (VFs, SFs etc.)
- don't require RCU read lock to be held around BPF hooks in NAPI context
- page_pool: generic buffer recycling
New hardware/drivers:
- mobile:
  - iosm: PCIe Driver for Intel M.2 Modem
  - support for Qualcomm MSM8998 (ipa)
- WiFi: Qualcomm QCN9074 and WCN6855 PCI devices
- sparx5: Microchip SparX-5 family of Enterprise Ethernet switches
- Mellanox BlueField Gigabit Ethernet (control NIC of the DPU)
- NXP SJA1110 Automotive Ethernet 10-port switch
- Qualcomm QCA8327 switch support (qca8k)
- Mikrotik 10/25G NIC (atl1c)
Driver changes:
- ACPI support for some MDIO, MAC and PHY devices from Marvell and NXP (our first foray into MAC/PHY description via ACPI)
- HW timestamping (PTP) support: bnxt_en, ice, sja1105, hns3, tja11xx
- Mellanox/Nvidia NIC (mlx5)
  - NIC VF offload of L2 bridging
  - support IRQ distribution to Sub-functions
- Marvell (prestera):
  - add flower and match all
  - devlink trap
  - link aggregation
- Netronome (nfp): connection tracking offload
- Intel 1GE (igc): add AF_XDP support
- Marvell DPU (octeontx2): ingress ratelimit offload
- Google vNIC (gve): new ring/descriptor format support
- Qualcomm mobile (rmnet & ipa): inline checksum offload support
- MediaTek WiFi (mt76)
  - mt7915 MSI support
  - mt7915 Tx status reporting
  - mt7915 thermal sensors support
  - mt7921 decapsulation offload
  - mt7921 enable runtime pm and deep sleep
- Realtek WiFi (rtw88)
  - beacon filter support
  - Tx antenna path diversity support
  - firmware crash information via devcoredump
- Qualcomm WiFi (wcn36xx)
  - Wake-on-WLAN support with magic packets and GTK rekeying
- Micrel PHY (ksz886x/ksz8081): add cable test support"
* tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2168 commits)
  tcp: change ICSK_CA_PRIV_SIZE definition
  tcp_yeah: check struct yeah size at compile time
  gve: DQO: Fix off by one in gve_rx_dqo()
  stmmac: intel: set PCI_D3hot in suspend
  stmmac: intel: Enable PHY WOL option in EHL
  net: stmmac: option to enable PHY WOL with PMT enabled
  net: say "local" instead of "static" addresses in ndo_dflt_fdb_{add,del}
  net: use netdev_info in ndo_dflt_fdb_{add,del}
  ptp: Set lookup cookie when creating a PTP PPS source.
  net: sock: add trace for socket errors
  net: sock: introduce sk_error_report
  net: dsa: replay the local bridge FDB entries pointing to the bridge dev too
  net: dsa: ensure during dsa_fdb_offload_notify that dev_hold and dev_put are on the same dev
  net: dsa: include fdb entries pointing to bridge in the host fdb list
  net: dsa: include bridge addresses which are local in the host fdb list
  net: dsa: sync static FDB entries on foreign interfaces to hardware
  net: dsa: install the host MDB and FDB entries in the master's RX filter
  net: dsa: reference count the FDB addresses at the cross-chip notifier level
  net: dsa: introduce a separate cross-chip notifier type for host FDBs
  net: dsa: reference count the MDB entries at the cross-chip notifier level
  ...
diff --combined Documentation/networking/devlink/devlink-trap.rst index efa5f7f42c88,ef8928c355df..90d1381b88de --- a/Documentation/networking/devlink/devlink-trap.rst +++ b/Documentation/networking/devlink/devlink-trap.rst @@@ -495,8 -495,9 +495,9 @@@ help debug packet drops caused by thes links to the description of driver-specific traps registered by various device drivers:
- * :doc:`netdevsim` - * :doc:`mlxsw` - * :doc:`prestera` + * Documentation/networking/devlink/netdevsim.rst + * Documentation/networking/devlink/mlxsw.rst ++ * Documentation/networking/devlink/prestera.rst
.. _Generic-Packet-Trap-Groups:
diff --combined MAINTAINERS index f4af84a7de42,25956727ff24..88449b7a4c95 --- a/MAINTAINERS +++ b/MAINTAINERS @@@ -973,7 -973,7 +973,7 @@@ F: drivers/net/ethernet/amd/xgbe
AMD SENSOR FUSION HUB DRIVER M: Nehal Shah nehal-bakulchandra.shah@amd.com -M: Sandeep Singh sandeep.singh@amd.com +M: Basavaraj Natikar basavaraj.natikar@amd.com L: linux-input@vger.kernel.org S: Maintained F: Documentation/hid/amd-sfh* @@@ -1811,13 -1811,12 +1811,13 @@@ F: Documentation/devicetree/bindings/ne F: Documentation/devicetree/bindings/pinctrl/cortina,gemini-pinctrl.txt F: Documentation/devicetree/bindings/rtc/faraday,ftrtc010.txt F: arch/arm/mach-gemini/ +F: drivers/crypto/gemini/ F: drivers/net/ethernet/cortina/ F: drivers/pinctrl/pinctrl-gemini.c F: drivers/rtc/rtc-ftrtc010.c
ARM/CZ.NIC TURRIS SUPPORT -M: Marek Behun kabel@kernel.org +M: Marek Behún kabel@kernel.org S: Maintained W: https://www.turris.cz/ F: Documentation/ABI/testing/debugfs-moxtet @@@ -1973,7 -1972,6 +1973,7 @@@ F: Documentation/devicetree/bindings/in F: Documentation/devicetree/bindings/timer/intel,ixp4xx-timer.yaml F: arch/arm/mach-ixp4xx/ F: drivers/clocksource/timer-ixp4xx.c +F: drivers/crypto/ixp4xx_crypto.c F: drivers/gpio/gpio-ixp4xx.c F: drivers/irqchip/irq-ixp4xx.c F: include/linux/irqchip/irq-ixp4xx.h @@@ -4447,18 -4445,6 +4447,18 @@@ F: include/linux/compiler-clang. F: scripts/clang-tools/ K: \b(?i:clang|llvm)\b
+CLANG CONTROL FLOW INTEGRITY SUPPORT +M: Sami Tolvanen samitolvanen@google.com +M: Kees Cook keescook@chromium.org +R: Nathan Chancellor nathan@kernel.org +R: Nick Desaulniers ndesaulniers@google.com +L: clang-built-linux@googlegroups.com +S: Supported +B: https://github.com/ClangBuiltLinux/linux/issues +T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/clang/features +F: include/linux/cfi.h +F: kernel/cfi.c + CLEANCACHE API M: Konrad Rzeszutek Wilk konrad.wilk@oracle.com L: linux-kernel@vger.kernel.org @@@ -4624,12 -4610,6 +4624,12 @@@ S: Supporte F: drivers/video/console/ F: include/linux/console*
+CONTEXT TRACKING +M: Frederic Weisbecker frederic@kernel.org +S: Maintained +F: kernel/context_tracking.c +F: include/linux/context_tracking* + CONTROL GROUP (CGROUP) M: Tejun Heo tj@kernel.org M: Zefan Li lizefan.x@bytedance.com @@@ -5199,14 -5179,7 +5199,14 @@@ DELL WMI NOTIFICATIONS DRIVE M: Matthew Garrett mjg59@srcf.ucam.org M: Pali Rohár pali@kernel.org S: Maintained -F: drivers/platform/x86/dell/dell-wmi.c +F: drivers/platform/x86/dell/dell-wmi-base.c + +DELL WMI HARDWARE PRIVACY SUPPORT +M: Perry Yuan Perry.Yuan@dell.com +L: Dell.Client.Kernel@dell.com +L: platform-driver-x86@vger.kernel.org +S: Maintained +F: drivers/platform/x86/dell/dell-wmi-privacy.c
DELTA ST MEDIA DRIVER M: Hugues Fruchet hugues.fruchet@foss.st.com @@@ -5216,13 -5189,6 +5216,13 @@@ W: https://linuxtv.or T: git git://linuxtv.org/media_tree.git F: drivers/media/platform/sti/delta
+DELTA DPS920AB PSU DRIVER +M: Robert Marko robert.marko@sartura.hr +L: linux-hwmon@vger.kernel.org +S: Maintained +F: Documentation/hwmon/dps920ab.rst +F: drivers/hwmon/pmbus/dps920ab.c + DENALI NAND DRIVER L: linux-mtd@lists.infradead.org S: Orphan @@@ -6479,11 -6445,10 +6479,11 @@@ F: Documentation/filesystems/ecryptfs.r F: fs/ecryptfs/
EDAC-AMD64 -M: Borislav Petkov bp@alien8.de +M: Yazen Ghannam yazen.ghannam@amd.com L: linux-edac@vger.kernel.org -S: Maintained +S: Supported F: drivers/edac/amd64_edac* +F: drivers/edac/mce_amd*
EDAC-ARMADA M: Jan Luebbe jlu@pengutronix.de @@@ -6806,7 -6771,7 +6806,7 @@@ F: include/video/s1d13xxxfb.
EROFS FILE SYSTEM M: Gao Xiang xiang@kernel.org -M: Chao Yu yuchao0@huawei.com +M: Chao Yu chao@kernel.org L: linux-erofs@lists.ozlabs.org S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git @@@ -6847,6 -6812,8 +6847,8 @@@ F: Documentation/devicetree/bindings/ne F: Documentation/devicetree/bindings/net/qca,ar803x.yaml F: Documentation/networking/phy.rst F: drivers/net/mdio/ + F: drivers/net/mdio/acpi_mdio.c + F: drivers/net/mdio/fwnode_mdio.c F: drivers/net/mdio/of_mdio.c F: drivers/net/pcs/ F: drivers/net/phy/ @@@ -7201,7 -7168,7 +7203,7 @@@ F: include/video
FREESCALE CAAM (Cryptographic Acceleration and Assurance Module) DRIVER M: Horia Geantă horia.geanta@nxp.com -M: Aymen Sghaier aymen.sghaier@nxp.com +M: Pankaj Gupta pankaj.gupta@nxp.com L: linux-crypto@vger.kernel.org S: Maintained F: Documentation/devicetree/bindings/crypto/fsl-sec4.txt @@@ -7389,6 -7356,7 +7391,6 @@@ F: drivers/net/ethernet/freescale/fs_en F: include/linux/fs_enet_pd.h
FREESCALE SOC SOUND DRIVERS -M: Timur Tabi timur@kernel.org M: Nicolin Chen nicoleotsuka@gmail.com M: Xiubo Li Xiubo.Lee@gmail.com R: Fabio Estevam festevam@gmail.com @@@ -7591,12 -7559,6 +7593,12 @@@ M: Kieran Bingham <kbingham@kernel.org S: Supported F: scripts/gdb/
+GEMINI CRYPTO DRIVER +M: Corentin Labbe clabbe@baylibre.com +L: linux-crypto@vger.kernel.org +S: Maintained +F: drivers/crypto/gemini/ + GEMTEK FM RADIO RECEIVER DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org @@@ -8812,6 -8774,22 +8814,6 @@@ L: linux-i2c@vger.kernel.or S: Maintained F: drivers/i2c/busses/i2c-icy.c
-IDE SUBSYSTEM -M: "David S. Miller" davem@davemloft.net -L: linux-ide@vger.kernel.org -S: Maintained -Q: http://patchwork.ozlabs.org/project/linux-ide/list/ -T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/ide.git -F: Documentation/ide/ -F: drivers/ide/ -F: include/linux/ide.h - -IDE/ATAPI DRIVERS -L: linux-ide@vger.kernel.org -S: Orphan -F: Documentation/cdrom/ide-cd.rst -F: drivers/ide/ide-cd* - IDEAPAD LAPTOP EXTRAS DRIVER M: Ike Panhc ike.pan@canonical.com L: platform-driver-x86@vger.kernel.org @@@ -9163,6 -9141,7 +9165,7 @@@ F: Documentation/networking/device_driv F: drivers/net/ethernet/intel/ F: drivers/net/ethernet/intel/*/ F: include/linux/avf/virtchnl.h + F: include/linux/net/intel/iidc.h
INTEL FRAMEBUFFER DRIVER (excluding 810 and 815) M: Maik Broemme mbroemme@libmpq.org @@@ -9268,12 -9247,6 +9271,12 @@@ F: Documentation/admin-guide/media/ipu3 F: Documentation/userspace-api/media/v4l/pixfmt-meta-intel-ipu3.rst F: drivers/staging/media/ipu3/
+INTEL IXP4XX CRYPTO SUPPORT +M: Corentin Labbe clabbe@baylibre.com +L: linux-crypto@vger.kernel.org +S: Maintained +F: drivers/crypto/ixp4xx_crypto.c + INTEL IXP4XX QMGR, NPE, ETHERNET and HSS SUPPORT M: Krzysztof Halasa khalasa@piap.pl S: Maintained @@@ -9417,11 -9390,6 +9420,11 @@@ S: Maintaine F: arch/x86/include/asm/intel_scu_ipc.h F: drivers/platform/x86/intel_scu_*
+INTEL SKYLAKE INT3472 ACPI DEVICE DRIVER +M: Daniel Scally djrscally@gmail.com +S: Maintained +F: drivers/platform/x86/intel/int3472/ + INTEL SPEED SELECT TECHNOLOGY M: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com L: platform-driver-x86@vger.kernel.org @@@ -9442,7 -9410,7 +9445,7 @@@ F: include/linux/firmware/intel/stratix F: include/linux/firmware/intel/stratix10-svc-client.h
INTEL TELEMETRY DRIVER -M: Rajneesh Bhardwaj rajneesh.bhardwaj@linux.intel.com +M: Rajneesh Bhardwaj irenic.rajneesh@gmail.com M: "David E. Box" david.e.box@linux.intel.com L: platform-driver-x86@vger.kernel.org S: Maintained @@@ -9487,6 -9455,13 +9490,13 @@@ L: Dell.Client.Kernel@dell.co S: Maintained F: drivers/platform/x86/intel-wmi-thunderbolt.c
+ INTEL WWAN IOSM DRIVER + M: M Chetan Kumar m.chetan.kumar@intel.com + M: Intel Corporation linuxwwan@intel.com + L: netdev@vger.kernel.org + S: Maintained + F: drivers/net/wwan/iosm/ + INTEL(R) TRACE HUB M: Alexander Shishkin alexander.shishkin@linux.intel.com S: Supported @@@ -10030,8 -10005,6 +10040,8 @@@ F: arch/arm64/include/asm/kvm F: arch/arm64/include/uapi/asm/kvm* F: arch/arm64/kvm/ F: include/kvm/arm_* +F: tools/testing/selftests/kvm/*/aarch64/ +F: tools/testing/selftests/kvm/aarch64/
KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips) M: Huacai Chen chenhuacai@kernel.org @@@ -10897,7 -10870,6 +10907,7 @@@ S: Maintaine F: drivers/mailbox/ F: include/linux/mailbox_client.h F: include/linux/mailbox_controller.h +F: include/dt-bindings/mailbox/ F: Documentation/devicetree/bindings/mailbox/
MAILBOX ARM MHUv2 @@@ -10984,7 -10956,7 +10994,7 @@@ F: include/linux/mv643xx.
MARVELL MV88X3310 PHY DRIVER M: Russell King linux@armlinux.org.uk -M: Marek Behun marek.behun@nic.cz +M: Marek Behún kabel@kernel.org L: netdev@vger.kernel.org S: Maintained F: drivers/net/phy/marvell10g.c @@@ -11330,7 -11302,6 +11340,7 @@@ F: include/media/imx.
MEDIA DRIVERS FOR FREESCALE IMX7 M: Rui Miguel Silva rmfrfs@gmail.com +M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained T: git git://linuxtv.org/media_tree.git @@@ -11440,7 -11411,6 +11450,7 @@@ L: linux-renesas-soc@vger.kernel.or S: Supported T: git git://linuxtv.org/media_tree.git F: Documentation/devicetree/bindings/media/renesas,csi2.yaml +F: Documentation/devicetree/bindings/media/renesas,isp.yaml F: Documentation/devicetree/bindings/media/renesas,vin.yaml F: drivers/media/platform/rcar-vin/
@@@ -12030,13 -12000,11 +12040,13 @@@ MICROCHIP ISC DRIVE M: Eugen Hristev eugen.hristev@microchip.com L: linux-media@vger.kernel.org S: Supported -F: Documentation/devicetree/bindings/media/atmel-isc.txt +F: Documentation/devicetree/bindings/media/atmel,isc.yaml +F: Documentation/devicetree/bindings/media/microchip,xisc.yaml F: drivers/media/platform/atmel/atmel-isc-base.c F: drivers/media/platform/atmel/atmel-isc-regs.h F: drivers/media/platform/atmel/atmel-isc.h F: drivers/media/platform/atmel/atmel-sama5d2-isc.c +F: drivers/media/platform/atmel/atmel-sama7g5-isc.c F: include/linux/atmel-isc-media.h
MICROCHIP ISI DRIVER @@@ -12234,7 -12202,7 +12244,7 @@@ M: Maximilian Luz <luzmaximilian@gmail. L: platform-driver-x86@vger.kernel.org S: Maintained W: https://github.com/linux-surface/surface-aggregator-module -C: irc://chat.freenode.net/##linux-surface +C: irc://irc.libera.chat/linux-surface F: Documentation/driver-api/surface_aggregator/ F: drivers/platform/surface/aggregator/ F: drivers/platform/surface/surface_acpi_notify.c @@@ -12430,6 -12398,12 +12440,12 @@@ F: Documentation/userspace-api/media/dr F: drivers/media/pci/meye/ F: include/uapi/linux/meye.h
+ MOTORCOMM PHY DRIVER + M: Peter Geis pgwipeout@gmail.com + L: netdev@vger.kernel.org + S: Maintained + F: drivers/net/phy/motorcomm.c + MOXA SMARTIO/INDUSTIO/INTELLIO SERIAL CARD S: Orphan F: Documentation/driver-api/serial/moxa-smartio.rst @@@ -12644,7 -12618,7 +12660,7 @@@ S: Orpha F: drivers/net/ethernet/natsemi/natsemi.c
NCR 5380 SCSI DRIVERS -M: Finn Thain fthain@telegraphics.com.au +M: Finn Thain fthain@linux-m68k.org M: Michael Schmitz schmitzmic@gmail.com L: linux-scsi@vger.kernel.org S: Maintained @@@ -12701,6 -12675,7 +12717,7 @@@ W: http://www.netfilter.org W: http://www.iptables.org/ W: http://www.nftables.org/ Q: http://patchwork.ozlabs.org/project/netfilter-devel/list/ + C: irc://irc.libera.chat/netfilter T: git git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next.git F: include/linux/netfilter* @@@ -13116,7 -13091,7 +13133,7 @@@ F: Documentation/filesystems/ntfs.rs F: fs/ntfs/
NUBUS SUBSYSTEM -M: Finn Thain fthain@telegraphics.com.au +M: Finn Thain fthain@linux-m68k.org L: linux-m68k@lists.linux-m68k.org S: Maintained F: arch/*/include/asm/nubus.h @@@ -13238,6 -13213,7 +13255,7 @@@ M: Vladimir Oltean <olteanv@gmail.com L: linux-kernel@vger.kernel.org S: Maintained F: drivers/net/dsa/sja1105 + F: drivers/net/pcs/pcs-xpcs-nxp.c
NXP TDA998X DRM DRIVER M: Russell King linux@armlinux.org.uk @@@ -15183,13 -15159,6 +15201,13 @@@ S: Maintaine F: Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt F: drivers/cpufreq/qcom-cpufreq-nvmem.c
+QUALCOMM CRYPTO DRIVERS +M: Thara Gopinath thara.gopinath@linaro.org +L: linux-crypto@vger.kernel.org +L: linux-arm-msm@vger.kernel.org +S: Maintained +F: drivers/crypto/qce/ + QUALCOMM EMAC GIGABIT ETHERNET DRIVER M: Timur Tabi timur@kernel.org L: netdev@vger.kernel.org @@@ -15625,6 -15594,13 +15643,13 @@@ F: include/linux/rpmsg F: include/uapi/linux/rpmsg.h F: samples/rpmsg/
+ REMOTE PROCESSOR MESSAGING (RPMSG) WWAN CONTROL DRIVER + M: Stephan Gerhold stephan@gerhold.net + L: netdev@vger.kernel.org + L: linux-remoteproc@vger.kernel.org + S: Maintained + F: drivers/net/wwan/rpmsg_wwan_ctrl.c + RENESAS CLOCK DRIVERS M: Geert Uytterhoeven geert+renesas@glider.be L: linux-renesas-soc@vger.kernel.org @@@ -15754,14 -15730,6 +15779,14 @@@ F: arch/riscv N: riscv K: riscv
+RISC-V/MICROCHIP POLARFIRE SOC SUPPORT +M: Lewis Hanly lewis.hanly@microchip.com +L: linux-riscv@lists.infradead.org +S: Supported +F: drivers/mailbox/mailbox-mpfs.c +F: drivers/soc/microchip/ +F: include/soc/microchip/mpfs.h + RNBD BLOCK DRIVERS M: Md. Haris Iqbal haris.iqbal@ionos.com M: Jack Wang jinpu.wang@ionos.com @@@ -17063,13 -17031,6 +17088,13 @@@ S: Maintaine F: drivers/ssb/ F: include/linux/ssb/
+SONY IMX208 SENSOR DRIVER +M: Sakari Ailus sakari.ailus@linux.intel.com +L: linux-media@vger.kernel.org +S: Maintained +T: git git://linuxtv.org/media_tree.git +F: drivers/media/i2c/imx208.c + SONY IMX214 SENSOR DRIVER M: Ricardo Ribalda ribalda@kernel.org L: linux-media@vger.kernel.org @@@ -17738,6 -17699,7 +17763,7 @@@ M: Jose Abreu <Jose.Abreu@synopsys.com L: netdev@vger.kernel.org S: Supported F: drivers/net/pcs/pcs-xpcs.c + F: drivers/net/pcs/pcs-xpcs.h F: include/linux/pcs/pcs-xpcs.h
SYNOPSYS DESIGNWARE I2C DRIVER @@@ -18240,13 -18202,6 +18266,13 @@@ W: http://thinkwiki.org/wiki/Ibm-acp T: git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git F: drivers/platform/x86/thinkpad_acpi.c
+THINKPAD LMI DRIVER +M: Mark Pearson markpearson@lenovo.com +L: platform-driver-x86@vger.kernel.org +S: Maintained +F: Documentation/ABI/testing/sysfs-class-firmware-attributes +F: drivers/platform/x86/think-lmi.? + THUNDERBOLT DMA TRAFFIC TEST DRIVER M: Isaac Hazan isaac.hazan@intel.com L: linux-usb@vger.kernel.org @@@ -19649,10 -19604,6 +19675,10 @@@ F: include/dt-bindings/regulator F: include/linux/regulator/ K: regulator_get_optional
+VOLTAGE AND CURRENT REGULATOR IRQ HELPERS +R: Matti Vaittinen matti.vaittinen@fi.rohmeurope.com +F: drivers/regulator/irq_helpers.c + VRF M: David Ahern dsahern@kernel.org L: netdev@vger.kernel.org @@@ -19670,7 -19621,6 +19696,7 @@@ S: Maintaine T: git git://git.kernel.org/pub/scm/linux/kernel/git/pmladek/printk.git F: Documentation/core-api/printk-formats.rst F: lib/test_printf.c +F: lib/test_scanf.c F: lib/vsprintf.c
VT1211 HARDWARE MONITOR DRIVER @@@ -19854,6 -19804,16 +19880,16 @@@ F: Documentation/core-api/workqueue.rs F: include/linux/workqueue.h F: kernel/workqueue.c
+ WWAN DRIVERS + M: Loic Poulain loic.poulain@linaro.org + M: Sergey Ryazanov ryazanov.s.a@gmail.com + R: Johannes Berg johannes@sipsolutions.net + L: netdev@vger.kernel.org + S: Maintained + F: drivers/net/wwan/ + F: include/linux/wwan.h + F: include/uapi/linux/wwan.h + X-POWERS AXP288 PMIC DRIVERS M: Hans de Goede hdegoede@redhat.com S: Maintained diff --combined arch/arm64/net/bpf_jit_comp.c index dd5000da18b8,be873a7da62b..dccf98a37283 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@@ -16,7 -16,6 +16,7 @@@ #include <asm/byteorder.h> #include <asm/cacheflush.h> #include <asm/debug-monitors.h> +#include <asm/insn.h> #include <asm/set_memory.h>
#include "bpf_jit.h" @@@ -179,9 -178,6 +179,6 @@@ static bool is_addsub_imm(u32 imm return !(imm & ~0xfff) || !(imm & ~0xfff000); }
- /* Stack must be multiples of 16B */ - #define STACK_ALIGN(sz) (((sz) + 15) & ~15) - /* Tail call offset to jump into */ #if IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) #define PROLOGUE_OFFSET 8 @@@ -256,7 -252,8 +253,8 @@@ static int build_prologue(struct jit_ct emit(A64_BTI_J, ctx); }
- ctx->stack_size = STACK_ALIGN(prog->aux->stack_depth); + /* Stack must be multiples of 16B */ + ctx->stack_size = round_up(prog->aux->stack_depth, 16);
/* Set up function call stack */ emit(A64_SUB_I(1, A64_SP, A64_SP, ctx->stack_size), ctx); @@@ -488,17 -485,12 +486,12 @@@ static int build_insn(const struct bpf_ break; case BPF_ALU | BPF_DIV | BPF_X: case BPF_ALU64 | BPF_DIV | BPF_X: + emit(A64_UDIV(is64, dst, dst, src), ctx); + break; case BPF_ALU | BPF_MOD | BPF_X: case BPF_ALU64 | BPF_MOD | BPF_X: - switch (BPF_OP(code)) { - case BPF_DIV: - emit(A64_UDIV(is64, dst, dst, src), ctx); - break; - case BPF_MOD: - emit(A64_UDIV(is64, tmp, dst, src), ctx); - emit(A64_MSUB(is64, dst, dst, tmp, src), ctx); - break; - } + emit(A64_UDIV(is64, tmp, dst, src), ctx); + emit(A64_MSUB(is64, dst, dst, tmp, src), ctx); break; case BPF_ALU | BPF_LSH | BPF_X: case BPF_ALU64 | BPF_LSH | BPF_X: diff --combined drivers/net/ethernet/broadcom/bnxt/bnxt.c index 598e5e6ace18,8f185a4883d2..f56245eeef7b --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@@ -49,6 -49,8 +49,8 @@@ #include <linux/log2.h> #include <linux/aer.h> #include <linux/bitmap.h> + #include <linux/ptp_clock_kernel.h> + #include <linux/timecounter.h> #include <linux/cpu_rmap.h> #include <linux/cpumask.h> #include <net/pkt_cls.h> @@@ -63,6 -65,7 +65,7 @@@ #include "bnxt_ethtool.h" #include "bnxt_dcb.h" #include "bnxt_xdp.h" + #include "bnxt_ptp.h" #include "bnxt_vfr.h" #include "bnxt_tc.h" #include "bnxt_devlink.h" @@@ -418,12 -421,25 +421,25 @@@ static netdev_tx_t bnxt_start_xmit(stru vlan_tag_flags |= 1 << TX_BD_CFA_META_TPID_SHIFT; }
- if (unlikely(skb->no_fcs)) { - lflags |= cpu_to_le32(TX_BD_FLAGS_NO_CRC); - goto normal_tx; + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) { + struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; + + if (ptp && ptp->tx_tstamp_en && !skb_is_gso(skb) && + atomic_dec_if_positive(&ptp->tx_avail) >= 0) { + if (!bnxt_ptp_parse(skb, &ptp->tx_seqid)) { + lflags |= cpu_to_le32(TX_BD_FLAGS_STAMP); + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; + } else { + atomic_inc(&bp->ptp_cfg->tx_avail); + } + } }
- if (free_size == bp->tx_ring_size && length <= bp->tx_push_thresh) { + if (unlikely(skb->no_fcs)) + lflags |= cpu_to_le32(TX_BD_FLAGS_NO_CRC); + + if (free_size == bp->tx_ring_size && length <= bp->tx_push_thresh && + !lflags) { struct tx_push_buffer *tx_push_buf = txr->tx_push; struct tx_push_bd *tx_push = &tx_push_buf->push_bd; struct tx_bd_ext *tx_push1 = &tx_push->txbd2; @@@ -590,6 -606,8 +606,8 @@@ normal_tx
netdev_tx_sent_queue(txq, skb->len);
+ skb_tx_timestamp(skb); + /* Sync BD data before updating doorbell */ wmb();
@@@ -619,6 -637,9 +637,9 @@@ tx_done return NETDEV_TX_OK;
tx_dma_error: + if (BNXT_TX_PTP_IS_SET(lflags)) + atomic_inc(&bp->ptp_cfg->tx_avail); + last_frag = i;
/* start back at beginning and unmap skb */ @@@ -653,6 -674,7 +674,7 @@@ static void bnxt_tx_int(struct bnxt *bp
for (i = 0; i < nr_pkts; i++) { struct bnxt_sw_tx_bd *tx_buf; + bool compl_deferred = false; struct sk_buff *skb; int j, last;
@@@ -679,12 -701,21 +701,21 @@@ skb_frag_size(&skb_shinfo(skb)->frags[j]), PCI_DMA_TODEVICE); } + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) { + if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (!bnxt_get_tx_ts_p5(bp, skb)) + compl_deferred = true; + else + atomic_inc(&bp->ptp_cfg->tx_avail); + } + }
next_tx_int: cons = NEXT_TX(cons);
tx_bytes += skb->len; - dev_kfree_skb_any(skb); + if (!compl_deferred) + dev_kfree_skb_any(skb); }
netdev_tx_completed_queue(txq, nr_pkts, tx_bytes); @@@ -1706,9 -1737,9 +1737,9 @@@ static int bnxt_rx_pkt(struct bnxt *bp u8 *data_ptr, agg_bufs, cmp_type; dma_addr_t dma_addr; struct sk_buff *skb; + u32 flags, misc; void *data; int rc = 0; - u32 misc;
rxcmp = (struct rx_cmp *) &cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)]; @@@ -1806,7 -1837,8 +1837,8 @@@ goto next_rx_no_len; }
- len = le32_to_cpu(rxcmp->rx_cmp_len_flags_type) >> RX_CMP_LEN_SHIFT; + flags = le32_to_cpu(rxcmp->rx_cmp_len_flags_type); + len = flags >> RX_CMP_LEN_SHIFT; dma_addr = rx_buf->mapping;
if (bnxt_rx_xdp(bp, rxr, cons, data, &data_ptr, &len, event)) { @@@ -1883,6 -1915,24 +1915,24 @@@ } }
+ if (unlikely((flags & RX_CMP_FLAGS_ITYPES_MASK) == + RX_CMP_FLAGS_ITYPE_PTP_W_TS)) { + if (bp->flags & BNXT_FLAG_CHIP_P5) { + u32 cmpl_ts = le32_to_cpu(rxcmp1->rx_cmp_timestamp); + u64 ns, ts; + + if (!bnxt_get_rx_ts_p5(bp, &ts, cmpl_ts)) { + struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; + + spin_lock_bh(&ptp->ptp_lock); + ns = timecounter_cyc2time(&ptp->tc, ts); + spin_unlock_bh(&ptp->ptp_lock); + memset(skb_hwtstamps(skb), 0, + sizeof(*skb_hwtstamps(skb))); + skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ns); + } + } + } bnxt_deliver_skb(bp, bnapi, skb); rc = 1;
@@@ -2184,7 -2234,6 +2234,7 @@@ static int bnxt_hwrm_handler(struct bnx case CMPL_BASE_TYPE_HWRM_ASYNC_EVENT: bnxt_async_event_process(bp, (struct hwrm_async_event_cmpl *)txcmp); + break;
default: break; @@@ -7392,6 -7441,56 +7442,56 @@@ hwrm_func_resc_qcaps_exit return rc; }
+ /* bp->hwrm_cmd_lock already held. */ + static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp) + { + struct hwrm_port_mac_ptp_qcfg_output *resp = bp->hwrm_cmd_resp_addr; + struct hwrm_port_mac_ptp_qcfg_input req = {0}; + struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; + u8 flags; + int rc; + + if (bp->hwrm_spec_code < 0x10801) { + rc = -ENODEV; + goto no_ptp; + } + + req.port_id = cpu_to_le16(bp->pf.port_id); + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_PORT_MAC_PTP_QCFG, -1, -1); + rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); + if (rc) + goto no_ptp; + + flags = resp->flags; + if (!(flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_HWRM_ACCESS)) { + rc = -ENODEV; + goto no_ptp; + } + if (!ptp) { + ptp = kzalloc(sizeof(*ptp), GFP_KERNEL); + if (!ptp) + return -ENOMEM; + ptp->bp = bp; + bp->ptp_cfg = ptp; + } + if (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_PARTIAL_DIRECT_ACCESS_REF_CLOCK) { + ptp->refclk_regs[0] = le32_to_cpu(resp->ts_ref_clock_reg_lower); + ptp->refclk_regs[1] = le32_to_cpu(resp->ts_ref_clock_reg_upper); + } else if (bp->flags & BNXT_FLAG_CHIP_P5) { + ptp->refclk_regs[0] = BNXT_TS_REG_TIMESYNC_TS0_LOWER; + ptp->refclk_regs[1] = BNXT_TS_REG_TIMESYNC_TS0_UPPER; + } else { + rc = -ENODEV; + goto no_ptp; + } + return 0; + + no_ptp: + kfree(ptp); + bp->ptp_cfg = NULL; + return rc; + } + static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) { int rc = 0; @@@ -7463,6 -7562,8 +7563,8 @@@ bp->flags &= ~BNXT_FLAG_WOL_CAP; if (flags & FUNC_QCAPS_RESP_FLAGS_WOL_MAGICPKT_SUPPORTED) bp->flags |= BNXT_FLAG_WOL_CAP; + if (flags & FUNC_QCAPS_RESP_FLAGS_PTP_SUPPORTED) + __bnxt_hwrm_ptp_qcfg(bp); } else { #ifdef CONFIG_BNXT_SRIOV struct bnxt_vf_info *vf = &bp->vf; @@@ -10021,6 -10122,7 +10123,7 @@@ static int __bnxt_open_nic(struct bnxt } }
+ bnxt_ptp_start(bp); rc = bnxt_init_nic(bp, irq_re_init); if (rc) { netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc); @@@ -10336,6 -10438,12 +10439,12 @@@ static int bnxt_ioctl(struct net_devic return bnxt_hwrm_port_phy_write(bp, mdio->phy_id, mdio->reg_num, mdio->val_in);
+ case SIOCSHWTSTAMP: + return bnxt_hwtstamp_set(dev, ifr); + + case SIOCGHWTSTAMP: + return bnxt_hwtstamp_get(dev, ifr); + default: /* do nothing */ break; @@@ -12552,6 -12660,8 +12661,8 @@@ static void bnxt_remove_one(struct pci_
if (BNXT_PF(bp)) devlink_port_type_clear(&bp->dl_port); + + bnxt_ptp_clear(bp); pci_disable_pcie_error_reporting(pdev); unregister_netdev(dev); clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); @@@ -12572,6 -12682,8 +12683,8 @@@ bnxt_dcb_free(bp); kfree(bp->edev); bp->edev = NULL; + kfree(bp->ptp_cfg); + bp->ptp_cfg = NULL; kfree(bp->fw_health); bp->fw_health = NULL; bnxt_cleanup_pci(bp); @@@ -13133,6 -13245,11 +13246,11 @@@ static int bnxt_init_one(struct pci_de rc); }
+ if (bnxt_ptp_init(bp)) { + netdev_warn(dev, "PTP initialization failed.\n"); + kfree(bp->ptp_cfg); + bp->ptp_cfg = NULL; + } bnxt_inv_fw_health_reg(bp); bnxt_dl_register(bp);
@@@ -13162,6 -13279,8 +13280,8 @@@ init_err_pci_clean bnxt_free_hwrm_short_cmd_req(bp); bnxt_free_hwrm_resources(bp); bnxt_ethtool_free(bp); + kfree(bp->ptp_cfg); + bp->ptp_cfg = NULL; kfree(bp->fw_health); bp->fw_health = NULL; bnxt_cleanup_pci(bp); diff --combined drivers/net/ethernet/neterion/vxge/vxge-config.c index b47d74743f5a,38a273c4d593..a3204a7ef750 --- a/drivers/net/ethernet/neterion/vxge/vxge-config.c +++ b/drivers/net/ethernet/neterion/vxge/vxge-config.c @@@ -3784,7 -3784,6 +3784,7 @@@ vxge_hw_rts_rth_data0_data1_get(u32 j, VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_ENTRY_EN | VXGE_HW_RTS_ACCESS_STEER_DATA1_RTH_ITEM1_BUCKET_DATA( itable[j]); + return; default: return; } @@@ -4885,7 -4884,7 +4885,7 @@@ vpath_open_exit1 }
/** - * vxge_hw_vpath_rx_doorbell_post - Close the handle got from previous vpath + * vxge_hw_vpath_rx_doorbell_init - Close the handle got from previous vpath * (vpath) open * @vp: Handle got from previous vpath open * diff --combined drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c index 918220ad0d53,b307264e59cf..a4fa507903ee --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c @@@ -319,10 -319,8 +319,8 @@@ int qlcnic_read_mac_addr(struct qlcnic_ static void qlcnic_delete_adapter_mac(struct qlcnic_adapter *adapter) { struct qlcnic_mac_vlan_list *cur; - struct list_head *head;
- list_for_each(head, &adapter->mac_list) { - cur = list_entry(head, struct qlcnic_mac_vlan_list, list); + list_for_each_entry(cur, &adapter->mac_list, list) { if (ether_addr_equal_unaligned(adapter->mac_addr, cur->mac_addr)) { qlcnic_sre_macaddr_change(adapter, cur->mac_addr, 0, QLCNIC_MAC_DEL); @@@ -3344,9 -3342,6 +3342,6 @@@ qlcnic_can_start_firmware(struct qlcnic do { msleep(1000); prev_state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); - - if (prev_state == QLCNIC_DEV_QUISCENT) - continue; } while ((prev_state != QLCNIC_DEV_READY) && --dev_init_timeo);
if (!dev_init_timeo) { @@@ -3456,7 -3451,6 +3451,7 @@@ wait_npar adapter->fw_wait_cnt = 0; return; } + break; case QLCNIC_DEV_FAILED: break; default: diff --combined drivers/net/ethernet/qualcomm/qca_spi.c index 0a6b8112b535,79fe3ec4e581..b64c254e00ba --- a/drivers/net/ethernet/qualcomm/qca_spi.c +++ b/drivers/net/ethernet/qualcomm/qca_spi.c @@@ -504,8 -504,12 +504,12 @@@ qcaspi_qca7k_sync(struct qcaspi *qca, i qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature); qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature); if (signature != QCASPI_GOOD_SIGNATURE) { + if (qca->sync == QCASPI_SYNC_READY) + qca->stats.bad_signature++; + qca->sync = QCASPI_SYNC_UNKNOWN; netdev_dbg(qca->net_dev, "sync: got CPU on, but signature was invalid, restart\n"); + return; } else { /* ensure that the WRBUF is empty */ qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA, @@@ -523,10 -527,14 +527,14 @@@
switch (qca->sync) { case QCASPI_SYNC_READY: - /* Read signature, if not valid go to unknown state. */ + /* Check signature twice, if not valid go to unknown state. */ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature); + if (signature != QCASPI_GOOD_SIGNATURE) + qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature); + if (signature != QCASPI_GOOD_SIGNATURE) { qca->sync = QCASPI_SYNC_UNKNOWN; + qca->stats.bad_signature++; netdev_dbg(qca->net_dev, "sync: bad signature, restart\n"); /* don't reset right away */ return; @@@ -653,7 -661,8 +661,7 @@@ qcaspi_intr_handler(int irq, void *data struct qcaspi *qca = data;
qca->intr_req++; - if (qca->spi_thread && - qca->spi_thread->state != TASK_RUNNING) + if (qca->spi_thread) wake_up_process(qca->spi_thread);
return IRQ_HANDLED; @@@ -776,7 -785,8 +784,7 @@@ qcaspi_netdev_xmit(struct sk_buff *skb
netif_trans_update(dev);
- if (qca->spi_thread && - qca->spi_thread->state != TASK_RUNNING) + if (qca->spi_thread) wake_up_process(qca->spi_thread);
return NETDEV_TX_OK; diff --combined drivers/net/hyperv/hyperv_net.h index b11aa68b44ec,9e5eee44f7d3..bc48855dff10 --- a/drivers/net/hyperv/hyperv_net.h +++ b/drivers/net/hyperv/hyperv_net.h @@@ -895,16 -895,9 +895,16 @@@ static inline u32 netvsc_rqstor_size(un ringbytes / NETVSC_MIN_IN_MSG_SIZE; }
+/* XFER PAGE packets can specify a maximum of 375 ranges for NDIS >= 6.0 + * and a maximum of 64 ranges for NDIS < 6.0 with no RSC; with RSC, this + * limit is raised to 562 (= NVSP_RSC_MAX). + */ +#define NETVSC_MAX_XFER_PAGE_RANGES NVSP_RSC_MAX #define NETVSC_XFER_HEADER_SIZE(rng_cnt) \ (offsetof(struct vmtransfer_page_packet_header, ranges) + \ (rng_cnt) * sizeof(struct vmtransfer_page_range)) +#define NETVSC_MAX_PKT_SIZE (NETVSC_XFER_HEADER_SIZE(NETVSC_MAX_XFER_PAGE_RANGES) + \ + sizeof(struct nvsp_message) + (sizeof(u32) * VRSS_SEND_TAB_SIZE))
struct multi_send_data { struct sk_buff *skb; /* skb containing the pkt */ @@@ -1170,6 -1163,7 +1170,7 @@@ struct rndis_set_request u32 info_buflen; u32 info_buf_offset; u32 dev_vc_handle; + u8 info_buf[]; };
/* Response to NdisSetRequest */ diff --combined drivers/net/hyperv/rndis_filter.c index 983bf362466a,033ed6ed78c5..f6c9c2a670f9 --- a/drivers/net/hyperv/rndis_filter.c +++ b/drivers/net/hyperv/rndis_filter.c @@@ -1051,10 -1051,8 +1051,8 @@@ static int rndis_filter_set_packet_filt set = &request->request_msg.msg.set_req; set->oid = RNDIS_OID_GEN_CURRENT_PACKET_FILTER; set->info_buflen = sizeof(u32); - set->info_buf_offset = sizeof(struct rndis_set_request); - - memcpy((void *)(unsigned long)set + sizeof(struct rndis_set_request), - &new_filter, sizeof(u32)); + set->info_buf_offset = offsetof(typeof(*set), info_buf); + memcpy(set->info_buf, &new_filter, sizeof(u32));
ret = rndis_filter_send_request(dev, request); if (ret == 0) { @@@ -1259,11 -1257,7 +1257,11 @@@ static void netvsc_sc_open(struct vmbus /* Set the channel before opening.*/ nvchan->channel = new_sc;
+ new_sc->next_request_id_callback = vmbus_next_request_id; + new_sc->request_addr_callback = vmbus_request_addr; new_sc->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes); + new_sc->max_pkt_size = NETVSC_MAX_PKT_SIZE; + ret = vmbus_open(new_sc, netvsc_ring_bytes, netvsc_ring_bytes, NULL, 0, netvsc_channel_cb, nvchan); diff --combined include/linux/acpi.h index c8ec7803b1b6,6ace3a0f1415..b338613fb536 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@@ -132,7 -132,6 +132,7 @@@ enum acpi_address_range_id union acpi_subtable_headers { struct acpi_subtable_header common; struct acpi_hmat_structure hmat; + struct acpi_prmt_module_header prmt; };
typedef int (*acpi_tbl_table_handler)(struct acpi_table_header *table); @@@ -551,7 -550,6 +551,7 @@@ acpi_status acpi_run_osc(acpi_handle ha #define OSC_SB_OSLPI_SUPPORT 0x00000100 #define OSC_SB_CPC_DIVERSE_HIGH_SUPPORT 0x00001000 #define OSC_SB_GENERIC_INITIATOR_SUPPORT 0x00002000 +#define OSC_SB_PRM_SUPPORT 0x00020000 #define OSC_SB_NATIVE_USB4_SUPPORT 0x00040000
extern bool osc_sb_apei_support_acked; @@@ -668,6 -666,7 +668,6 @@@ extern bool acpi_driver_match_device(st const struct device_driver *drv); int acpi_device_uevent_modalias(struct device *, struct kobj_uevent_env *); int acpi_device_modalias(struct device *, char *, int); -void acpi_walk_dep_device_list(acpi_handle handle);
struct platform_device *acpi_create_platform_device(struct acpi_device *, struct property_entry *); @@@ -711,6 -710,8 +711,8 @@@ static inline u64 acpi_arch_get_root_po } #endif
+ int acpi_get_local_address(acpi_handle handle, u32 *addr); + #else /* !CONFIG_ACPI */
#define acpi_disabled 1 @@@ -766,7 -767,7 +768,7 @@@ static inline bool is_acpi_device_node( return false; }
-static inline struct acpi_device *to_acpi_device_node(struct fwnode_handle *fwnode) +static inline struct acpi_device *to_acpi_device_node(const struct fwnode_handle *fwnode) { return NULL; } @@@ -776,12 -777,12 +778,12 @@@ static inline bool is_acpi_data_node(co return false; }
-static inline struct acpi_data_node *to_acpi_data_node(struct fwnode_handle *fwnode) +static inline struct acpi_data_node *to_acpi_data_node(const struct fwnode_handle *fwnode) { return NULL; }
-static inline bool acpi_data_node_match(struct fwnode_handle *fwnode, +static inline bool acpi_data_node_match(const struct fwnode_handle *fwnode, const char *name) { return false; @@@ -912,7 -913,7 +914,7 @@@ acpi_create_platform_device(struct acpi return NULL; }
-static inline bool acpi_dma_supported(struct acpi_device *adev) +static inline bool acpi_dma_supported(const struct acpi_device *adev) { return false; } @@@ -966,6 -967,11 +968,11 @@@ static inline struct acpi_device *acpi_ return NULL; }
+ static inline int acpi_get_local_address(acpi_handle handle, u32 *addr) + { + return -ENODEV; + } + #endif /* !CONFIG_ACPI */
#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC @@@ -1005,7 -1011,6 +1012,7 @@@ int acpi_dev_resume(struct device *dev) int acpi_subsys_runtime_suspend(struct device *dev); int acpi_subsys_runtime_resume(struct device *dev); int acpi_dev_pm_attach(struct device *dev, bool power_on); +bool acpi_storage_d3(struct device *dev); #else static inline int acpi_subsys_runtime_suspend(struct device *dev) { return 0; } static inline int acpi_subsys_runtime_resume(struct device *dev) { return 0; } @@@ -1013,10 -1018,6 +1020,10 @@@ static inline int acpi_dev_pm_attach(st { return 0; } +static inline bool acpi_storage_d3(struct device *dev) +{ + return false; +} #endif
#if defined(CONFIG_ACPI) && defined(CONFIG_PM_SLEEP) @@@ -1102,8 -1103,6 +1109,8 @@@ void __acpi_handle_debug(struct _ddebu #if defined(CONFIG_ACPI) && defined(CONFIG_GPIOLIB) bool acpi_gpio_get_irq_resource(struct acpi_resource *ares, struct acpi_resource_gpio **agpio); +bool acpi_gpio_get_io_resource(struct acpi_resource *ares, + struct acpi_resource_gpio **agpio); int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index); #else static inline bool acpi_gpio_get_irq_resource(struct acpi_resource *ares, @@@ -1111,11 -1110,6 +1118,11 @@@ { return false; } +static inline bool acpi_gpio_get_io_resource(struct acpi_resource *ares, + struct acpi_resource_gpio **agpio) +{ + return false; +} static inline int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index) { diff --combined include/linux/device.h index 959cb9d2c9ab,8f0ec3081a24..4cd200f8b47a --- a/include/linux/device.h +++ b/include/linux/device.h @@@ -399,7 -399,7 +399,7 @@@ struct dev_links_info * along with subsystem-level and driver-level callbacks. * @em_pd: device's energy model performance domain * @pins: For device pin management. - * See Documentation/driver-api/pinctl.rst for details. + * See Documentation/driver-api/pin-control.rst for details. * @msi_list: Hosts MSI descriptors * @msi_domain: The generic MSI domain this device is using. * @numa_node: NUMA node this device is close to. @@@ -817,6 -817,7 +817,7 @@@ int device_online(struct device *dev) void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode); void set_secondary_fwnode(struct device *dev, struct fwnode_handle *fwnode); void device_set_of_node_from_dev(struct device *dev, const struct device *dev2); + void device_set_node(struct device *dev, struct fwnode_handle *fwnode);
static inline int dev_num_vf(struct device *dev) { diff --combined include/linux/kernel.h index bf950621febf,e73f3bc3dba5..f2ad8a53f71f --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@@ -71,6 -71,18 +71,18 @@@ */ #define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+ /** + * upper_16_bits - return bits 16-31 of a number + * @n: the number we're accessing + */ + #define upper_16_bits(n) ((u16)((n) >> 16)) + + /** + * lower_16_bits - return bits 0-15 of a number + * @n: the number we're accessing + */ + #define lower_16_bits(n) ((u16)((n) & 0xffff)) + struct completion; struct pt_regs; struct user; @@@ -357,8 -369,6 +369,8 @@@ int sscanf(const char *, const char *, extern __scanf(2, 0) int vsscanf(const char *, const char *, va_list);
+extern int no_hash_pointers_enable(char *str); + extern int get_option(char **str, int *pint); extern char *get_options(const char *str, int nints, int *ints); extern unsigned long long memparse(const char *ptr, char **retptr); diff --combined include/linux/mm.h index 7ec25dd2f8a9,6cf4c6842ff0..b8bc39237dac --- a/include/linux/mm.h +++ b/include/linux/mm.h @@@ -46,7 -46,7 +46,7 @@@ extern int sysctl_page_lock_unfairness
void init_mm_internals(void);
-#ifndef CONFIG_NEED_MULTIPLE_NODES /* Don't use mapnrs, do it properly */ +#ifndef CONFIG_NUMA /* Don't use mapnrs, do it properly */ extern unsigned long max_mapnr;
static inline void set_max_mapnr(unsigned long limit) @@@ -124,6 -124,16 +124,6 @@@ extern int mmap_rnd_compat_bits __read_ #define lm_alias(x) __va(__pa_symbol(x)) #endif
-/* - * With CONFIG_CFI_CLANG, the compiler replaces function addresses in - * instrumented C code with jump table addresses. Architectures that - * support CFI can define this macro to return the actual function address - * when needed. - */ -#ifndef function_nocfi -#define function_nocfi(x) (x) -#endif - /* * To prevent common memory management code establishing * a zero page mapping on a read fault. @@@ -224,11 -234,7 +224,11 @@@ int overcommit_policy_handler(struct ct int __add_to_page_cache_locked(struct page *page, struct address_space *mapping, pgoff_t index, gfp_t gfp, void **shadowp);
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP) #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n)) +#else +#define nth_page(page,n) ((page) + (n)) +#endif
/* to align the pointer to the (next) page boundary */ #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE) @@@ -1335,7 -1341,7 +1335,7 @@@ static inline bool page_needs_cow_for_d if (!is_cow_mapping(vma->vm_flags)) return false;
- if (!atomic_read(&vma->vm_mm->has_pinned)) + if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)) return false;
return page_maybe_dma_pinned(page); @@@ -1662,10 -1668,11 +1662,11 @@@ struct address_space *page_mapping(stru static inline bool page_is_pfmemalloc(const struct page *page) { /* - * Page index cannot be this large so this must be - * a pfmemalloc page. + * lru.next has bit 1 set if the page is allocated from the + * pfmemalloc reserves. Callers may simply overwrite it if + * they do not need to preserve that information. */ - return page->index == -1UL; + return (uintptr_t)page->lru.next & BIT(1); }
/* @@@ -1674,12 -1681,12 +1675,12 @@@ */ static inline void set_page_pfmemalloc(struct page *page) { - page->index = -1UL; + page->lru.next = (void *)BIT(1); }
static inline void clear_page_pfmemalloc(struct page *page) { - page->index = 0; + page->lru.next = NULL; }
/* @@@ -1703,8 -1710,8 +1704,8 @@@ extern bool can_do_mlock(void) #else static inline bool can_do_mlock(void) { return false; } #endif -extern int user_shm_lock(size_t, struct user_struct *); -extern void user_shm_unlock(size_t, struct user_struct *); +extern int user_shm_lock(size_t, struct ucounts *); +extern void user_shm_unlock(size_t, struct ucounts *);
/* * Parameter block passed down to zap_pte_range in exceptional cases. @@@ -1844,8 -1851,12 +1845,8 @@@ extern int try_to_release_page(struct p extern void do_invalidatepage(struct page *page, unsigned int offset, unsigned int length);
-void __set_page_dirty(struct page *, struct address_space *, int warn); -int __set_page_dirty_nobuffers(struct page *page); -int __set_page_dirty_no_writeback(struct page *page); int redirty_page_for_writepage(struct writeback_control *wbc, struct page *page); -void account_page_dirtied(struct page *page, struct address_space *mapping); void account_page_cleaned(struct page *page, struct address_space *mapping, struct bdi_writeback *wb); int set_page_dirty(struct page *page); @@@ -2410,7 -2421,7 +2411,7 @@@ static inline unsigned long free_initme extern char __init_begin[], __init_end[];
return free_reserved_area(&__init_begin, &__init_end, - poison, "unused kernel"); + poison, "unused kernel image (initmem)"); }
static inline unsigned long get_num_physpages(void) @@@ -2450,7 -2461,7 +2451,7 @@@ extern void get_pfn_range_for_nid(unsig unsigned long *start_pfn, unsigned long *end_pfn); extern unsigned long find_min_pfn_with_active_regions(void);
-#ifndef CONFIG_NEED_MULTIPLE_NODES +#ifndef CONFIG_NUMA static inline int early_pfn_to_nid(unsigned long pfn) { return 0; @@@ -2464,6 -2475,7 +2465,6 @@@ extern void set_dma_reserve(unsigned lo extern void memmap_init_range(unsigned long, int, unsigned long, unsigned long, unsigned long, enum meminit_context, struct vmem_altmap *, int migratetype); -extern void memmap_init_zone(struct zone *zone); extern void setup_per_zone_wmarks(void); extern int __meminit init_per_zone_wmark_min(void); extern void mem_init(void); @@@ -2670,45 -2682,17 +2671,45 @@@ extern struct vm_area_struct * find_vma extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr, struct vm_area_struct **pprev);
-/* Look up the first VMA which intersects the interval start_addr..end_addr-1, - NULL if none. Assume start_addr < end_addr. */ -static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr) +/** + * find_vma_intersection() - Look up the first VMA which intersects the interval + * @mm: The process address space. + * @start_addr: The inclusive start user address. + * @end_addr: The exclusive end user address. + * + * Returns: The first VMA within the provided range, %NULL otherwise. Assumes + * start_addr < end_addr. + */ +static inline +struct vm_area_struct *find_vma_intersection(struct mm_struct *mm, + unsigned long start_addr, + unsigned long end_addr) { - struct vm_area_struct * vma = find_vma(mm,start_addr); + struct vm_area_struct *vma = find_vma(mm, start_addr);
if (vma && end_addr <= vma->vm_start) vma = NULL; return vma; }
+/** + * vma_lookup() - Find a VMA at a specific address + * @mm: The process address space. + * @addr: The user address. + * + * Return: The vm_area_struct at the given address, %NULL otherwise. + */ +static inline +struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr) +{ + struct vm_area_struct *vma = find_vma(mm, addr); + + if (vma && addr < vma->vm_start) + vma = NULL; + + return vma; +} + static inline unsigned long vm_start_gap(struct vm_area_struct *vma) { unsigned long vm_start = vma->vm_start; diff --combined include/linux/mm_types.h index b66d0225414e,862f88a8c28a..d33d97c69da9 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@@ -96,6 -96,13 +96,13 @@@ struct page unsigned long private; }; struct { /* page_pool used by netstack */ + /** + * @pp_magic: magic value to avoid recycling non + * page_pool allocated pages. + */ + unsigned long pp_magic; + struct page_pool *pp; + unsigned long _pp_mapping_pad; /** * @dma_addr: might require a 64-bit value on * 32-bit architectures. @@@ -435,6 -442,16 +442,6 @@@ struct mm_struct */ atomic_t mm_count;
- /** - * @has_pinned: Whether this mm has pinned any pages. This can - * be either replaced in the future by @pinned_vm when it - * becomes stable, or grow into a counter on its own. We're - * aggresive on this bit now - even if the pinned pages were - * unpinned later on, we'll still keep this bit set for the - * lifecycle of this mm just for simplicity. - */ - atomic_t has_pinned; - #ifdef CONFIG_MMU atomic_long_t pgtables_bytes; /* PTE page table pages */ #endif diff --combined include/linux/printk.h index d796183f26c9,885379a1c9a1..e834d78f0478 --- a/include/linux/printk.h +++ b/include/linux/printk.h @@@ -8,6 -8,7 +8,7 @@@ #include <linux/linkage.h> #include <linux/cache.h> #include <linux/ratelimit_types.h> + #include <linux/once_lite.h>
extern const char linux_banner[]; extern const char linux_proc_banner[]; @@@ -206,7 -207,6 +207,7 @@@ void __init setup_log_buf(int early) __printf(1, 2) void dump_stack_set_arch_desc(const char *fmt, ...); void dump_stack_print_info(const char *log_lvl); void show_regs_print_info(const char *log_lvl); +extern asmlinkage void dump_stack_lvl(const char *log_lvl) __cold; extern asmlinkage void dump_stack(void) __cold; extern void printk_safe_flush(void); extern void printk_safe_flush_on_panic(void); @@@ -270,10 -270,6 +271,10 @@@ static inline void show_regs_print_info { }
+static inline void dump_stack_lvl(const char *log_lvl) +{ +} + static inline void dump_stack(void) { } @@@ -287,47 -283,6 +288,47 @@@ static inline void printk_safe_flush_on } #endif
+#ifdef CONFIG_SMP +extern int __printk_cpu_trylock(void); +extern void __printk_wait_on_cpu_lock(void); +extern void __printk_cpu_unlock(void); + +/** + * printk_cpu_lock_irqsave() - Acquire the printk cpu-reentrant spinning + * lock and disable interrupts. + * @flags: Stack-allocated storage for saving local interrupt state, + * to be passed to printk_cpu_unlock_irqrestore(). + * + * If the lock is owned by another CPU, spin until it becomes available. + * Interrupts are restored while spinning. + */ +#define printk_cpu_lock_irqsave(flags) \ + for (;;) { \ + local_irq_save(flags); \ + if (__printk_cpu_trylock()) \ + break; \ + local_irq_restore(flags); \ + __printk_wait_on_cpu_lock(); \ + } + +/** + * printk_cpu_unlock_irqrestore() - Release the printk cpu-reentrant spinning + * lock and restore interrupts. + * @flags: Caller's saved interrupt state, from printk_cpu_lock_irqsave(). + */ +#define printk_cpu_unlock_irqrestore(flags) \ + do { \ + __printk_cpu_unlock(); \ + local_irq_restore(flags); \ + } while (0) \ + +#else + +#define printk_cpu_lock_irqsave(flags) ((void)flags) +#define printk_cpu_unlock_irqrestore(flags) ((void)flags) + +#endif /* CONFIG_SMP */ + extern int kptr_restrict;
/** @@@ -482,27 -437,9 +483,9 @@@
#ifdef CONFIG_PRINTK #define printk_once(fmt, ...) \ - ({ \ - static bool __section(".data.once") __print_once; \ - bool __ret_print_once = !__print_once; \ - \ - if (!__print_once) { \ - __print_once = true; \ - printk(fmt, ##__VA_ARGS__); \ - } \ - unlikely(__ret_print_once); \ - }) + DO_ONCE_LITE(printk, fmt, ##__VA_ARGS__) #define printk_deferred_once(fmt, ...) \ - ({ \ - static bool __section(".data.once") __print_once; \ - bool __ret_print_once = !__print_once; \ - \ - if (!__print_once) { \ - __print_once = true; \ - printk_deferred(fmt, ##__VA_ARGS__); \ - } \ - unlikely(__ret_print_once); \ - }) + DO_ONCE_LITE(printk_deferred, fmt, ##__VA_ARGS__) #else #define printk_once(fmt, ...) \ no_printk(fmt, ##__VA_ARGS__) diff --combined net/core/dev.c index 2512f672bf8a,316b4032317e..c253c2aafe97 --- a/net/core/dev.c +++ b/net/core/dev.c @@@ -148,6 -148,7 +148,7 @@@ #include <net/devlink.h> #include <linux/pm_runtime.h> #include <linux/prandom.h> + #include <linux/once_lite.h>
#include "net-sysfs.h"
@@@ -3487,13 -3488,16 +3488,16 @@@ EXPORT_SYMBOL(__skb_gso_segment)
/* Take action when hardware reception checksum errors are detected. */ #ifdef CONFIG_BUG + static void do_netdev_rx_csum_fault(struct net_device *dev, struct sk_buff *skb) + { + pr_err("%s: hw csum failure\n", dev ? dev->name : "<unknown>"); + skb_dump(KERN_ERR, skb, true); + dump_stack(); + } + void netdev_rx_csum_fault(struct net_device *dev, struct sk_buff *skb) { - if (net_ratelimit()) { - pr_err("%s: hw csum failure\n", dev ? dev->name : "<unknown>"); - skb_dump(KERN_ERR, skb, true); - dump_stack(); - } + DO_ONCE_LITE(do_netdev_rx_csum_fault, dev, skb); } EXPORT_SYMBOL(netdev_rx_csum_fault); #endif @@@ -3852,10 -3856,33 +3856,33 @@@ static inline int __dev_xmit_skb(struc qdisc_calculate_pkt_len(skb, q);
if (q->flags & TCQ_F_NOLOCK) { + if (q->flags & TCQ_F_CAN_BYPASS && nolock_qdisc_is_empty(q) && + qdisc_run_begin(q)) { + /* Retest nolock_qdisc_is_empty() within the protection + * of q->seqlock to protect from racing with requeuing. + */ + if (unlikely(!nolock_qdisc_is_empty(q))) { + rc = q->enqueue(skb, q, &to_free) & + NET_XMIT_MASK; + __qdisc_run(q); + qdisc_run_end(q); + + goto no_lock_out; + } + + qdisc_bstats_cpu_update(q, skb); + if (sch_direct_xmit(skb, q, dev, txq, NULL, true) && + !nolock_qdisc_is_empty(q)) + __qdisc_run(q); + + qdisc_run_end(q); + return NET_XMIT_SUCCESS; + } + rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK; - if (likely(!netif_xmit_frozen_or_stopped(txq))) - qdisc_run(q); + qdisc_run(q);
+ no_lock_out: if (unlikely(to_free)) kfree_skb_list(to_free); return rc; @@@ -4363,7 -4390,7 +4390,7 @@@ static inline void ____napi_schedule(st * makes sure to proceed with napi polling * if the thread is explicitly woken from here. */ - if (READ_ONCE(thread->state) != TASK_INTERRUPTIBLE) + if (READ_ONCE(thread->__state) != TASK_INTERRUPTIBLE) set_bit(NAPI_STATE_SCHED_THREADED, &napi->state); wake_up_process(thread); return; @@@ -5277,9 -5304,9 +5304,9 @@@ another_round if (static_branch_unlikely(&generic_xdp_needed_key)) { int ret2;
- preempt_disable(); + migrate_disable(); ret2 = do_xdp_generic(rcu_dereference(skb->dev->xdp_prog), skb); - preempt_enable(); + migrate_enable();
if (ret2 != XDP_PASS) { ret = NET_RX_DROP; @@@ -6520,11 -6547,18 +6547,18 @@@ EXPORT_SYMBOL(napi_schedule_prep) * __napi_schedule_irqoff - schedule for receive * @n: entry to schedule * - * Variant of __napi_schedule() assuming hard irqs are masked + * Variant of __napi_schedule() assuming hard irqs are masked. + * + * On PREEMPT_RT enabled kernels this maps to __napi_schedule() + * because the interrupt disabled assumption might not be true + * due to force-threaded interrupts and spinlock substitution. */ void __napi_schedule_irqoff(struct napi_struct *n) { - ____napi_schedule(this_cpu_ptr(&softnet_data), n); + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) + ____napi_schedule(this_cpu_ptr(&softnet_data), n); + else + __napi_schedule(n); } EXPORT_SYMBOL(__napi_schedule_irqoff);
diff --combined net/core/filter.c index d81352ca1b5c,d22895caa164..d70187ce851b --- a/net/core/filter.c +++ b/net/core/filter.c @@@ -17,7 -17,6 +17,7 @@@ * Kris Katterjohn - Added many additional checks in bpf_check_classic() */
+#include <linux/atomic.h> #include <linux/module.h> #include <linux/types.h> #include <linux/mm.h> @@@ -42,6 -41,7 +42,6 @@@ #include <linux/timer.h> #include <linux/uaccess.h> #include <asm/unaligned.h> -#include <asm/cmpxchg.h> #include <linux/filter.h> #include <linux/ratelimit.h> #include <linux/seccomp.h> @@@ -3241,9 -3241,6 +3241,6 @@@ static int bpf_skb_proto_4_to_6(struct u32 off = skb_mac_header_len(skb); int ret;
- if (skb_is_gso(skb) && !skb_is_gso_tcp(skb)) - return -ENOTSUPP; - ret = skb_cow(skb, len_diff); if (unlikely(ret < 0)) return ret; @@@ -3255,19 -3252,11 +3252,11 @@@ if (skb_is_gso(skb)) { struct skb_shared_info *shinfo = skb_shinfo(skb);
- /* SKB_GSO_TCPV4 needs to be changed into - * SKB_GSO_TCPV6. - */ + /* SKB_GSO_TCPV4 needs to be changed into SKB_GSO_TCPV6. */ if (shinfo->gso_type & SKB_GSO_TCPV4) { shinfo->gso_type &= ~SKB_GSO_TCPV4; shinfo->gso_type |= SKB_GSO_TCPV6; } - - /* Due to IPv6 header, MSS needs to be downgraded. */ - skb_decrease_gso_size(shinfo, len_diff); - /* Header must be checked, and gso_segs recomputed. */ - shinfo->gso_type |= SKB_GSO_DODGY; - shinfo->gso_segs = 0; }
skb->protocol = htons(ETH_P_IPV6); @@@ -3282,9 -3271,6 +3271,6 @@@ static int bpf_skb_proto_6_to_4(struct u32 off = skb_mac_header_len(skb); int ret;
- if (skb_is_gso(skb) && !skb_is_gso_tcp(skb)) - return -ENOTSUPP; - ret = skb_unclone(skb, GFP_ATOMIC); if (unlikely(ret < 0)) return ret; @@@ -3296,19 -3282,11 +3282,11 @@@ if (skb_is_gso(skb)) { struct skb_shared_info *shinfo = skb_shinfo(skb);
- /* SKB_GSO_TCPV6 needs to be changed into - * SKB_GSO_TCPV4. - */ + /* SKB_GSO_TCPV6 needs to be changed into SKB_GSO_TCPV4. */ if (shinfo->gso_type & SKB_GSO_TCPV6) { shinfo->gso_type &= ~SKB_GSO_TCPV6; shinfo->gso_type |= SKB_GSO_TCPV4; } - - /* Due to IPv4 header, MSS can be upgraded. */ - skb_increase_gso_size(shinfo, len_diff); - /* Header must be checked, and gso_segs recomputed. */ - shinfo->gso_type |= SKB_GSO_DODGY; - shinfo->gso_segs = 0; }
skb->protocol = htons(ETH_P_IP); @@@ -3919,6 -3897,34 +3897,34 @@@ static const struct bpf_func_proto bpf_ .arg2_type = ARG_ANYTHING, };
+ /* XDP_REDIRECT works by a three-step process, implemented in the functions + * below: + * + * 1. The bpf_redirect() and bpf_redirect_map() helpers will look up the target + * of the redirect and store it (along with some other metadata) in a per-CPU + * struct bpf_redirect_info. + * + * 2. When the program returns the XDP_REDIRECT return code, the driver will + * call xdp_do_redirect() which will use the information in struct + * bpf_redirect_info to actually enqueue the frame into a map type-specific + * bulk queue structure. + * + * 3. Before exiting its NAPI poll loop, the driver will call xdp_do_flush(), + * which will flush all the different bulk queues, thus completing the + * redirect. + * + * Pointers to the map entries will be kept around for this whole sequence of + * steps, protected by RCU. However, there is no top-level rcu_read_lock() in + * the core code; instead, the RCU protection relies on everything happening + * inside a single NAPI poll sequence, which means it's between a pair of calls + * to local_bh_disable()/local_bh_enable(). + * + * The map entries are marked as __rcu and the map code makes sure to + * dereference those pointers with rcu_dereference_check() in a way that works + * for both sections that hold an rcu_read_lock() and sections that are + * called from NAPI without a separate rcu_read_lock(). The code below does not + * use RCU annotations, but relies on those in the map code. + */ void xdp_do_flush(void) { __dev_flush(); @@@ -3927,6 -3933,23 +3933,23 @@@ } EXPORT_SYMBOL_GPL(xdp_do_flush);
+ void bpf_clear_redirect_map(struct bpf_map *map) + { + struct bpf_redirect_info *ri; + int cpu; + + for_each_possible_cpu(cpu) { + ri = per_cpu_ptr(&bpf_redirect_info, cpu); + /* Avoid polluting remote cacheline due to writes if + * not needed. Once we pass this test, we need the + * cmpxchg() to make sure it hasn't been changed in + * the meantime by remote CPU. + */ + if (unlikely(READ_ONCE(ri->map) == map)) + cmpxchg(&ri->map, map, NULL); + } + } + int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, struct bpf_prog *xdp_prog) { @@@ -3934,6 -3957,7 +3957,7 @@@ enum bpf_map_type map_type = ri->map_type; void *fwd = ri->tgt_value; u32 map_id = ri->map_id; + struct bpf_map *map; int err;
ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ @@@ -3943,7 -3967,14 +3967,14 @@@ case BPF_MAP_TYPE_DEVMAP: fallthrough; case BPF_MAP_TYPE_DEVMAP_HASH: - err = dev_map_enqueue(fwd, xdp, dev); + map = READ_ONCE(ri->map); + if (unlikely(map)) { + WRITE_ONCE(ri->map, NULL); + err = dev_map_enqueue_multi(xdp, dev, map, + ri->flags & BPF_F_EXCLUDE_INGRESS); + } else { + err = dev_map_enqueue(fwd, xdp, dev); + } break; case BPF_MAP_TYPE_CPUMAP: err = cpu_map_enqueue(fwd, xdp, dev); @@@ -3985,13 -4016,21 +4016,21 @@@ static int xdp_do_generic_redirect_map( enum bpf_map_type map_type, u32 map_id) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_map *map; int err;
switch (map_type) { case BPF_MAP_TYPE_DEVMAP: fallthrough; case BPF_MAP_TYPE_DEVMAP_HASH: - err = dev_map_generic_redirect(fwd, skb, xdp_prog); + map = READ_ONCE(ri->map); + if (unlikely(map)) { + WRITE_ONCE(ri->map, NULL); + err = dev_map_redirect_multi(dev, skb, xdp_prog, map, + ri->flags & BPF_F_EXCLUDE_INGRESS); + } else { + err = dev_map_generic_redirect(fwd, skb, xdp_prog); + } if (unlikely(err)) goto err; break; @@@ -10008,11 -10047,13 +10047,13 @@@ out static void bpf_init_reuseport_kern(struct sk_reuseport_kern *reuse_kern, struct sock_reuseport *reuse, struct sock *sk, struct sk_buff *skb, + struct sock *migrating_sk, u32 hash) { reuse_kern->skb = skb; reuse_kern->sk = sk; reuse_kern->selected_sk = NULL; + reuse_kern->migrating_sk = migrating_sk; reuse_kern->data_end = skb->data + skb_headlen(skb); reuse_kern->hash = hash; reuse_kern->reuseport_id = reuse->reuseport_id; @@@ -10021,12 -10062,13 +10062,13 @@@
struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk, struct bpf_prog *prog, struct sk_buff *skb, + struct sock *migrating_sk, u32 hash) { struct sk_reuseport_kern reuse_kern; enum sk_action action;
- bpf_init_reuseport_kern(&reuse_kern, reuse, sk, skb, hash); + bpf_init_reuseport_kern(&reuse_kern, reuse, sk, skb, migrating_sk, hash); action = BPF_PROG_RUN(prog, &reuse_kern);
if (action == SK_PASS) @@@ -10136,6 -10178,8 +10178,8 @@@ sk_reuseport_func_proto(enum bpf_func_i return &sk_reuseport_load_bytes_proto; case BPF_FUNC_skb_load_bytes_relative: return &sk_reuseport_load_bytes_relative_proto; + case BPF_FUNC_get_socket_cookie: + return &bpf_get_socket_ptr_cookie_proto; default: return bpf_base_func_proto(func_id); } @@@ -10165,6 -10209,14 +10209,14 @@@ sk_reuseport_is_valid_access(int off, i case offsetof(struct sk_reuseport_md, hash): return size == size_default;
+ case offsetof(struct sk_reuseport_md, sk): + info->reg_type = PTR_TO_SOCKET; + return size == sizeof(__u64); + + case offsetof(struct sk_reuseport_md, migrating_sk): + info->reg_type = PTR_TO_SOCK_COMMON_OR_NULL; + return size == sizeof(__u64); + /* Fields that allow narrowing */ case bpf_ctx_range(struct sk_reuseport_md, eth_protocol): if (size < sizeof_field(struct sk_buff, protocol)) @@@ -10237,6 -10289,14 +10289,14 @@@ static u32 sk_reuseport_convert_ctx_acc case offsetof(struct sk_reuseport_md, bind_inany): SK_REUSEPORT_LOAD_FIELD(bind_inany); break; + + case offsetof(struct sk_reuseport_md, sk): + SK_REUSEPORT_LOAD_FIELD(sk); + break; + + case offsetof(struct sk_reuseport_md, migrating_sk): + SK_REUSEPORT_LOAD_FIELD(migrating_sk); + break; }
return insn - insn_buf; diff --combined net/ipv4/ah4.c index fab0958c41be,2d2d08aa787d..6eea1e9e998d --- a/net/ipv4/ah4.c +++ b/net/ipv4/ah4.c @@@ -450,7 -450,6 +450,7 @@@ static int ah4_err(struct sk_buff *skb case ICMP_DEST_UNREACH: if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) return 0; + break; case ICMP_REDIRECT: break; default: @@@ -555,7 -554,6 +555,6 @@@ static int ah4_rcv_cb(struct sk_buff *s
static const struct xfrm_type ah_type = { - .description = "AH4", .owner = THIS_MODULE, .proto = IPPROTO_AH, .flags = XFRM_TYPE_REPLAY_PROT, diff --combined net/ipv4/esp4.c index 8e3b445a8c21,f414ad246fdf..a09e36c4a413 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@@ -673,7 -673,7 +673,7 @@@ static int esp_output(struct xfrm_stat struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb); u32 padto;
- padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached)); + padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached)); if (skb->len < padto) esp.tfclen = padto - skb->len; } @@@ -982,7 -982,6 +982,7 @@@ static int esp4_err(struct sk_buff *skb case ICMP_DEST_UNREACH: if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) return 0; + break; case ICMP_REDIRECT: break; default: @@@ -1199,7 -1198,6 +1199,6 @@@ static int esp4_rcv_cb(struct sk_buff *
static const struct xfrm_type esp_type = { - .description = "ESP4", .owner = THIS_MODULE, .proto = IPPROTO_ESP, .flags = XFRM_TYPE_REPLAY_PROT, diff --combined net/ipv4/ipcomp.c index bbb56f5e06dd,2e69e81e1f5d..366094c1ce6c --- a/net/ipv4/ipcomp.c +++ b/net/ipv4/ipcomp.c @@@ -31,7 -31,6 +31,7 @@@ static int ipcomp4_err(struct sk_buff * case ICMP_DEST_UNREACH: if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) return 0; + break; case ICMP_REDIRECT: break; default: @@@ -153,7 -152,6 +153,6 @@@ static int ipcomp4_rcv_cb(struct sk_buf }
static const struct xfrm_type ipcomp_type = { - .description = "IPCOMP4", .owner = THIS_MODULE, .proto = IPPROTO_COMP, .init_state = ipcomp4_init_state, diff --combined net/ipv4/tcp.c index 64bf179cc915,a0a96eb826c4..d5ab5f243640 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@@ -1738,8 -1738,8 +1738,8 @@@ int tcp_set_rcvlowat(struct sock *sk, i } EXPORT_SYMBOL(tcp_set_rcvlowat);
- static void tcp_update_recv_tstamps(struct sk_buff *skb, - struct scm_timestamping_internal *tss) + void tcp_update_recv_tstamps(struct sk_buff *skb, + struct scm_timestamping_internal *tss) { if (skb->tstamp) tss->ts[0] = ktime_to_timespec64(skb->tstamp); @@@ -2024,8 -2024,6 +2024,6 @@@ static int tcp_zerocopy_vm_insert_batch }
#define TCP_VALID_ZC_MSG_FLAGS (TCP_CMSG_TS) - static void tcp_recv_timestamp(struct msghdr *msg, const struct sock *sk, - struct scm_timestamping_internal *tss); static void tcp_zc_finalize_rx_tstamp(struct sock *sk, struct tcp_zerocopy_receive *zc, struct scm_timestamping_internal *tss) @@@ -2095,8 -2093,8 +2093,8 @@@ static int tcp_zerocopy_receive(struct
mmap_read_lock(current->mm);
-	vma = find_vma(current->mm, address);
-	if (!vma || vma->vm_start > address || vma->vm_ops != &tcp_vm_ops) {
+	vma = vma_lookup(current->mm, address);
+	if (!vma || vma->vm_ops != &tcp_vm_ops) {
 		mmap_read_unlock(current->mm);
 		return -EINVAL;
 	}
@@@ -2197,8 -2195,8 +2195,8 @@@ out
 #endif
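The hunk above works because `find_vma()` returns the first VMA ending above the address — which may start above it too — so callers had to re-check `vm_start`, while `vma_lookup()` returns a VMA only when the address actually falls inside it. The difference, sketched on a toy interval array rather than the kernel's real VMA tree:

```c
#include <assert.h>
#include <stddef.h>

struct vma { unsigned long vm_start, vm_end; };	/* region [start, end) */

static struct vma regions[2] = { {0x1000, 0x2000}, {0x5000, 0x6000} };

/* find_vma() analogue: first region with vm_end > addr; may lie above addr. */
static struct vma *toy_find_vma(struct vma *v, size_t n, unsigned long addr)
{
	for (size_t i = 0; i < n; i++)
		if (v[i].vm_end > addr)
			return &v[i];
	return NULL;
}

/* vma_lookup() analogue: region that actually contains addr, or NULL. */
static struct vma *toy_vma_lookup(struct vma *v, size_t n, unsigned long addr)
{
	struct vma *r = toy_find_vma(v, n, addr);

	return (r && r->vm_start <= addr) ? r : NULL;
}
```

With the stricter lookup, the `vma->vm_start > address` test in the caller simply drops out, which is exactly what the diff does.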
/* Similar to __sock_recv_timestamp, but does not require an skb */ - static void tcp_recv_timestamp(struct msghdr *msg, const struct sock *sk, - struct scm_timestamping_internal *tss) + void tcp_recv_timestamp(struct msghdr *msg, const struct sock *sk, + struct scm_timestamping_internal *tss) { int new_tstamp = sock_flag(sk, SOCK_TSTAMP_NEW); bool has_timestamping = false; @@@ -3061,7 -3059,7 +3059,7 @@@ int tcp_disconnect(struct sock *sk, in sk->sk_frag.offset = 0; }
- sk->sk_error_report(sk); + sk_error_report(sk); return 0; } EXPORT_SYMBOL(tcp_disconnect); @@@ -4450,7 -4448,7 +4448,7 @@@ int tcp_abort(struct sock *sk, int err sk->sk_err = err; /* This barrier is coupled with smp_rmb() in tcp_poll() */ smp_wmb(); - sk->sk_error_report(sk); + sk_error_report(sk); if (tcp_need_reset(sk->sk_state)) tcp_send_active_reset(sk, GFP_ATOMIC); tcp_done(sk); diff --combined net/packet/af_packet.c index d56941d51e20,77476184741d..57a1971f29e5 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@@ -1656,7 -1656,6 +1656,7 @@@ static int fanout_add(struct sock *sk, case PACKET_FANOUT_ROLLOVER: if (type_flags & PACKET_FANOUT_FLAG_ROLLOVER) return -EINVAL; + break; case PACKET_FANOUT_HASH: case PACKET_FANOUT_LB: case PACKET_FANOUT_CPU: @@@ -3207,7 -3206,7 +3207,7 @@@ static int packet_do_bind(struct sock * } else { sk->sk_err = ENETDOWN; if (!sock_flag(sk, SOCK_DEAD)) - sk->sk_error_report(sk); + sk_error_report(sk); }
out_unlock: @@@ -3935,12 -3934,9 +3935,9 @@@ packet_setsockopt(struct socket *sock, return -EFAULT;
 		lock_sock(sk);
-		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
-			ret = -EBUSY;
-		} else {
+		if (!po->rx_ring.pg_vec && !po->tx_ring.pg_vec)
 			po->tp_tx_has_off = !!val;
-			ret = 0;
-		}
+
 		release_sock(sk);
 		return 0;
 	}
@@@ -4107,7 -4103,7 +4104,7 @@@ static int packet_notifier(struct notif
 			__unregister_prot_hook(sk, false);
 			sk->sk_err = ENETDOWN;
 			if (!sock_flag(sk, SOCK_DEAD))
-				sk->sk_error_report(sk);
+				sk_error_report(sk);
 		}
 		if (msg == NETDEV_UNREGISTER) {
 			packet_cached_dev_reset(po);
diff --combined net/sctp/input.c
index 5ceaf75105ba,02e73264e81e..eb3c2a34a31c
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@@ -385,7 -385,9 +385,9 @@@ static int sctp_add_backlog(struct soc
 void sctp_icmp_frag_needed(struct sock *sk, struct sctp_association *asoc,
 			   struct sctp_transport *t, __u32 pmtu)
 {
-	if (!t || (t->pathmtu <= pmtu))
+	if (!t ||
+	    (t->pathmtu <= pmtu &&
+	     t->pl.probe_size + sctp_transport_pl_hlen(t) <= pmtu))
 		return;
if (sock_owned_by_user(sk)) { @@@ -554,6 -556,49 +556,50 @@@ void sctp_err_finish(struct sock *sk, s sctp_transport_put(t); }
+ static void sctp_v4_err_handle(struct sctp_transport *t, struct sk_buff *skb, + __u8 type, __u8 code, __u32 info) + { + struct sctp_association *asoc = t->asoc; + struct sock *sk = asoc->base.sk; + int err = 0; + + switch (type) { + case ICMP_PARAMETERPROB: + err = EPROTO; + break; + case ICMP_DEST_UNREACH: + if (code > NR_ICMP_UNREACH) + return; + if (code == ICMP_FRAG_NEEDED) { + sctp_icmp_frag_needed(sk, asoc, t, SCTP_TRUNC4(info)); + return; + } + if (code == ICMP_PROT_UNREACH) { + sctp_icmp_proto_unreachable(sk, asoc, t); + return; + } + err = icmp_err_convert[code].errno; + break; + case ICMP_TIME_EXCEEDED: + if (code == ICMP_EXC_FRAGTIME) + return; + + err = EHOSTUNREACH; + break; + case ICMP_REDIRECT: + sctp_icmp_redirect(sk, t, skb); ++ return; + default: + return; + } + if (!sock_owned_by_user(sk) && inet_sk(sk)->recverr) { + sk->sk_err = err; + sk_error_report(sk); + } else { /* Only an error on timeout */ + sk->sk_err_soft = err; + } + } + /* * This routine is called by the ICMP module when it gets some * sort of error condition. If err < 0 then the socket should @@@ -572,22 -617,19 +618,19 @@@ int sctp_v4_err(struct sk_buff *skb, __u32 info) { const struct iphdr *iph = (const struct iphdr *)skb->data; - const int ihlen = iph->ihl * 4; const int type = icmp_hdr(skb)->type; const int code = icmp_hdr(skb)->code; - struct sock *sk; - struct sctp_association *asoc = NULL; + struct net *net = dev_net(skb->dev); struct sctp_transport *transport; - struct inet_sock *inet; + struct sctp_association *asoc; __u16 saveip, savesctp; - int err; - struct net *net = dev_net(skb->dev); + struct sock *sk;
/* Fix up skb to look at the embedded net header. */ saveip = skb->network_header; savesctp = skb->transport_header; skb_reset_network_header(skb); - skb_set_transport_header(skb, ihlen); + skb_set_transport_header(skb, iph->ihl * 4); sk = sctp_err_lookup(net, AF_INET, skb, sctp_hdr(skb), &asoc, &transport); /* Put back, the original values. */ skb->network_header = saveip; @@@ -596,59 -638,41 +639,41 @@@ __ICMP_INC_STATS(net, ICMP_MIB_INERRORS); return -ENOENT; } - /* Warning: The sock lock is held. Remember to call - * sctp_err_finish! - */
- switch (type) { - case ICMP_PARAMETERPROB: - err = EPROTO; - break; - case ICMP_DEST_UNREACH: - if (code > NR_ICMP_UNREACH) - goto out_unlock; + sctp_v4_err_handle(transport, skb, type, code, info); + sctp_err_finish(sk, transport);
- /* PMTU discovery (RFC1191) */ - if (ICMP_FRAG_NEEDED == code) { - sctp_icmp_frag_needed(sk, asoc, transport, - SCTP_TRUNC4(info)); - goto out_unlock; - } else { - if (ICMP_PROT_UNREACH == code) { - sctp_icmp_proto_unreachable(sk, asoc, - transport); - goto out_unlock; - } - } - err = icmp_err_convert[code].errno; - break; - case ICMP_TIME_EXCEEDED: - /* Ignore any time exceeded errors due to fragment reassembly - * timeouts. - */ - if (ICMP_EXC_FRAGTIME == code) - goto out_unlock; + return 0; + }
- err = EHOSTUNREACH; - break; - case ICMP_REDIRECT: - sctp_icmp_redirect(sk, transport, skb); - goto out_unlock; - default: - goto out_unlock; + int sctp_udp_v4_err(struct sock *sk, struct sk_buff *skb) + { + struct net *net = dev_net(skb->dev); + struct sctp_association *asoc; + struct sctp_transport *t; + struct icmphdr *hdr; + __u32 info = 0; + + skb->transport_header += sizeof(struct udphdr); + sk = sctp_err_lookup(net, AF_INET, skb, sctp_hdr(skb), &asoc, &t); + if (!sk) { + __ICMP_INC_STATS(net, ICMP_MIB_INERRORS); + return -ENOENT; }
- inet = inet_sk(sk); - if (!sock_owned_by_user(sk) && inet->recverr) { - sk->sk_err = err; - sk->sk_error_report(sk); - } else { /* Only an error on timeout */ - sk->sk_err_soft = err; + skb->transport_header -= sizeof(struct udphdr); + hdr = (struct icmphdr *)(skb_network_header(skb) - sizeof(struct icmphdr)); + if (hdr->type == ICMP_REDIRECT) { + /* can't be handled without outer iphdr known, leave it to udp_err */ + sctp_err_finish(sk, t); + return 0; } + if (hdr->type == ICMP_DEST_UNREACH && hdr->code == ICMP_FRAG_NEEDED) + info = ntohs(hdr->un.frag.mtu); + sctp_v4_err_handle(t, skb, hdr->type, hdr->code, info);
- out_unlock: - sctp_err_finish(sk, transport); - return 0; + sctp_err_finish(sk, t); + return 1; }
/* @@@ -1131,7 -1155,8 +1156,8 @@@ static struct sctp_association *__sctp_ if (!af) continue;
- af->from_addr_param(paddr, params.addr, sh->source, 0); + if (!af->from_addr_param(paddr, params.addr, sh->source, 0)) + continue;
asoc = __sctp_lookup_association(net, laddr, paddr, transportp); if (asoc) @@@ -1167,6 -1192,9 +1193,9 @@@ static struct sctp_association *__sctp_ union sctp_addr_param *param; union sctp_addr paddr;
+ if (ntohs(ch->length) < sizeof(*asconf) + sizeof(struct sctp_paramhdr)) + return NULL; + /* Skip over the ADDIP header and find the Address parameter */ param = (union sctp_addr_param *)(asconf + 1);
@@@ -1174,7 -1202,8 +1203,8 @@@ if (unlikely(!af)) return NULL;
- af->from_addr_param(&paddr, param, peer_port, 0); + if (af->from_addr_param(&paddr, param, peer_port, 0)) + return NULL;
return __sctp_lookup_association(net, laddr, &paddr, transportp); } @@@ -1236,7 -1265,6 +1266,7 @@@ static struct sctp_association *__sctp_ net, ch, laddr, sctp_hdr(skb)->source, transportp); + break; default: break; } @@@ -1246,7 -1274,7 +1276,7 @@@
 		ch = (struct sctp_chunkhdr *)ch_end;
 		chunk_num++;
-	} while (ch_end < skb_tail_pointer(skb));
+	} while (ch_end + sizeof(*ch) < skb_tail_pointer(skb));
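The tightened loop condition above only steps to the next chunk when a full chunk header still fits before the tail, so the walker never dereferences a header that runs off the end of the packet. The same bounds pattern on a generic type-length walker (a sketch with an illustrative header layout, not SCTP's on-wire format, and host-order lengths for simplicity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct chunkhdr { uint8_t type; uint8_t flags; uint16_t length; };

static struct chunkhdr pkt[2] = { {1, 0, 4}, {2, 0, 4} };

/* Count chunks in buf, advancing only while a whole header fits before
 * the tail and the declared length stays inside the buffer. */
static int count_chunks(const uint8_t *buf, size_t len)
{
	const uint8_t *tail = buf + len;
	const uint8_t *p = buf;
	int n = 0;

	while (p + sizeof(struct chunkhdr) <= tail) {
		const struct chunkhdr *ch = (const struct chunkhdr *)p;
		size_t chlen = ch->length;

		if (chlen < sizeof(*ch) || p + chlen > tail)
			break;	/* truncated or bogus length */
		n++;
		p += chlen;
	}
	return n;
}
```

Without the header-fits check, a packet whose last chunk is cut short mid-header would be read past the tail — the bug class the hunk closes.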
return asoc; } diff --combined net/tipc/link.c index 1b7a487c8841,5b6181277cc5..cf586840caeb --- a/net/tipc/link.c +++ b/net/tipc/link.c @@@ -654,7 -654,6 +654,7 @@@ int tipc_link_fsm_evt(struct tipc_link break; case LINK_FAILOVER_BEGIN_EVT: l->state = LINK_FAILINGOVER; + break; case LINK_FAILURE_EVT: case LINK_RESET_EVT: case LINK_ESTABLISH_EVT: @@@ -913,7 -912,7 +913,7 @@@ static int link_schedule_user(struct ti skb = tipc_msg_create(SOCK_WAKEUP, 0, INT_H_SIZE, 0, dnode, l->addr, dport, 0, 0); if (!skb) - return -ENOBUFS; + return -ENOMEM; msg_set_dest_droppable(buf_msg(skb), true); TIPC_SKB_CB(skb)->chain_imp = msg_importance(hdr); skb_queue_tail(&l->wakeupq, skb); @@@ -1031,7 -1030,7 +1031,7 @@@ void tipc_link_reset(struct tipc_link * * * Consumes the buffer chain. * Messages at TIPC_SYSTEM_IMPORTANCE are always accepted - * Return: 0 if success, or errno: -ELINKCONG, -EMSGSIZE or -ENOBUFS + * Return: 0 if success, or errno: -ELINKCONG, -EMSGSIZE or -ENOBUFS or -ENOMEM */ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list, struct sk_buff_head *xmitq) @@@ -1089,7 -1088,7 +1089,7 @@@ if (!_skb) { kfree_skb(skb); __skb_queue_purge(list); - return -ENOBUFS; + return -ENOMEM; } __skb_queue_tail(transmq, skb); tipc_link_set_skb_retransmit_time(skb, l); diff --combined net/xfrm/xfrm_policy.c index e70cf1d2c0e0,837df4b5c1bc..827d84255021 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@@ -1902,7 -1902,8 +1902,7 @@@ static int xfrm_policy_match(const stru
match = xfrm_selector_match(sel, fl, family); if (match) - ret = security_xfrm_policy_lookup(pol->security, fl->flowi_secid, - dir); + ret = security_xfrm_policy_lookup(pol->security, fl->flowi_secid); return ret; }
@@@ -2091,12 -2092,15 +2091,15 @@@ static struct xfrm_policy *xfrm_policy_ if (unlikely(!daddr || !saddr)) return NULL;
-	rcu_read_lock();
 retry:
-	do {
-		sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
-		chain = policy_hash_direct(net, daddr, saddr, family, dir);
-	} while (read_seqcount_retry(&xfrm_policy_hash_generation, sequence));
+	sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
+	rcu_read_lock();
+
+	chain = policy_hash_direct(net, daddr, saddr, family, dir);
+	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
+		rcu_read_unlock();
+		goto retry;
+	}

 	ret = NULL;
 	hlist_for_each_entry_rcu(pol, chain, bydst) {
@@@ -2127,11 -2131,15 +2130,15 @@@
 	}

 skip_inexact:
-	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence))
+	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
+		rcu_read_unlock();
 		goto retry;
+	}

-	if (ret && !xfrm_pol_hold_rcu(ret))
+	if (ret && !xfrm_pol_hold_rcu(ret)) {
+		rcu_read_unlock();
 		goto retry;
+	}
 fail:
 	rcu_read_unlock();
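The reworked lookup above is the classic seqcount reader pattern: sample the sequence, do the work, and restart from scratch if a writer ran concurrently (here additionally dropping and re-taking the RCU read lock on each retry). The pattern in miniature, as a single-threaded sketch rather than the kernel's `seqcount_t`:

```c
#include <assert.h>

static unsigned int seq;	/* even = stable, odd = write in progress */
static int shared_value;

static unsigned int read_begin(void) { return seq; }

/* Retry if a write was in flight (odd) or completed since we began. */
static int read_retry(unsigned int s) { return (s & 1) || s != seq; }

static void writer_update(int v)
{
	seq++;			/* odd: concurrent readers will retry */
	shared_value = v;
	seq++;			/* even again: value is stable */
}

/* Reader: loop until a consistent snapshot is observed. */
static int read_value(void)
{
	unsigned int s;
	int v;

	do {
		s = read_begin();
		v = shared_value;
	} while (read_retry(s));
	return v;
}
```

The bug class the diff addresses is precisely the cleanup on the retry path: every `goto retry` must undo whatever the failed pass acquired before sampling the sequence again.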
@@@ -2180,7 -2188,8 +2187,7 @@@ static struct xfrm_policy *xfrm_sk_poli goto out; } err = security_xfrm_policy_lookup(pol->security, - fl->flowi_secid, - dir); + fl->flowi_secid); if (!err) { if (!xfrm_pol_hold_rcu(pol)) goto again; @@@ -3245,7 -3254,7 +3252,7 @@@ xfrm_state_ok(const struct xfrm_tmpl *t
/* * 0 or more than 0 is returned when validation is succeeded (either bypass - * because of optional transport mode, or next index of the mathced secpath + * because of optional transport mode, or next index of the matched secpath * state with the template. * -1 is returned when no matching template is found. * Otherwise "-2 - errored_index" is returned.