Searched hist:4 (Results 1 - 25 of 43075) sorted by relevance


/linux-master/arch/powerpc/platforms/4xx/
pci.h    bfa9a2eb Tue Aug 08 00:39:20 MDT 2017 Michael Ellerman <mpe@ellerman.id.au> powerpc/4xx: Create 4xx pseudo-platform in platforms/4xx

We have a lot of code in sysdev for supporting 4xx, ie. either 40x or
44x. Instead it would be cleaner if it was all in platforms/4xx.

This is slightly odd in that we don't actually define any machines in
the 4xx platform, as is usual for a platform directory. But still it
seems like a better result to have all this related code in a directory
by itself.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
/linux-master/Documentation/devicetree/bindings/mtd/
ibm,ndfc.txt    e21f9e2e Wed Apr 25 20:07:13 MDT 2018 Rob Herring <robh@kernel.org> dt-bindings: powerpc/4xx: move 4xx NDFC and EMAC bindings to subsystem directories

Bindings are supposed to be organized by device class/function. Move a
couple of powerpc 4xx bindings to the correct binding directory.

Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Rob Herring <robh@kernel.org>
/linux-master/arch/arm64/boot/dts/rockchip/
rk3399-rock-pi-4a.dts    fa16d7a8 Sat Dec 16 16:32:08 MST 2023 Stefan Nagy <stefan.nagy@ixypsilon.net> arm64: dts: rockchip: Increase maximum frequency of SPI flash for ROCK Pi 4A/B/C

The ROCK Pi 4A/B/C boards come with a 32 Mbit SPI NOR flash chip (XTX
Technology Limited XT25F32) with a maximum clock frequency of 108 MHz.
Use this value for the device node's spi-max-frequency property.

This patch has been tested on ROCK Pi 4A.

Signed-off-by: Stefan Nagy <stefan.nagy@ixypsilon.net>
Reviewed-by: Dragan Simic <dsimic@manjaro.org>
Link: https://lore.kernel.org/r/20231217113208.64056-1-stefan.nagy@ixypsilon.net
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
eddf7302 Fri Aug 11 14:11:18 MDT 2023 Stefan Nagy <stefan.nagy@ixypsilon.net> arm64: dts: rockchip: Enable internal SPI flash for ROCK Pi 4A/B/C

The ROCK Pi 4A, ROCK Pi 4B and ROCK Pi 4C boards contain a nor-flash chip
connected to spi1. Enable spi1 and add the device node.

This patch has been tested on ROCK Pi 4A.

Signed-off-by: Stefan Nagy <stefan.nagy@ixypsilon.net>
Link: https://lore.kernel.org/r/20230811201118.15066-1-stefan.nagy@ixypsilon.net
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
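
Taken together, the two commits above just describe a small device-tree change. Purely as an illustrative sketch (assuming the usual jedec,spi-nor binding; the node name, unit address and layout are not copied from the patches), the resulting node looks roughly like this:

    /* Illustrative only: enable spi1 and describe the on-board SPI NOR
     * flash, capping the bus clock at the chip's 108 MHz maximum. */
    &spi1 {
        status = "okay";

        flash@0 {
            compatible = "jedec,spi-nor";
            reg = <0>;
            spi-max-frequency = <108000000>;
        };
    };
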
fd2762a6 Sun Jul 09 17:50:23 MDT 2023 Christopher Obbard <chris.obbard@collabora.com> arm64: dts: rockchip: Move OPP table from ROCK Pi 4 dtsi

The ROCK 4SE uses the RK3399-T variant of the RK3399 SoC, which has some
changes to the OPP tables. Prepare for the bringup of this SoC by moving
the inclusion of existing OPP tables from the common devicetree into
each board-specific devicetree.

Signed-off-by: Christopher Obbard <chris.obbard@collabora.com>
Link: https://lore.kernel.org/r/20230710115025.507439-2-chris.obbard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
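
In practice a move like this amounts to each board dts pulling in its own OPP include instead of inheriting it from the shared dtsi. A minimal sketch, assuming the in-tree rk3399-opp.dtsi include and using the ROCK Pi 4B board file purely as an example:

    /* rk3399-rock-pi-4b.dts (illustrative): include the shared board dtsi,
     * then select the OPP table that matches the SoC variant on this board. */
    #include "rk3399-rock-pi-4.dtsi"
    #include "rk3399-opp.dtsi"
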
rk3399-rock-4se.dts    86a0e14a Sun Jul 09 17:50:25 MDT 2023 Christopher Obbard <chris.obbard@collabora.com> arm64: dts: rockchip: Add Radxa ROCK 4SE

Add board-specific devicetree file for the RK3399T-based Radxa ROCK 4SE
board. This board offers similar peripherals in a similar form-factor to
the existing ROCK Pi 4B but uses the cost-optimised RK3399T processor
(which has different OPP table than the RK3399) and other minimal hardware
changes.

Signed-off-by: Christopher Obbard <chris.obbard@collabora.com>
Link: https://lore.kernel.org/r/20230710115025.507439-4-chris.obbard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
/linux-master/scripts/dtc/include-prefixes/arm64/rockchip/
rk3399-rock-pi-4a.dts    fa16d7a8, eddf7302, fd2762a6 (same commits as the arch/arm64/boot/dts/rockchip entry above)
rk3399-rock-4se.dts    86a0e14a (same commit as the arch/arm64/boot/dts/rockchip entry above)
/linux-master/tools/testing/selftests/powerpc/mm/
large_vm_gpr_corruption.c    501fe299 Wed Aug 31 08:02:15 MDT 2022 Michael Ellerman <mpe@ellerman.id.au> selftests/powerpc: Skip 4PB test on 4K PAGE_SIZE systems

Systems using the hash MMU with a 4K page size don't support 4PB address
space, so skip the test because the bug it tests for can't be triggered.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220901020215.254097-1-mpe@ellerman.id.au
e96a76ee Thu Mar 17 08:39:25 MDT 2022 Michael Ellerman <mpe@ellerman.id.au> selftests/powerpc: Add a test of 4PB SLB handling

Add a test for a bug we had in the 4PB address space SLB handling. It
was fixed in commit 4c2de74cc869 ("powerpc/64: Interrupts save PPR on
stack rather than thread_struct").

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220317143925.1030447-1-mpe@ellerman.id.au
/linux-master/drivers/gpu/drm/i915/gt/
intel_renderstate.h    42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
diff 42d10511 Mon Dec 02 13:43:14 MST 2019 Chris Wilson <chris@chris-wilson.co.uk> drm/i915: Lift i915_vma_pin() out of intel_renderstate_emit()

Once inside a request, inside the timeline->mutex, pinning is verboten.

<4> [896.032829] ======================================================
<4> [896.032831] WARNING: possible circular locking dependency detected
<4> [896.032835] 5.4.0-rc8-CI-Patchwork_15533+ #1 Tainted: G U
<4> [896.032838] ------------------------------------------------------
<4> [896.032841] gem_exec_parall/3720 is trying to acquire lock:
<4> [896.032844] ffff888401863270 (&kernel#2){+.+.}, at: i915_request_create+0x16/0x1c0 [i915]
<4> [896.032915]
but task is already holding lock:
<4> [896.032917] ffff8883ec1c93c0 (&vm->mutex){+.+.}, at: i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.032952]
which lock already depends on the new lock.

<4> [896.032954]
the existing dependency chain (in reverse order) is:
<4> [896.032956]
-> #1 (&vm->mutex){+.+.}:
<4> [896.032961] __mutex_lock+0x9a/0x9d0
<4> [896.032995] i915_vma_pin+0xf3/0x11c0 [i915]
<4> [896.033033] intel_renderstate_emit+0xb9/0x9e0 [i915]
<4> [896.033081] i915_gem_init+0x5a9/0xa50 [i915]
<4> [896.033112] i915_driver_probe+0xb00/0x15f0 [i915]
<4> [896.033144] i915_pci_probe+0x43/0x1c0 [i915]
<4> [896.033149] pci_device_probe+0x9e/0x120
<4> [896.033154] really_probe+0xea/0x420
<4> [896.033158] driver_probe_device+0x10b/0x120
<4> [896.033161] device_driver_attach+0x4a/0x50
<4> [896.033164] __driver_attach+0x97/0x130
<4> [896.033168] bus_for_each_dev+0x74/0xc0
<4> [896.033171] bus_add_driver+0x142/0x220
<4> [896.033174] driver_register+0x56/0xf0
<4> [896.033178] do_one_initcall+0x58/0x2ff
<4> [896.033183] do_init_module+0x56/0x1f8
<4> [896.033187] load_module+0x243e/0x29f0
<4> [896.033190] __do_sys_finit_module+0xe9/0x110
<4> [896.033194] do_syscall_64+0x4f/0x210
<4> [896.033197] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [896.033200]
-> #0 (&kernel#2){+.+.}:
<4> [896.033206] __lock_acquire+0x1328/0x15d0
<4> [896.033209] lock_acquire+0xa7/0x1c0
<4> [896.033213] __mutex_lock+0x9a/0x9d0
<4> [896.033255] i915_request_create+0x16/0x1c0 [i915]
<4> [896.033287] intel_engine_flush_barriers+0x4c/0x100 [i915]
<4> [896.033327] ggtt_flush+0x37/0x60 [i915]
<4> [896.033366] i915_gem_evict_something+0x46b/0x5a0 [i915]
<4> [896.033407] i915_gem_gtt_insert+0x21d/0x6a0 [i915]
<4> [896.033449] i915_vma_pin+0xb36/0x11c0 [i915]
<4> [896.033488] gen6_ppgtt_pin+0xd5/0x170 [i915]
<4> [896.033523] ring_context_pin+0x2e/0xc0 [i915]
<4> [896.033554] __intel_context_do_pin+0x6b/0x190 [i915]
<4> [896.033591] i915_gem_do_execbuffer+0x1814/0x26c0 [i915]
<4> [896.033627] i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [896.033632] drm_ioctl_kernel+0xa7/0xf0
<4> [896.033635] drm_ioctl+0x2e1/0x390
<4> [896.033638] do_vfs_ioctl+0xa0/0x6f0
<4> [896.033641] ksys_ioctl+0x35/0x60
<4> [896.033644] __x64_sys_ioctl+0x11/0x20
<4> [896.033647] do_syscall_64+0x4f/0x210
<4> [896.033650] entry_SYSCALL_64_after_hwframe+0x49/0xbe

Lift the object allocation and pin prior to the request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202204316.2665847-1-chris@chris-wilson.co.uk
H A Dintel_breadcrumbs_types.hdiff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines: although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests. It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
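
To make the shape of the split concrete, here is a rough sketch of the arrangement the message describes, using stand-in types rather than the real intel_breadcrumbs/intel_context definitions: the global irq_lock is retained only for enabling and disabling the interrupt, each context carries its own signal_lock protecting its list of signalers, and because the signal worker reaches a context with only RCU protection (the v2 note), it must re-check the context after taking its lock before touching the list.

/* Illustrative types only -- not the driver's actual structures. */
#include <linux/spinlock.h>
#include <linux/rcupdate.h>
#include <linux/list.h>

struct toy_context {
	spinlock_t signal_lock;          /* per-context, guards @signals      */
	struct list_head signals;        /* requests waiting for a breadcrumb */
	struct list_head signal_link;    /* entry in breadcrumbs->signalers   */
};

struct toy_breadcrumbs {
	spinlock_t irq_lock;             /* now only for irq enable/disable */
	struct list_head signalers;      /* RCU-managed list of contexts    */
};

static void toy_signal_worker(struct toy_breadcrumbs *b)
{
	struct toy_context *ce;

	rcu_read_lock();
	list_for_each_entry_rcu(ce, &b->signalers, signal_link) {
		spin_lock(&ce->signal_lock);
		/*
		 * ce was found with only RCU protection; it may have been
		 * removed (or its memory reused) since the lookup, so
		 * confirm it is still queued before walking ce->signals.
		 */
		if (!list_empty(&ce->signals)) {
			/* ... signal the completed requests ... */
		}
		spin_unlock(&ce->signal_lock);
	}
	rcu_read_unlock();
}

In this arrangement b->irq_lock would only be taken when arming or disarming the breadcrumb interrupt, never for the walk itself, which is what removes it from the submission hot path.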
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines: although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests. It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.
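
To make the intended locking scheme concrete, here is a minimal sketch
of the signal-worker side, assuming the per-context fields described
above (ce->signal_lock, ce->signals) plus an assumed breadcrumbs
signaler list (b->signalers, ce->signal_link); it is illustrative only,
not the actual i915 implementation:

    /*
     * Illustrative sketch only -- not the real i915 code.  Field names
     * other than ce->signal_lock and ce->signals are assumptions.
     */
    static void signal_irq_work_sketch(struct intel_breadcrumbs *b)
    {
    	struct intel_context *ce;

    	rcu_read_lock();
    	list_for_each_entry_rcu(ce, &b->signalers, signal_link) {
    		/* per-context lock replaces b->irq_lock for ce->signals */
    		spin_lock(&ce->signal_lock);

    		/*
    		 * The context was found under RCU only; it may have been
    		 * removed (and its memory reused) before we took the lock,
    		 * so confirm it is still the ce on the breadcrumb list
    		 * before walking ce->signals.
    		 */
    		if (!list_empty(&ce->signal_link)) {
    			/* walk ce->signals, signalling completed requests */
    		}

    		spin_unlock(&ce->signal_lock);
    	}
    	rcu_read_unlock();
    }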

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.
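
To make the split concrete, a minimal userspace sketch (placeholder names such
as struct breadcrumbs, irq_lock and signal_lock; not the driver's actual
layout): the engine-wide lock is reduced to guarding only interrupt arming,
while each context carries its own lock for its signalers, so tracking a
signaler never contends on the engine-wide lock.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct breadcrumbs {			/* per-engine state */
	pthread_mutex_t irq_lock;	/* now guards only irq arming */
	bool irq_enabled;
};

struct context {			/* per-context state */
	pthread_mutex_t signal_lock;	/* guards this context's signalers */
	int nr_signalers;
};

/* Tracking a new signaler touches only the context's own lock... */
static void add_signaler(struct context *ce)
{
	pthread_mutex_lock(&ce->signal_lock);
	ce->nr_signalers++;
	pthread_mutex_unlock(&ce->signal_lock);
}

/* ...and the engine-wide lock is taken only to flip the interrupt. */
static void arm_irq(struct breadcrumbs *b)
{
	pthread_mutex_lock(&b->irq_lock);
	b->irq_enabled = true;
	pthread_mutex_unlock(&b->irq_lock);
}

int main(void)
{
	struct breadcrumbs b = { PTHREAD_MUTEX_INITIALIZER, false };
	struct context ce = { PTHREAD_MUTEX_INITIALIZER, 0 };

	add_signaler(&ce);
	arm_irq(&b);
	printf("signalers=%d irq_enabled=%d\n", ce.nr_signalers, b.irq_enabled);
	return 0;
}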

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.
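
A rough sketch of the shape this split could take, reusing the example_ctx and example_lock_signaler from the sketch further up and again using invented names (example_breadcrumbs, example_arm_irq, example_signal_worker) rather than the driver's actual code: the engine-wide irq_lock is held only to arm or disarm the user interrupt, while the signal worker walks the contexts under RCU and takes nothing more than each context's own signal_lock.

/* Illustrative only: hypothetical names, not the i915 implementation. */
#include <linux/rculist.h>		/* list_for_each_entry_rcu() */

struct example_breadcrumbs {
	spinlock_t irq_lock;		/* now only guards the irq arming state */
	bool irq_armed;
	struct list_head signalers;	/* of example_ctx.signal_link, RCU-walked */
};

static void example_arm_irq(struct example_breadcrumbs *b)
{
	spin_lock(&b->irq_lock);	/* global lock held only for this */
	if (!b->irq_armed) {
		b->irq_armed = true;
		/* enable the user interrupt here */
	}
	spin_unlock(&b->irq_lock);
}

static void example_signal_worker(struct example_breadcrumbs *b)
{
	struct example_ctx *ce;

	rcu_read_lock();		/* walk the contexts without b->irq_lock */
	list_for_each_entry_rcu(ce, &b->signalers, signal_link) {
		if (!example_lock_signaler(ce))
			continue;	/* raced with removal; skip it */
		/* ... signal the completed requests on ce->signalers ... */
		spin_unlock(&ce->signal_lock);
	}
	rcu_read_unlock();
}
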

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines where although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests, It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as we found in the breadcrumb
list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
diff c744d503 Thu Nov 26 07:04:06 MST 2020 Chris Wilson <chris@chris-wilson.co.uk> drm/i915/gt: Split the breadcrumb spinlock between global and contexts

As we funnel more and more contexts into the breadcrumbs on an engine,
the hold time of b->irq_lock grows. As we may then contend with the
b->irq_lock during request submission, this increases the burden upon
the engine->active.lock and so directly impacts both our execution
latency and client latency. If we split the b->irq_lock by introducing a
per-context spinlock to manage the signalers within a context, we then
only need the b->irq_lock for enabling/disabling the interrupt and can
avoid taking the lock for walking the list of contexts within the signal
worker. Even with the current setup, this greatly reduces the number of
times we have to take and fight for b->irq_lock.
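
A minimal C sketch of that split may help picture it. The names below (struct breadcrumbs, struct context, add_signaler, irq_armed, b->signalers) are simplified stand-ins rather than the actual i915 definitions, and the locking is reduced to its bare shape: the per-context spinlock is enough to queue a signaler, and the global irq_lock is only taken when the context must be published and the interrupt armed.

	#include <linux/list.h>
	#include <linux/rculist.h>
	#include <linux/spinlock.h>

	struct breadcrumbs {
		spinlock_t irq_lock;		/* now only guards irq arming */
		bool irq_armed;
		struct list_head signalers;	/* contexts with pending signals, walked under RCU */
	};

	struct context {
		spinlock_t signal_lock;		/* guards ce->signals */
		struct list_head signals;	/* requests waiting for their breadcrumb */
		struct list_head signal_link;	/* entry in b->signalers */
	};

	static void add_signaler(struct breadcrumbs *b, struct context *ce,
				 struct list_head *rq_link)
	{
		bool first;

		/* The per-context lock is sufficient to manipulate ce->signals. */
		spin_lock(&ce->signal_lock);
		first = list_empty(&ce->signals);
		list_add_tail(rq_link, &ce->signals);
		spin_unlock(&ce->signal_lock);

		/*
		 * The global lock is only needed to publish the context on the
		 * signalers list and to arm the interrupt; ordering here is
		 * illustrative, not the exact i915 sequence.
		 */
		if (first) {
			spin_lock(&b->irq_lock);
			list_add_rcu(&ce->signal_link, &b->signalers);
			if (!b->irq_armed)
				b->irq_armed = true;	/* enable the user interrupt here */
			spin_unlock(&b->irq_lock);
		}
	}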

Furthermore, this closes the race between enabling the signaling context
while it is in the process of being signaled and removed:

<4>[ 416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
<4>[ 416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
<4>[ 416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G U 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
<4>[ 416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
<4>[ 416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
<4>[ 416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
<4>[ 416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
<4>[ 416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
<4>[ 416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
<4>[ 416.208669] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.208671] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.208675] PKRU: 55555554
<4>[ 416.208677] Call Trace:
<4>[ 416.208679] <IRQ>
<4>[ 416.208751] i915_request_enable_breadcrumb+0x278/0x400 [i915]
<4>[ 416.208839] __i915_request_submit+0xca/0x2a0 [i915]
<4>[ 416.208892] __execlists_submission_tasklet+0x480/0x1830 [i915]
<4>[ 416.208942] execlists_submission_tasklet+0xc4/0x130 [i915]
<4>[ 416.208947] tasklet_action_common.isra.17+0x6c/0x1c0
<4>[ 416.208954] __do_softirq+0xdf/0x498
<4>[ 416.208960] ? handle_fasteoi_irq+0x150/0x150
<4>[ 416.208964] asm_call_on_stack+0xf/0x20
<4>[ 416.208966] </IRQ>
<4>[ 416.208969] do_softirq_own_stack+0xa1/0xc0
<4>[ 416.208972] irq_exit_rcu+0xb5/0xc0
<4>[ 416.208976] common_interrupt+0xf7/0x260
<4>[ 416.208980] asm_common_interrupt+0x1e/0x40
<4>[ 416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
<4>[ 416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
<4>[ 416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
<4>[ 416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
<4>[ 416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
<4>[ 416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
<4>[ 416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
<4>[ 416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
<4>[ 416.209012] cpuidle_enter+0x24/0x40
<4>[ 416.209016] do_idle+0x22f/0x2d0
<4>[ 416.209022] cpu_startup_entry+0x14/0x20
<4>[ 416.209025] start_secondary+0x158/0x1a0
<4>[ 416.209030] secondary_startup_64+0xa4/0xb0
<4>[ 416.209039] irq event stamp: 10186977
<4>[ 416.209042] hardirqs last enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
<4>[ 416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[ 416.209047] softirqs last enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
<4>[ 416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20

<4>[ 416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
<4>[ 416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
<4>[ 416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G U W 5.8.0-CI-CI_DRM_8852+ #1
<4>[ 416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
<4>[ 416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
<4>[ 416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
<4>[ 416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
<4>[ 416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
<4>[ 416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
<4>[ 416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
<4>[ 416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
<4>[ 416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
<4>[ 416.209317] FS: 0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
<4>[ 416.209317] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[ 416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
<4>[ 416.209317] PKRU: 55555554
<4>[ 416.209317] Call Trace:
<4>[ 416.209317] <IRQ>
<4>[ 416.209317] remove_signaling_context.isra.13+0xd/0x70 [i915]
<4>[ 416.209513] signal_irq_work+0x1f7/0x4b0 [i915]

This is caused by virtual engines: although we take the breadcrumb
lock on each of the active engines, they may be different engines on
different requests. It turns out that the b->irq_lock was not a
sufficient proxy for the engine->active.lock in the case of more than
one request, so introduce an explicit lock around ce->signals.

v2: ce->signal_lock is acquired with only RCU protection and so must be
treated carefully and not cleared during reallocation. We also then need
to confirm that the ce we lock is the same as the one we found in the
breadcrumb list.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
Fixes: c18636f76344 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
Fixes: 2854d866327a ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
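
To make the locking split concrete, here is a minimal sketch under assumed,
simplified types; the struct and function names below are illustrative, not
the actual i915 ones. Only the lock roles follow the commit: b->irq_lock is
confined to interrupt enable/disable, while a per-context signal_lock guards
the ce->signals list.

/*
 * Illustrative sketch only; not the i915 implementation.
 */
#include <linux/spinlock.h>
#include <linux/list.h>

struct breadcrumbs_sketch {
        spinlock_t irq_lock;            /* guards irq_enabled only */
        bool irq_enabled;
        struct list_head signalers;     /* contexts with pending signals */
};

struct context_sketch {
        spinlock_t signal_lock;         /* per-context lock for ->signals */
        struct list_head signals;       /* requests waiting to be signaled */
        struct list_head signal_link;   /* entry in b->signalers */
};

/* Request submission only contends on the per-context lock. */
static void add_signal_request(struct context_sketch *ce,
                               struct list_head *rq_link)
{
        unsigned long flags;

        spin_lock_irqsave(&ce->signal_lock, flags);
        list_add_tail(rq_link, &ce->signals);
        spin_unlock_irqrestore(&ce->signal_lock, flags);
}

/* The global b->irq_lock is only taken to flip the interrupt on or off. */
static void enable_signaling_irq(struct breadcrumbs_sketch *b)
{
        unsigned long flags;

        spin_lock_irqsave(&b->irq_lock, flags);
        if (!b->irq_enabled) {
                b->irq_enabled = true;
                /* arm the user interrupt here */
        }
        spin_unlock_irqrestore(&b->irq_lock, flags);
}

The signal worker can then walk b->signalers and take each ce->signal_lock in
turn, without holding b->irq_lock across the walk.
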
/linux-master/tools/testing/selftests/bpf/progs/
H A D bpf_iter_test_kern6.c  12e6196f Tue Jul 28 16:18:01 MDT 2020 Yonghong Song <yhs@fb.com> selftests/bpf: Test bpf_iter buffer access with negative offset

Commit afbf21dce668 ("bpf: Support readonly/readwrite buffers
in verifier") added readonly/readwrite buffer support, which
is currently used by bpf_iter tracing programs. It had
a bug with incorrect parameter ordering that was later fixed
by commit f6dfbe31e8fa ("bpf: Fix swapped arguments in calls
to check_buffer_access").

This patch adds a test case with a negative-offset access
that triggers the error path.

Without commit f6dfbe31e8fa, running the test case in this patch
produces the following error message:
R1_w=rdwr_buf(id=0,off=0,imm=0) R10=fp0
; value_sum += *(__u32 *)(value - 4);
2: (61) r1 = *(u32 *)(r1 -4)
R1 invalid (null) buffer access: off=-4, size=4

With the above commit, the error message looks like:
R1_w=rdwr_buf(id=0,off=0,imm=0) R10=fp0
; value_sum += *(__u32 *)(value - 4);
2: (61) r1 = *(u32 *)(r1 -4)
R1 invalid rdwr buffer access: off=-4, size=4

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200728221801.1090406-1-yhs@fb.com
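
For reference, a minimal sketch of the kind of program this commit adds
(assumed names and the selftest-local bpf_iter.h header; the real test is
bpf_iter_test_kern6.c). The only interesting part is the read at value - 4,
which the verifier must reject as shown in the messages quoted above.

/* Illustrative sketch only, modelled on the quoted log. */
#include "bpf_iter.h"
#include <bpf/bpf_helpers.h>

__u32 value_sum = 0;

SEC("iter/bpf_map_elem")
int dump_bpf_hash_map(struct bpf_iter__bpf_map_elem *ctx)
{
        void *value = ctx->value;

        if (!value)
                return 0;

        /* off=-4 lies outside the rdwr buffer, so the verifier rejects it */
        value_sum += *(__u32 *)(value - 4);
        return 0;
}

char _license[] SEC("license") = "GPL";
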
H A D percpu_alloc_array.c  diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU (i.e. a percpu allocation), then after
bpf_kptr_xchg() the argument is marked as MEM_RCU and MEM_PERCPU when
inside an RCU critical section. This way, re-reading the value from the
map is not needed. Remove that re-read from the percpu_alloc_array.c selftest.

Without the previous kernel change, the test fails as below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

R1, which gets its value from R6, is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
and its type is changed to a scalar at insn 22 without the previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
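
For reference, a minimal sketch of the pattern the verifier log above walks
through (assumed includes, map, struct and section names; the real program is
percpu_alloc_array.c). The point is that after bpf_kptr_xchg() hands the new
percpu object to the map, p can be passed straight to bpf_this_cpu_ptr()
inside the RCU section, with no re-read of e->pc.

/* Illustrative sketch only, modelled on the quoted verifier log. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

void bpf_rcu_read_lock(void) __ksym;    /* kfunc declarations (assumed) */
void bpf_rcu_read_unlock(void) __ksym;

struct val_t {
        long a;
};

struct elem {
        struct val_t __percpu_kptr *pc;
};

struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, int);
        __type(value, struct elem);
} array SEC(".maps");

SEC("?fentry/bpf_fentry_test1")
int BPF_PROG(test_array_map_sketch)
{
        struct val_t __percpu_kptr *p, *p1;
        struct val_t *v;
        int index = 0;
        struct elem *e;

        e = bpf_map_lookup_elem(&array, &index);
        if (!e)
                return 0;

        bpf_rcu_read_lock();
        p = e->pc;
        if (!p) {
                p = bpf_percpu_obj_new(struct val_t);
                if (!p)
                        goto out;

                p1 = bpf_kptr_xchg(&e->pc, p);  /* map takes ownership */
                if (p1)
                        bpf_percpu_obj_drop(p1); /* lost a race, drop ours */
        }

        /* No re-read of e->pc: 'p' is still a percpu pointer here. */
        v = bpf_this_cpu_ptr(p);
        v->a = 1;
out:
        bpf_rcu_read_unlock();
        return 0;
}

char _license[] SEC("license") = "GPL";

Without the kernel change described above, p degrades to a scalar after the
bpf_kptr_xchg() call and the bpf_this_cpu_ptr() call is rejected, which is
exactly the failure the quoted log shows.
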
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
diff 46200d6d Sun Aug 27 09:28:21 MDT 2023 Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Remove unnecessary direct read of local percpu kptr

For the second argument of bpf_kptr_xchg(), if the reg type contains
MEM_ALLOC and MEM_PERCPU, which means a percpu allocation,
after bpf_kptr_xchg(), the argument is marked as MEM_RCU and MEM_PERCPU
if in rcu critical section. This way, re-reading from the map value
is not needed. Remove it from the percpu_alloc_array.c selftest.

Without previous kernel change, the test will fail like below:

0: R1=ctx(off=0,imm=0) R10=fp0
; int BPF_PROG(test_array_map_10, int a)
0: (b4) w1 = 0 ; R1_w=0
; int i, index = 0;
1: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000????
2: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
;
3: (07) r2 += -4 ; R2_w=fp-4
; e = bpf_map_lookup_elem(&array, &index);
4: (18) r1 = 0xffff88810e771800 ; R1_w=map_ptr(off=0,ks=4,vs=16,imm=0)
6: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
7: (bf) r6 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0) R6_w=map_value_or_null(id=1,off=0,ks=4,vs=16,imm=0)
; if (!e)
8: (15) if r6 == 0x0 goto pc+81 ; R6_w=map_value(off=0,ks=4,vs=16,imm=0)
; bpf_rcu_read_lock();
9: (85) call bpf_rcu_read_lock#87892 ;
; p = e->pc;
10: (bf) r7 = r6 ; R6=map_value(off=0,ks=4,vs=16,imm=0) R7_w=map_value(off=0,ks=4,vs=16,imm=0)
11: (07) r7 += 8 ; R7_w=map_value(off=8,ks=4,vs=16,imm=0)
12: (79) r6 = *(u64 *)(r6 +8) ; R6_w=percpu_rcu_ptr_or_null_val_t(id=2,off=0,imm=0)
; if (!p) {
13: (55) if r6 != 0x0 goto pc+13 ; R6_w=0
; p = bpf_percpu_obj_new(struct val_t);
14: (18) r1 = 0x12 ; R1_w=18
16: (b7) r2 = 0 ; R2_w=0
17: (85) call bpf_percpu_obj_new_impl#87883 ; R0_w=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
18: (bf) r6 = r0 ; R0=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_or_null_val_t(id=4,ref_obj_id=4,off=0,imm=0) refs=4
; if (!p)
19: (15) if r6 == 0x0 goto pc+69 ; R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
; p1 = bpf_kptr_xchg(&e->pc, p);
20: (bf) r1 = r7 ; R1_w=map_value(off=8,ks=4,vs=16,imm=0) R7=map_value(off=8,ks=4,vs=16,imm=0) refs=4
21: (bf) r2 = r6 ; R2_w=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0) refs=4
22: (85) call bpf_kptr_xchg#194 ; R0_w=percpu_ptr_or_null_val_t(id=6,ref_obj_id=6,off=0,imm=0) refs=6
; if (p1) {
23: (15) if r0 == 0x0 goto pc+3 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
; bpf_percpu_obj_drop(p1);
24: (bf) r1 = r0 ; R0_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) R1_w=percpu_ptr_val_t(ref_obj_id=6,off=0,imm=0) refs=6
25: (b7) r2 = 0 ; R2_w=0 refs=6
26: (85) call bpf_percpu_obj_drop_impl#87882 ;
; v = bpf_this_cpu_ptr(p);
27: (bf) r1 = r6 ; R1_w=scalar(id=7) R6=scalar(id=7)
28: (85) call bpf_this_cpu_ptr#154
R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_

The R1 which gets its value from R6 is a scalar. But before insn 22, R6 is
R6=percpu_ptr_val_t(ref_obj_id=4,off=0,imm=0)
Its type is changed to a scalar at insn 22 without previous patch.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230827152821.2001129-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
H A Dbpf_iter_bpf_percpu_array_map.c60dd49ea Thu Jul 23 12:41:21 MDT 2020 Yonghong Song <yhs@fb.com> selftests/bpf: Add test for bpf array map iterators

Two subtests are added.
$ ./test_progs -n 4
...
#4/20 bpf_array_map:OK
#4/21 bpf_percpu_array_map:OK
...

The bpf_array_map subtest also tested bpf program
changing array element values and send key/value
to user space through bpf_seq_write() interface.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200723184121.591367-1-yhs@fb.com
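
As a rough illustration of the shape of such an iterator program (a hedged sketch; the program name and the 4-byte key/value layout are assumptions, not the exact selftest source):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

SEC("iter/bpf_map_elem")
int dump_bpf_array_map(struct bpf_iter__bpf_map_elem *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	__u32 *key = ctx->key;
	__u32 *val = ctx->value;

	/* NULL key/value marks the final end-of-iteration call */
	if (!key || !val)
		return 0;

	*val += 1;				/* the program may update the element */
	bpf_seq_write(seq, key, sizeof(*key));	/* stream key/value to user space */
	bpf_seq_write(seq, val, sizeof(*val));
	return 0;
}

User space attaches this program to a specific map through a bpf_iter link (the link carries the target map fd) and then reads the bytes emitted by bpf_seq_write() from the iterator fd.
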
/linux-master/drivers/gpu/drm/amd/include/asic_reg/dcn/
H A Ddcn_3_1_4_offset.h8955ff11 Tue Jun 28 13:06:23 MDT 2022 Roman Li <roman.li@amd.com> drm/amdgpu: Add reg headers for DCN314

Register headers for the following IPs:
- DCN 3.1.4
- DPCS 3.1.4

v2:(squash) clean up (Alex)

Signed-off-by: Roman Li <roman.li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
H A Ddcn_3_1_4_sh_mask.h8955ff11 Tue Jun 28 13:06:23 MDT 2022 Roman Li <roman.li@amd.com> drm/amdgpu: Add reg headers for DCN314

Register headers for the following IPs:
- DCN 3.1.4
- DPCS 3.1.4

v2:(squash) clean up (Alex)

Signed-off-by: Roman Li <roman.li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
/linux-master/drivers/gpu/drm/amd/include/asic_reg/dpcs/
H A Ddpcs_3_1_4_offset.h8955ff11 Tue Jun 28 13:06:23 MDT 2022 Roman Li <roman.li@amd.com> drm/amdgpu: Add reg headers for DCN314

Register headers for the following IPs:
- DCN 3.1.4
- DPCS 3.1.4

v2:(squash) clean up (Alex)

Signed-off-by: Roman Li <roman.li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
H A Ddpcs_3_1_4_sh_mask.h8955ff11 Tue Jun 28 13:06:23 MDT 2022 Roman Li <roman.li@amd.com> drm/amdgpu: Add reg headers for DCN314

Register headers for the following IPs:
- DCN 3.1.4
- DPCS 3.1.4

v2:(squash) clean up (Alex)

Signed-off-by: Roman Li <roman.li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
/linux-master/drivers/gpu/drm/amd/amdgpu/
H A Dpsp_v13_0_4.h2605e60c Wed Jul 27 23:25:26 MDT 2022 Xiaojian Du <Xiaojian.Du@amd.com> drm/amdgpu: add files for PSP 13.0.4

This patch will add files for PSP 13.0.4.

Signed-off-by: Xiaojian Du <Xiaojian.Du@amd.com>
Reviewed-by: Tim Huang <Tim.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
/linux-master/drivers/gpu/drm/amd/include/asic_reg/mp/
H A Dmp_13_0_4_offset.h492af34c Wed Jul 27 23:23:38 MDT 2022 Xiaojian Du <Xiaojian.Du@amd.com> drm/amdgpu: add header files for MP 13.0.4

This patch will add header files for MP 13.0.4.

Signed-off-by: Xiaojian Du <Xiaojian.Du@amd.com>
Reviewed-by: Tim Huang <Tim.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
H A Dmp_13_0_4_sh_mask.h492af34c Wed Jul 27 23:23:38 MDT 2022 Xiaojian Du <Xiaojian.Du@amd.com> drm/amdgpu: add header files for MP 13.0.4

This patch will add header files for MP 13.0.4.

Signed-off-by: Xiaojian Du <Xiaojian.Du@amd.com>
Reviewed-by: Tim Huang <Tim.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
/linux-master/arch/x86/um/shared/sysdep/
H A Dfaultinfo_32.hdiff d0b5e15f Wed Mar 18 14:31:27 MDT 2015 Richard Weinberger <richard@nod.at> um: Remove SKAS3/4 support

Before we had SKAS0 UML had two modes of operation
TT (tracing thread) and SKAS3/4 (separated kernel address space).
TT was known to be insecure and got removed a long time ago.
SKAS3/4 required a few (3 or 4) patches on the host side which never went
mainline. The last host patch is 10 years old.

With SKAS0 mode (separated kernel address space using 0 host patches),
default since 2005, SKAS3/4 is obsolete and can be removed.

Signed-off-by: Richard Weinberger <richard@nod.at>
H A Dfaultinfo_64.hdiff d0b5e15f Wed Mar 18 14:31:27 MDT 2015 Richard Weinberger <richard@nod.at> um: Remove SKAS3/4 support

Before we had SKAS0 UML had two modes of operation
TT (tracing thread) and SKAS3/4 (separated kernel address space).
TT was known to be insecure and got removed a long time ago.
SKAS3/4 required a few (3 or 4) patches on the host side which never went
mainline. The last host patch is 10 years old.

With SKAS0 mode (separated kernel address space using 0 host patches),
default since 2005, SKAS3/4 is obsolete and can be removed.

Signed-off-by: Richard Weinberger <richard@nod.at>
/linux-master/tools/perf/util/bpf_skel/
H A Dbperf_u.h7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such waste by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate the readings in BPF maps.
Then, the perf-stat session(s) read the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
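
To make the layout above concrete: the 24-byte values in the accum_readings percpu_array line up with struct bpf_perf_event_value (counter, enabled, running). A hedged sketch of how a follower could aggregate the per-CPU slots of one element of such a map is below; how the map fd is obtained is omitted (the real perf code resolves it through the pinned perf_attr_map), so take the function as an illustration only.

#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Sum the per-CPU bpf_perf_event_value entries of one element of a
 * percpu_array such as the accum_readings map dumped above. */
static int sum_accum_readings(int map_fd)
{
	int ncpus = libbpf_num_possible_cpus();
	__u32 key = 0;

	if (ncpus <= 0)
		return -1;

	struct bpf_perf_event_value values[ncpus];
	struct bpf_perf_event_value total = {};

	/* for per-CPU maps, user space receives one value slot per CPU */
	if (bpf_map_lookup_elem(map_fd, &key, values))
		return -1;

	for (int cpu = 0; cpu < ncpus; cpu++) {
		total.counter += values[cpu].counter;
		total.enabled += values[cpu].enabled;
		total.running += values[cpu].running;
	}

	printf("counter=%llu enabled=%llu running=%llu\n",
	       (unsigned long long)total.counter,
	       (unsigned long long)total.enabled,
	       (unsigned long long)total.running);
	return 0;
}

The enabled/running fields are what perf stat turns into the "(41.87%)" multiplexing ratio shown in the first run; with bperf sharing a single perf_event per PMC, the counters stay enabled the whole time, as the second run shows.
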
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
7fac83aa Tue Mar 16 15:18:35 MDT 2021 Song Liu <songliubraving@fb.com> perf stat: Introduce 'bperf' to share hardware PMCs with BPF

The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.

On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such wastes by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session to read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings to BPF maps.
Then, the perf-stat session(s) reads the values from these BPF maps.

Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.

bperf is off by default. To enable it, pass --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user could change the
path with option --bpf-attr-map.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed

#

We can see that the counters were enabled for this workload 41.87% of
the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#

# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#

# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed

#

See? This time the counters were counting for the whole run, with no
multiplexing.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
/linux-master/sound/soc/codecs/
H A Dcs4234.hd4edae9c Sun Sep 27 17:18:21 MDT 2020 Lucas Tanure <tanureal@opensource.cirrus.com> ASoC: cs4234: Add support for Cirrus Logic CS4234 codec

The CS4234 is a highly versatile CODEC that combines 4 channels of
high-performance analog-to-digital conversion, 4 channels of
high-performance digital-to-analog conversion for audio, and 1 channel
of digital-to-analog conversion that provides a non-delayed audio
reference signal to an external Class H tracking power supply.

DAC5 is only supported as a 5th audio channel. Tracking Power Supply
mode is not currently supported by the driver.

In DSP_A mode the slots for DAC1-4 and optionally DAC5 can be set.
The codec always claims 4 slots for DAC1-4 and these must be in the
same nibble of the mask. The codec has a fixed mapping for ADC slots.

In I2S/LJ modes the codec has a fixed mapping for DAC1-4 and ADC1-4.
DAC5 is not available in these modes.
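
For illustration, a machine driver would typically pick the slots with
a fragment like the one below (the masks are board-specific
assumptions; the point is that the four DAC1-4 slots occupy a single
nibble of tx_mask):

  /* Sketch (machine-driver fragment, assumed values): DAC1-4 in slots
   * 0-3 (one nibble), optional DAC5 in slot 4, ADC1-4 on the codec's
   * fixed rx mapping; 8 slots of 32 bits on the bus. */
  ret = snd_soc_dai_set_tdm_slot(codec_dai, 0x1F, 0x0F, 8, 32);
  if (ret < 0)
          return ret;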

The MCLK source must be preset to a valid frequency before probe()
because it must be running all the time the codec is out of reset.
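
A board-level sketch of what that implies (the clock name and rate are
assumptions, not taken from this patch):

  /* Sketch: keep MCLK running before the codec leaves reset / probes. */
  struct clk *mclk = devm_clk_get(dev, "mclk");       /* assumed name */

  if (IS_ERR(mclk))
          return PTR_ERR(mclk);
  clk_set_rate(mclk, 24576000);                       /* e.g. 24.576 MHz */
  return clk_prepare_enable(mclk);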

The VA_SEL bit will be set automatically to 3.3 V or 5 V during probe()
based on the reported voltage of the regulator supplying VA.
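
Roughly the following idea, with placeholder register/bit names (the
real CS4234 defines are not spelled out here):

  /* Sketch: choose the VA_SEL range from the measured VA supply. */
  int uv = regulator_get_voltage(va_reg);             /* microvolts */

  if (uv > 0)
          regmap_update_bits(regmap,
                             CS4234_VA_REG,           /* placeholder */
                             CS4234_VA_SEL_MASK,      /* placeholder */
                             uv > 4000000 ? CS4234_VA_SEL_5V0    /* placeholder */
                                          : CS4234_VA_SEL_3V3);  /* placeholder */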

Signed-off-by: Lucas Tanure <tanureal@opensource.cirrus.com>
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20200928111821.26967-2-rf@opensource.cirrus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
d4edae9c Sun Sep 27 17:18:21 MDT 2020 Lucas Tanure <tanureal@opensource.cirrus.com> ASoC: cs4234: Add support for Cirrus Logic CS4234 codec

The CS4234 is a highly versatile CODEC that combines 4 channels of
high performance analog to digital conversion, 4 channels of high
performance digital to analog conversion for audio, and 1 channel of
digital to analog conversion to provide a nondelayed audio reference
signal to an external Class H tracking power supply.

DAC5 is only supported as a 5th audio channel. Tracking Power Supply
mode is not currently supported by the driver.

In DSP_A mode the slots for DAC1-4 and optionally DAC5 can be set.
The codec always claims 4 slots for DAC1-4 and these must be in the
same nibble of the mask. The codec has a fixed mapping for ADC slots.

In I2S/LJ modes the codec has a fixed mapping for DAC1-4 and ADC1-4.
DAC5 is not available in these modes.

The MCLK source must be preset to a valid frequency before probe()
because it must be running all the time the codec is out of reset.

The VA_SEL bit will be set automatically to 3.3v or 5v during probe()
based on the reported voltage of the regulator supplying VA.

Signed-off-by: Lucas Tanure <tanureal@opensource.cirrus.com>
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20200928111821.26967-2-rf@opensource.cirrus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
d4edae9c Sun Sep 27 17:18:21 MDT 2020 Lucas Tanure <tanureal@opensource.cirrus.com> ASoC: cs4234: Add support for Cirrus Logic CS4234 codec

The CS4234 is a highly versatile CODEC that combines 4 channels of
high performance analog to digital conversion, 4 channels of high
performance digital to analog conversion for audio, and 1 channel of
digital to analog conversion to provide a nondelayed audio reference
signal to an external Class H tracking power supply.

DAC5 is only supported as a 5th audio channel. Tracking Power Supply
mode is not currently supported by the driver.

In DSP_A mode the slots for DAC1-4 and optionally DAC5 can be set.
The codec always claims 4 slots for DAC1-4 and these must be in the
same nibble of the mask. The codec has a fixed mapping for ADC slots.

In I2S/LJ modes the codec has a fixed mapping for DAC1-4 and ADC1-4.
DAC5 is not available in these modes.

The MCLK source must be preset to a valid frequency before probe()
because it must be running all the time the codec is out of reset.

The VA_SEL bit will be set automatically to 3.3v or 5v during probe()
based on the reported voltage of the regulator supplying VA.

Signed-off-by: Lucas Tanure <tanureal@opensource.cirrus.com>
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20200928111821.26967-2-rf@opensource.cirrus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
d4edae9c Sun Sep 27 17:18:21 MDT 2020 Lucas Tanure <tanureal@opensource.cirrus.com> ASoC: cs4234: Add support for Cirrus Logic CS4234 codec

The CS4234 is a highly versatile CODEC that combines 4 channels of
high performance analog to digital conversion, 4 channels of high
performance digital to analog conversion for audio, and 1 channel of
digital to analog conversion to provide a nondelayed audio reference
signal to an external Class H tracking power supply.

DAC5 is only supported as a 5th audio channel. Tracking Power Supply
mode is not currently supported by the driver.

In DSP_A mode the slots for DAC1-4 and optionally DAC5 can be set.
The codec always claims 4 slots for DAC1-4 and these must be in the
same nibble of the mask. The codec has a fixed mapping for ADC slots.

In I2S/LJ modes the codec has a fixed mapping for DAC1-4 and ADC1-4.
DAC5 is not available in these modes.

The MCLK source must be preset to a valid frequency before probe()
because it must be running all the time the codec is out of reset.

The VA_SEL bit will be set automatically to 3.3v or 5v during probe()
based on the reported voltage of the regulator supplying VA.

Signed-off-by: Lucas Tanure <tanureal@opensource.cirrus.com>
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20200928111821.26967-2-rf@opensource.cirrus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
d4edae9c Sun Sep 27 17:18:21 MDT 2020 Lucas Tanure <tanureal@opensource.cirrus.com> ASoC: cs4234: Add support for Cirrus Logic CS4234 codec

The CS4234 is a highly versatile CODEC that combines 4 channels of
high performance analog to digital conversion, 4 channels of high
performance digital to analog conversion for audio, and 1 channel of
digital to analog conversion to provide a nondelayed audio reference
signal to an external Class H tracking power supply.

DAC5 is only supported as a 5th audio channel. Tracking Power Supply
mode is not currently supported by the driver.

In DSP_A mode the slots for DAC1-4 and optionally DAC5 can be set.
The codec always claims 4 slots for DAC1-4 and these must be in the
same nibble of the mask. The codec has a fixed mapping for ADC slots.

In I2S/LJ modes the codec has a fixed mapping for DAC1-4 and ADC1-4.
DAC5 is not available in these modes.

The MCLK source must be preset to a valid frequency before probe()
because it must be running all the time the codec is out of reset.

The VA_SEL bit will be set automatically to 3.3v or 5v during probe()
based on the reported voltage of the regulator supplying VA.

Signed-off-by: Lucas Tanure <tanureal@opensource.cirrus.com>
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20200928111821.26967-2-rf@opensource.cirrus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
/linux-master/arch/arm/boot/dts/aspeed/
aspeed-bmc-facebook-yosemite4.dts  2b8d94f4 Thu Aug 10 01:00:30 MDT 2023 Delphine CC Chiu <Delphine_CC_Chiu@wiwynn.com> ARM: dts: aspeed: yosemite4: add Facebook Yosemite 4 BMC

Add a Linux device tree entry for Yosemite 4 devices connected to the BMC.
The Yosemite 4 is a Meta multi-node server platform, based on AST2600 SoC.

Signed-off-by: Delphine CC Chiu <Delphine_CC_Chiu@wiwynn.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20230810070032.335161-3-Delphine_CC_Chiu@wiwynn.com
Signed-off-by: Joel Stanley <joel@jms.id.au>
/linux-master/scripts/dtc/include-prefixes/arm/aspeed/
aspeed-bmc-facebook-yosemite4.dts  2b8d94f4 Thu Aug 10 01:00:30 MDT 2023 Delphine CC Chiu <Delphine_CC_Chiu@wiwynn.com> ARM: dts: aspeed: yosemite4: add Facebook Yosemite 4 BMC

Add a Linux device tree entry for Yosemite 4 devices connected to the BMC.
The Yosemite 4 is a Meta multi-node server platform, based on AST2600 SoC.

Signed-off-by: Delphine CC Chiu <Delphine_CC_Chiu@wiwynn.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20230810070032.335161-3-Delphine_CC_Chiu@wiwynn.com
Signed-off-by: Joel Stanley <joel@jms.id.au>

