Searched hist:9461 (Results 26 - 34 of 34) sorted by relevance


/linux-master/drivers/clk/imx/
clk.h diff 379c9a24 Sat Mar 13 05:28:17 MST 2021 Adam Ford <aford173@gmail.com> clk: imx: Fix reparenting of UARTs not associated with stdout

Most, if not all, i.MX SoCs call a function which enables all UARTs.
This is a problem for users who need to re-parent the clock source,
because any attempt to change the parent results in a busy error
due to the fact that the clocks have already been enabled.

clk: failed to reparent uart1 to sys_pll1_80m: -16

Instead of pre-initializing all UARTs, scan the device tree to see
which UART clocks are associated with stdout, and only enable those
UART clocks if they are needed early. This defers initialization of
the remaining clocks until after the clocks have been reparented.

When the clocks are shut down, this mechanism will also disable any
clocks that were pre-initialized.

Fixes: 9461f7b33d11c ("clk: fix CLK_SET_RATE_GATE with clock rate protection")
Suggested-by: Aisheng Dong <aisheng.dong@nxp.com>
Signed-off-by: Adam Ford <aford173@gmail.com>
Reviewed-by: Abel Vesa <abel.vesa@nxp.com>
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Abel Vesa <abel.vesa@nxp.com>
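
As a rough illustration of the approach described above, here is a minimal sketch (not the actual imx clk driver code; the helper names and the module-level variable are made up) of enabling only the stdout UART clock early and releasing it again later:

/* Sketch only: enable just the clock of the UART referenced by
 * /chosen/stdout-path instead of unconditionally enabling every UART
 * clock. Names below are hypothetical.
 */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/of.h>

static struct clk *example_stdout_uart_clk;

static void example_enable_stdout_uart_clk(void)
{
	struct clk *clk;

	if (!of_stdout)		/* no stdout-path in the device tree */
		return;

	clk = of_clk_get(of_stdout, 0);	/* first clock of the stdout UART */
	if (IS_ERR(clk))
		return;

	if (clk_prepare_enable(clk)) {
		clk_put(clk);
		return;
	}
	example_stdout_uart_clk = clk;	/* remembered so it can be disabled later */
}

static void example_disable_stdout_uart_clk(void)
{
	if (!example_stdout_uart_clk)
		return;

	clk_disable_unprepare(example_stdout_uart_clk);
	clk_put(example_stdout_uart_clk);
	example_stdout_uart_clk = NULL;
}

Because only the stdout UART clock is touched, the other UART clocks remain gated and can be reparented freely before they are first enabled.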
/linux-master/scripts/package/
builddeb diff 9461f666 Fri Apr 24 11:08:24 MDT 2009 Frans Pop <elendil@planet.nl> kbuild, deb-pkg: generate debian/copyright file

On Thursday 23 April 2009, Frans Pop wrote:
Add a basic debian/copyright to the binary packages.

Based on an earlier patch from Maximilian Attems.

Signed-off-by: Frans Pop <elendil@planet.nl>
Acked-by: maximilian attems <max@stro.at>
Cc: Andres Salomon <dilinger@debian.org>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
/linux-master/drivers/net/wireless/intel/iwlwifi/
iwl-config.h diff b200dba7 Mon Mar 09 01:16:12 MDT 2020 Luca Coelho <luciano.coelho@intel.com> iwlwifi: map 9461 and 9462 using RF type and RF ID

These devices can be differentiated by their RF type and RF ID. Change
them to use these instead of relying on the subsystem device IDs.

This also fixes some names that did not include 160MHz (as they
should).

Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20200309091348.345de1efb3ec.Ib9221027a955188ea7c1ffca8a45bccd6c1e6a13@changeid
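
The general idea, sketched below with entirely hypothetical structure names and enum values (the real iwlwifi tables and the actual RF type/ID encodings differ), is to key the lookup on RF type and RF ID rather than on the PCI subsystem device ID:

/* Sketch only: a lookup keyed on RF type and RF ID instead of the PCI
 * subsystem device ID. All names and values here are placeholders.
 */
#include <linux/kernel.h>

enum example_rf_type { EXAMPLE_RF_TYPE_JF };
enum example_rf_id { EXAMPLE_RF_ID_9461, EXAMPLE_RF_ID_9462 };

struct example_rf_match {
	enum example_rf_type rf_type;
	enum example_rf_id rf_id;
	const char *name;
};

static const struct example_rf_match example_rf_table[] = {
	{ EXAMPLE_RF_TYPE_JF, EXAMPLE_RF_ID_9461, "Intel(R) Wireless-AC 9461" },
	{ EXAMPLE_RF_TYPE_JF, EXAMPLE_RF_ID_9462, "Intel(R) Wireless-AC 9462" },
};

static const char *example_rf_lookup(enum example_rf_type type,
				     enum example_rf_id id)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(example_rf_table); i++)
		if (example_rf_table[i].rf_type == type &&
		    example_rf_table[i].rf_id == id)
			return example_rf_table[i].name;

	return NULL;	/* caller falls back to a generic name */
}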
/linux-master/drivers/gpu/drm/i915/display/
intel_bios.c diff 9e372744 Fri Oct 13 08:02:14 MDT 2023 Ville Syrjälä <ville.syrjala@linux.intel.com> drm/i915/bios: Clamp VBT HDMI level shift on BDW

Apparently some BDW machines (e.g. the HP Pavilion 15-ab) shipped with
a VBT inherited from some earlier HSW model. On HSW the HDMI level
shift value could go up to 11, whereas on BDW the maximum value is 9.

The DDI code does clamp the bogus value, but it does so with a WARN,
which we don't really want. To avoid that, let's just sanitize the
bogus VBT HDMI level shift value ahead of time for all BDW machines.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/9461
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231013140214.1713-1-ville.syrjala@linux.intel.com
Reviewed-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
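
A minimal sketch of that kind of sanitization (hypothetical function and parameter names; only the BDW maximum of 9 comes from the message above), clamping the value during VBT parsing so the DDI code never sees it:

/* Sketch only: clamp an out-of-range VBT HDMI level shift while parsing
 * the VBT, before the DDI code can WARN about it. Names are made up;
 * the maximum of 9 for BDW is taken from the commit message above.
 */
#include <linux/types.h>

#define EXAMPLE_BDW_MAX_HDMI_LEVEL_SHIFT	9

static void example_sanitize_hdmi_level_shift(bool is_bdw, u8 *level_shift)
{
	if (is_bdw && *level_shift > EXAMPLE_BDW_MAX_HDMI_LEVEL_SHIFT)
		*level_shift = EXAMPLE_BDW_MAX_HDMI_LEVEL_SHIFT;
}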
/linux-master/drivers/clk/
clk.c diff 9461f7b3 Tue Jun 19 07:40:51 MDT 2018 Jerome Brunet <jbrunet@baylibre.com> clk: fix CLK_SET_RATE_GATE with clock rate protection

CLK_SET_RATE_GATE should prevent any operation which may result in a rate
change or glitch while the clock is prepared/enabled.

IOW, the following sequence is no longer allowed with CLK_SET_RATE_GATE:
* clk_get()
* clk_prepare_enable()
* clk_get_rate()
* clk_set_rate()

At the moment this is enforced on the leaf clock of the operation, not
along the tree. This is problematic because, if a PLL has CLK_SET_RATE_GATE,
it won't be enforced when clk_set_rate() is called on one of its child
clocks.

Using clock rate protection, we can now enforce CLK_SET_RATE_GATE along
the clock tree.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Quentin Schulz <quentin.schulz@free-electrons.com>
Tested-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Michael Turquette <mturquette@baylibre.com>
Link: lkml.kernel.org/r/20180619134051.16726-3-jbrunet@baylibre.com
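
From a consumer's point of view the rule now looks like the sketch below (the device, clock name, and rate are made up): with CLK_SET_RATE_GATE in effect anywhere the rate request propagates, set the rate while the clock is still gated, or clk_set_rate() will fail with -EBUSY.

/* Sketch only: a consumer that respects CLK_SET_RATE_GATE by configuring
 * the rate before preparing/enabling the clock.
 */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_setup_clock(struct device *dev)
{
	struct clk *clk;
	int ret;

	clk = devm_clk_get(dev, "baud");	/* "baud" is a made-up clock name */
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* Must happen while the clock is still gated; doing this after
	 * clk_prepare_enable() would now return -EBUSY when the flag is
	 * set anywhere along the protected path.
	 */
	ret = clk_set_rate(clk, 48000000);
	if (ret)
		return ret;

	return clk_prepare_enable(clk);
}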
/linux-master/drivers/net/wireless/ath/ath9k/
main.c diff bd96d390 Tue Oct 06 19:19:10 MDT 2009 Luis R. Rodriguez <lrodriguez@atheros.com> ath9k: move ath_cleanup() below helpers to avoid forward declarations

This should fix the oops which occurs during module unload
due to the dereferencing of ah upon debugfs exit.

IP: [<46412d6b>] 0x46412d6b
*pde = 00000000
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
last sysfs file: /sys/class/power_supply/BAT0/energy_full
Modules linked in: ath9k(-) ath9k_hw mac80211 ath cfg80211 <bleh>

Pid: 3112, comm: rmmod Not tainted (2.6.32-rc2-wl #101) 9461DUU
EIP: 0060:[<46412d6b>] EFLAGS: 00010246 CPU: 0
EIP is at 0x46412d6b
EAX: f5870004 EBX: f6700d94 ECX: 00000000 EDX: c14313a7
ESI: f5870000 EDI: fb58ce70 EBP: f6661eb4 ESP: f6661ea8
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process rmmod (pid: 3112, ti=f6660000 task=f6579380 task.ti=f6660000)
Stack:
fb57e5e5 f5ca5d50 fb58ce70 f6661ebc fb58629a f6661ec8 c11b715e f5ca5da8
<0> f6661ed8 c1223d98 f5ca5da8 f5ca5ddc f6661eec c1223e6f fb58ce70 fb58ce70
<0> c14958a0 f6661f00 c1222edb fb58ce70 fb58ce70 fb58cebc f6661f1c c12243c9
Call Trace:
[<fb57e5e5>] ? ath_cleanup+0x35/0x50 [ath9k]
[<fb58629a>] ? ath_pci_remove+0x1a/0x20 [ath9k]
[<c11b715e>] ? pci_device_remove+0x1e/0x40
[<c1223d98>] ? __device_release_driver+0x58/0xa0
[<c1223e6f>] ? driver_detach+0x8f/0xa0
[<c1222edb>] ? bus_remove_driver+0x7b/0xb0
[<c12243c9>] ? driver_unregister+0x49/0x80
[<c1158cf2>] ? sysfs_remove_file+0x12/0x20
[<c11b73b5>] ? pci_unregister_driver+0x35/0x90
[<fb586172>] ? ath_pci_exit+0x12/0x20 [ath9k]
[<fb5883ec>] ? ath9k_exit+0x10/0x3d [ath9k]
[<c131971d>] ? mutex_unlock+0xd/0x10
[<c1088c0f>] ? sys_delete_module+0x16f/0x220
[<c10e3d5d>] ? do_munmap+0x23d/0x290
[<c11a629c>] ? trace_hardirqs_off_thunk+0xc/0x10
[<c11a628c>] ? trace_hardirqs_on_thunk+0xc/0x10
[<c1003b41>] ? sysenter_exit+0xf/0x1a
[<c1003b08>] ? sysenter_do_call+0x12/0x3c
Code: Bad EIP value.
EIP: [<46412d6b>] 0x46412d6b SS:ESP 0068:f6661ea8
CR2: 0000000046412d6b
---[ end trace 847f3b05ff3dcb19 ]---

Reported-by: Vasanthakumar Thiagarajan <vasanth@atheros.com>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
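
The reordering itself is a generic pattern; a trivial sketch with made-up names: define the static helpers above their caller, so the caller can simply be moved below them and the forward declarations disappear.

/* Sketch only: with the helper defined first, the cleanup function
 * below it needs no forward declaration.
 */
static void example_helper(void)
{
	/* ... tear down one piece of state ... */
}

static void example_cleanup(void)
{
	example_helper();
}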
/linux-master/drivers/mfd/
Kconfig diff 9461f65a Sun Jun 14 16:10:24 MDT 2009 Philipp Zabel <philipp.zabel@gmail.com> mfd: asic3: enable DS1WM cell

This enables the ASIC3's DS1WM MFD cell, supported by the ds1wm driver.

Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
/linux-master/lib/
Kconfig.debug diff 8ff12cfc Thu Feb 07 18:47:41 MST 2008 Christoph Lameter <clameter@sgi.com> SLUB: Support for performance statistics

The statistics provided here allow the monitoring of allocator behavior but
at the cost of some (minimal) loss of performance. Counters are placed in
SLUB's per cpu data structure. The per cpu structure may be extended by the
statistics to grow larger than one cacheline which will increase the cache
footprint of SLUB.

There is a compile option to enable/disable the inclusion of the runtime
statistics, and it is off by default.

The slabinfo tool is enhanced to support these statistics via two options:

-D Switches the line of information displayed for a slab from size
mode to activity mode.

-A Sorts the slabs displayed by activity. This allows the display of
the slabs most important to the performance of a certain load.

-r Reports detailed statistics for the named cache.

Example (tbench load):

slabinfo -AD ->Shows the most active slabs

Name                  Objects      Alloc       Free  %Fast
skbuff_fclone_cache        33  111953835  111953835  99 99
:0000192                 2666    5283688    5281047  99 99
:0001024                  849    5247230    5246389  83 83
vm_area_struct           1349     119642     118355  91 22
:0004096                   15      66753      66751  98 98
:0000064                 2067      25297      23383  98 78
dentry                  10259      28635      18464  91 45
:0000080                11004      18950       8089  98 98
:0000096                 1703      12358      10784  99 98
:0000128                  762      10582       9875  94 18
:0000512                  184       9807       9647  95 81
:0002048                  479       9669       9195  83 65
anon_vma                  777       9461       9002  99 71
kmalloc-8                6492       9981       5624  99 97
:0000768                  258       7174       6931  58 15

So the skbuff_fclone_cache is of the highest importance for the tbench load.
There is also pretty high load on the 192-byte slab; look for its aliases:

slabinfo -a | grep 000192
:0000192 <- xfs_btree_cur filp kmalloc-192 uid_cache tw_sock_TCP
request_sock_TCPv6 tw_sock_TCPv6 skbuff_head_cache xfs_ili

Likely skbuff_head_cache.


Looking into the statistics of the skbuff_fclone_cache is possible through

slabinfo skbuff_fclone_cache    (the -r option is implied when a cache name is given)


.... Usual output ...

Slab Perf Counter         Alloc       Free  %Al  %Fr
--------------------------------------------------
Fastpath              111953360  111946981   99   99
Slowpath                   1044       7423    0    0
Page Alloc                  272        264    0    0
Add partial                  25        325    0    0
Remove partial               86        264    0    0
RemoteObj/SlabFrozen        350       4832    0    0
Total                 111954404  111954404

Flushes 49 Refill 0
Deactivate Full=325(92%) Empty=0(0%) ToHead=24(6%) ToTail=1(0%)

Looks good because the fastpath is overwhelmingly taken.


skbuff_head_cache:

Slab Perf Counter         Alloc       Free  %Al  %Fr
--------------------------------------------------
Fastpath                5297262    5259882   99   99
Slowpath                   4477      39586    0    0
Page Alloc                  937        824    0    0
Add partial                   0       2515    0    0
Remove partial             1691        824    0    0
RemoteObj/SlabFrozen       2621       9684    0    0
Total                   5301739    5299468

Deactivate Full=2620(100%) Empty=0(0%) ToHead=0(0%) ToTail=0(0%)


Descriptions of the output:

Total: The total number of allocations and frees that occurred for a
slab.

Fastpath: The number of allocations/frees that used the fastpath.

Slowpath: Other allocations

Page Alloc: Number of calls to the page allocator as a result of slowpath
processing

Add Partial: Number of slabs added to the partial list through free or
alloc (occurs during cpuslab flushes)

Remove Partial: Number of slabs removed from the partial list as a result of
allocations retrieving a partial slab or by a free freeing
the last object of a slab.

RemoteObj/Froz: How many times remotely freed objects were encountered when a
slab was about to be deactivated. Frozen: how many times a free
was able to skip list processing because the slab was in use as
the cpuslab of another processor.

Flushes: Number of times the cpuslab was flushed on request
(kmem_cache_shrink, may result from races in __slab_alloc)

Refill: Number of times we were able to refill the cpuslab from
remotely freed objects for the same slab.

Deactivate: Statistics on how slabs were deactivated; shows how they were
put onto the partial list.

In general the fastpath is very good. Slowpath without partial list processing
is also desirable. Any touching of the partial list uses node-specific locks,
which may potentially cause list lock contention.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
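
For a quick look at the counters behind this output without the slabinfo tool, a small userspace sketch is shown below; it assumes CONFIG_SLUB_STATS=y and that per-counter sysfs attributes such as alloc_fastpath and free_fastpath exist under /sys/kernel/slab/<cache>/ (the attribute names are an assumption; check the running kernel):

/* Sketch only: read two SLUB statistics counters straight from sysfs.
 * Each stat file starts with the total, which is all we parse here.
 */
#include <stdio.h>

static long example_read_stat(const char *cache, const char *stat)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/slab/%s/%s", cache, stat);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *cache = "skbuff_fclone_cache";

	printf("%s: alloc_fastpath=%ld free_fastpath=%ld\n", cache,
	       example_read_stat(cache, "alloc_fastpath"),
	       example_read_stat(cache, "free_fastpath"));
	return 0;
}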
/linux-master/mm/
slub.c diff 8ff12cfc Thu Feb 07 18:47:41 MST 2008 Christoph Lameter <clameter@sgi.com> SLUB: Support for performance statistics

