History log of /linux-master/drivers/cpufreq/cpufreq_ondemand.c
# 88debc69 22-Feb-2024 Pierre Gondois <pierre.gondois@arm.com>

cpufreq: Remove references to 10ms min sampling rate

A minimum sampling rate value of 10ms was introduced in:
commit cef9615a853e ("[CPUFREQ] ondemand: Uncouple minimal sampling rate from HZ in NO_HZ case")

The use of this value was removed in:
commit ed4676e25463 ("cpufreq: Replace "max_transition_latency" with "dynamic_switching"")

Remove:
- a comment referencing this value
- an unused macro associated with this value

Signed-off-by: Pierre Gondois <pierre.gondois@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 3e5c04f9 21-Jul-2022 Zhao Liu <zhao1.liu@linux.intel.com>

cpufreq: ondemand: Use cpumask_var_t for on-stack cpu mask

A cpumask structure on the stack can cause a warning with
CONFIG_NR_CPUS=8192 (e.g. Ubuntu 22.04 uses this):

drivers/cpufreq/cpufreq_ondemand.c: In function 'od_set_powersave_bias':
drivers/cpufreq/cpufreq_ondemand.c:449:1: warning: the frame size of
1032 bytes is larger than 1024 bytes [-Wframe-larger-than=]
449 | }
| ^

CONFIG_CPUMASK_OFFSTACK=y is enabled by default in most distributions, so
the warning can be avoided by using cpumask_var_t, which is then allocated
dynamically instead of living on the stack.
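
A minimal sketch of the off-stack pattern (illustrative only; the variable
name below is made up, not taken from the patch):

  cpumask_var_t done_cpus;

  if (!alloc_cpumask_var(&done_cpus, GFP_KERNEL))
          return;         /* bail out on allocation failure */
  cpumask_copy(done_cpus, policy->cpus);
  /* ... operate on done_cpus instead of a struct cpumask on the stack ... */
  free_cpumask_var(done_cpus);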

Signed-off-by: Zhao Liu <zhao1.liu@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 85750bcd 10-Mar-2022 Lianjie Zhang <zhanglianjie@uniontech.com>

cpufreq: unify show() and store() naming and use __ATTR_XX

Usually, sysfs attributes have .show and .store callbacks whose naming
convention is filename_show() and filename_store().

In cpufreq, however, these functions are named show_filename() and
store_filename(), which prevents __ATTR_RW() and __ATTR_RO() from being
used there to simplify the code.

Accordingly, change the naming convention of the sysfs .show and
.store methods in cpufreq to follow the one expected by __ATTR_RW()
and __ATTR_RO() and use these macros in that code.
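
For illustration, a hedged sketch of the pattern this enables (the
sampling_rate tunable and the placeholder bodies are examples, not the
actual patch):

  static ssize_t sampling_rate_show(struct gov_attr_set *attr_set, char *buf)
  {
          return sprintf(buf, "%u\n", 0);         /* placeholder body */
  }

  static ssize_t sampling_rate_store(struct gov_attr_set *attr_set,
                                     const char *buf, size_t count)
  {
          return count;                           /* placeholder body */
  }

  /* __ATTR_RW(x) wires up x_show()/x_store() with mode 0644 */
  static struct governor_attr sampling_rate = __ATTR_RW(sampling_rate);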

Signed-off-by: Lianjie Zhang <zhanglianjie@uniontech.com>
[ rjw: Subject and changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# fe262d5c 28-Dec-2021 Greg Kroah-Hartman <gregkh@linuxfoundation.org>

cpufreq: use default_groups in kobj_type

There are currently two ways to create a set of sysfs files for a
kobj_type: through the default_attrs field or through the default_groups
field. Move the cpufreq code to the default_groups field, which has been
the preferred way since aa30f47cf666 ("kobject: Add support for default
attribute groups to kobj_type") so that we can soon get rid of the
obsolete default_attrs field.
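
A rough sketch of the conversion pattern (the names below are
illustrative, not the exact cpufreq symbols):

  static struct attribute *gov_attrs[] = {
          &sampling_rate.attr,    /* hypothetical tunable attribute */
          NULL
  };
  ATTRIBUTE_GROUPS(gov);          /* defines gov_groups[] from gov_attrs[] */

  static struct kobj_type gov_ktype = {
          .sysfs_ops      = &governor_sysfs_ops,
          .default_groups = gov_groups,
  };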

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# b894d20e 08-Sep-2021 Vincent Donnefort <vincent.donnefort@arm.com>

cpufreq: Use CPUFREQ_RELATION_E in DVFS governors

Let the schedutil, conservative and ondemand governors work, if possible,
on efficient frequencies only.

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 1f39fa0d 08-Sep-2021 Vincent Donnefort <vincent.donnefort@arm.com>

cpufreq: Introducing CPUFREQ_RELATION_E

This newly introduced flag can be applied by a governor to a CPUFreq
relation when looking for a frequency within the policy table. The
resolution will then walk only through efficient frequencies.

Even with the flag set, the policy max limit will still be honoured. If no
efficient frequency can be found within the limits of the policy, an
inefficient one is returned.
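
A hedged usage sketch from a governor's perspective, assuming the flag is
combined with an existing relation via bitwise OR:

  /* Lowest frequency at or above target_freq, preferring efficient ones;
   * policy->min/max limits are still honoured. */
  __cpufreq_driver_target(policy, target_freq,
                          CPUFREQ_RELATION_L | CPUFREQ_RELATION_E);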

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 09681a07 03-Aug-2021 Sebastian Andrzej Siewior <bigeasy@linutronix.de>

cpufreq: Replace deprecated CPU-hotplug functions

The functions get_online_cpus() and put_online_cpus() have been
deprecated during the CPU hotplug rework. They map directly to
cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version.
The behavior remains unchanged.
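
The mechanical replacement looks like this (sketch):

  cpus_read_lock();               /* formerly get_online_cpus() */
  /* ... code that must not race with CPU hotplug ... */
  cpus_read_unlock();             /* formerly put_online_cpus() */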

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 10dd8573 29-Jun-2020 Quentin Perret <qperret@google.com>

cpufreq: Register governors at core_initcall

Currently, most CPUFreq governors are registered at core_initcall time
when the given governor is the default one, and at module_init time
otherwise.

In preparation for letting users specify the default governor on the
kernel command line, change all of them to be registered at core_initcall
time unconditionally, as is already the case for the schedutil and
performance governors. This will allow us to assume that built-in
governors have been registered before the built-in CPUFreq drivers probe.

And since all governors now have similar init/exit patterns, introduce
two new macros, cpufreq_governor_{init,exit}(), to factor out the common
code.
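
As an approximate sketch (not copied from the patch), such a registration
macro can look roughly like:

  #define cpufreq_governor_init(__governor)                     \
  static int __init __governor##_init(void)                     \
  {                                                             \
          return cpufreq_register_governor(&__governor);        \
  }                                                             \
  core_initcall(__governor##_init)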

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 3f6ec871 21-Oct-2019 Amit Kucheria <amit.kucheria@linaro.org>

cpufreq: Initialize the governors in core_initcall

Initialize the cpufreq governors earlier to allow for earlier
performance control during the boot process.

Signed-off-by: Amit Kucheria <amit.kucheria@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/b98eae9b44eb2f034d7f5d12a161f5f831be1eb7.1571656015.git.amit.kucheria@linaro.org


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 2d045036 19-Jul-2017 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Drop min_sampling_rate

The cpufreq core and governors aren't supposed to set a limit on how
fast we want to try changing the frequency. This is currently done for
the legacy governors with help of min_sampling_rate.

At worst, we may end up setting the sampling rate to a value lower than
the rate at which the frequency can actually be changed, and then one of
the CPUs in the policy will do nothing but change the frequency forever.

But that is something for the user to decide and there is no need to
have special handling for such cases in the core. Leave it for the user
to figure out.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 55687da1 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/cpufreq.h>

We are going to split <linux/sched/cpufreq.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/cpufreq.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 4dd63b49 03-Jan-2016 Chen Yu <yu.c.chen@intel.com>

cpufreq: ondemand: Set MIN_FREQUENCY_UP_THRESHOLD to 1

Currently the minimum up_threshold is 11, but a user may want to use a
smaller minimum up_threshold for performance tuning, so
MIN_FREQUENCY_UP_THRESHOLD can be set to 1 because:

1. Current systems wouldn't be affected, as they already have
a value >= 11.
2. New systems with a default kernel would still keep the default
value, which is >= 11.

Users can now make their own decisions and customize the 'trip point'
at which the governor switches to the maximum frequency.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=65501
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 26f0dbc9 07-Nov-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Don't use 'timer' keyword

The earlier implementation of the governors used background timers, so
functions, mutexes, etc. had the 'timer' keyword in their names.

That's not true anymore. Replace 'timer' with 'update', as those
functions and variables now revolve around frequency updates.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 82577360 26-Jun-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Reuse new freq-table helpers

This patch migrates a few users of cpufreq tables to the new helpers
that work on sorted freq-tables.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# d218ed77 02-Jun-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Return index from cpufreq_frequency_table_target()

This routine can't fail unless the frequency table is invalid and
doesn't contain any valid entries.

Make it return the index and WARN() in case it is used for an invalid
table.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 7ab4aabb 02-Jun-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Drop freq-table param to cpufreq_frequency_table_target()

The policy already has this pointer set, use it instead.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 34ac5d7a 02-Jun-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: Don't keep a copy of freq_table pointer

There is absolutely no need to keep a copy of the freq_table pointer in
'struct od_policy_dbs_info'. Use policy->freq_table instead.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# f8bfc116 02-Jun-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Remove cpufreq_frequency_get_table()

Most of the callers of cpufreq_frequency_get_table() already have the
pointer to a valid 'policy' structure and they don't really need to go
through the per-cpu variable first and then a check to validate the
frequency, in order to find the freq-table for the policy.

Directly use the policy->freq_table field instead for them.

Only one user of that API is left after the above changes, cpu_cooling.c,
and it accesses the freq_table in a racy way, as the policy can be freed
in the meantime.

Fix it by using cpufreq_cpu_get() properly.

Since there are no more users of cpufreq_frequency_get_table() left, get
rid of it.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Javi Merino <javi.merino@arm.com> (cpu_cooling.c)
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 9a15fb2c 18-May-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: Drop the 'initialized' field from struct cpufreq_governor

The 'initialized' field in struct cpufreq_governor is only used by
the conservative governor (as a usage counter) and the way that
happens is far from straightforward and arguably incorrect.

Namely, the value of 'initialized' is checked by
cpufreq_dbs_governor_init() and cpufreq_dbs_governor_exit() and
the results of those checks are passed (as the second argument) to
the ->init() and ->exit() callbacks in struct dbs_governor. Those
callbacks are only implemented by the ondemand and conservative
governors and ondemand doesn't use their second argument at all.
In turn, the conservative governor uses it to decide whether or not
to either register or unregister a transition notifier.

That whole mechanism is not only unnecessarily convoluted, but also
racy, because the 'initialized' field of struct cpufreq_governor is
updated in cpufreq_init_governor() and cpufreq_exit_governor() under
policy->rwsem which doesn't help if one of these functions is run
twice in parallel for different policies (which isn't impossible in
principle), for example.

Instead of it, add a proper usage counter to the conservative
governor and update it from cs_init() and cs_exit() which is
guaranteed to be non-racy, as those functions are only called
under gov_dbs_data_mutex which is global.

With that in place, drop the 'initialized' field from struct
cpufreq_governor as it is not used any more.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# a69d6b29 18-May-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Remove prints from allocation failures

These aren't required anymore as the allocation core already prints such
messages. Remove the redundant ones.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# e788892b 02-Jun-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Get rid of governor events

The design of the cpufreq governor API is not very straightforward,
as struct cpufreq_governor provides only one callback to be invoked
from different code paths for different purposes. The purpose it is
invoked for is determined by its second "event" argument, causing it
to act as a "callback multiplexer" of sorts.

Unfortunately, that leads to extra complexity in governors, some of
which implement the ->governor() callback as a switch statement
that simply checks the event argument and invokes a separate function
to handle that specific event.

That extra complexity can be eliminated by replacing the all-purpose
->governor() callback with a family of callbacks to carry out specific
governor operations: initialization and exit, start and stop and policy
limits updates. That also turns out to reduce the code size too, so
do it.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 0dd3c1d6 21-Mar-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: New data type for management part of dbs_data

In addition to fields representing governor tunables, struct dbs_data
contains some fields needed for the management of objects of that
type. As it turns out, that part of struct dbs_data may be shared
with (future) governors that won't use the common code used by
"ondemand" and "conservative", so move it to a separate struct type
and modify the code using struct dbs_data to follow.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 8c8f77fd 20-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Move per-CPU data to the common code

After previous changes there is only one piece of code in the
ondemand governor making references to per-CPU data structures,
but it can be easily modified to avoid doing that, so modify it
accordingly and move the definition of per-CPU data used by the
ondemand and conservative governors to the common code. Next,
change that code to access the per-CPU data structures directly
rather than via a governor callback.

This causes the ->get_cpu_cdbs governor callback to become
unnecessary, so drop it along with the macro and function
definitions related to it.

Finally, drop the definitions of struct od_cpu_dbs_info_s and
struct cs_cpu_dbs_info_s that aren't necessary any more.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 7d5a9956 18-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Make governor private data per-policy

Some fields in struct od_cpu_dbs_info_s and struct cs_cpu_dbs_info_s
are only used for a limited set of CPUs. Namely, if a policy is
shared between multiple CPUs, those fields will only be used for one
of them (policy->cpu). This means that they really are per-policy
rather than per-CPU and holding room for them in per-CPU data
structures is generally wasteful. Also moving those fields into
per-policy data structures will allow some significant simplifications
to be made going forward.

For this reason, introduce struct cs_policy_dbs_info and
struct od_policy_dbs_info to hold those fields. Define each of the
new structures as an extension of struct policy_dbs_info (such that
struct policy_dbs_info is embedded in each of them) and introduce
new ->alloc and ->free governor callbacks to allocate and free
those structures, respectively, such that ->alloc() will return
a pointer to the struct policy_dbs_info embedded in the allocated
data structure and ->free() will take that pointer as its argument.

With that, modify the code accessing the data fields in question
in per-CPU data objects to look for them in the new structures
via the struct policy_dbs_info pointer available to it and drop
them from struct od_cpu_dbs_info_s and struct cs_cpu_dbs_info_s.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# d1db75ff 17-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: ondemand: Rework the handling of powersave bias updates

The ondemand_powersave_bias_init() function used for resetting data
fields related to the powersave bias tunable of the ondemand governor
works by walking all of the online CPUs in the system and updating the
od_cpu_dbs_info_s structures for all of them.

However, if governor tunables are per policy, the update should not
touch the CPUs that are not associated with the given dbs_data.

Moreover, since the data fields in question are only ever used for
policy->cpu in each policy governed by ondemand, the update can be
limited to those specific CPUs.

Rework the code to take the above observations into account.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# a33cce1c 17-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Fix CPU load information updates via ->store

The ->store() callbacks of some tunable sysfs attributes of the
ondemand and conservative governors trigger immediate updates of
the CPU load information for all CPUs "governed" by the given
dbs_data by walking the cpu_dbs_info structures for all online
CPUs in the system and updating them.

This is questionable for two reasons. First, it may lead to a lot of
extra overhead on a system with many CPUs if the given dbs_data is
only associated with a few of them. Second, if governor tunables are
per-policy, the CPUs associated with the other sets of governor
tunables should not be updated.

To address this issue, use the observation that in all of the places
in question the update operation may be carried out in the same way
(because all of the tunables involved are now located in struct
dbs_data and readily available to the common code) and make the
code in those places invoke the same (new) helper function that
will carry out the update correctly.

That new function always checks the ignore_nice_load tunable value
and updates the CPUs' prev_cpu_nice data fields if that's set, which
wasn't done by the original code in store_io_is_busy(), but it
should have been done in there too.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 76c5f66a 17-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: ondemand: Drop one more callback from struct od_ops

The ->powersave_bias_init_cpu callback in struct od_ops is only used
in one place and that invocation may be replaced with a direct call
to the function pointed to by that callback, so change the code
accordingly and drop the callback.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 8434dadb 17-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Drop unused governor callback and data fields

After some previous changes, the ->get_cpu_dbs_info_s governor
callback and the "governor" field in struct dbs_governor (whose
value represents the governor type) are not used any more, so
drop them.

Also drop the unused gov_ops field from struct dbs_governor.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 702c9e54 17-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Add a ->start callback for governors

To avoid having to check the governor type explicitly in the common
code in order to initialize data structures specific to the governor
type properly, add a ->start callback to struct dbs_governor and
use it to initialize those data structures for the ondemand and
conservative governors.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 8847e038 17-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Move io_is_busy to struct dbs_data

The io_is_busy governor tunable is only used by the ondemand governor
and is located in the ondemand-specific data structure, but it is
looked at by the common governor code that has to do ugly things to
get to that value, so move it to struct dbs_data and modify ondemand
accordingly.

Since the conservative governor never touches that field, it will
be always 0 for that governor and it won't have any effect on the
results of computations in that case.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 8eb055d3 16-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: ondemand: Drop unused callback from struct od_ops

The ->freq_increase callback in struct od_ops is never invoked,
so drop it.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# a7f35cff 16-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: ondemand: Simplify od_update() slightly

Drop some lines of code from od_update() by arranging the statements
in there in a more logical way.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 07aa4402 14-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Use microseconds in sample delay computations

Do not convert microseconds to jiffies and the other way around
in governor computations related to the sampling rate and sample
delay and drop delay_for_sampling_rate() which isn't of any use
then.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 6e96c5b3 14-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: ondemand: Simplify conditionals in od_dbs_timer()

Reduce the indentation level in the conditionals in od_dbs_timer()
and drop the delay variable from it.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>


# 57dc3bcd 14-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Move rate_mult to struct policy_dbs

The rate_mult field in struct od_cpu_dbs_info_s is used by the code
shared with the conservative governor, and to access it that code
has to do an ugly governor type check. However, first of all it
is only ever used for policy->cpu, so it is per-policy rather than
per-CPU, and second, it is initialized to 1 by cpufreq_governor_start(),
so as long as the conservative governor never modifies it, it will have
no effect on the results of any computations.

For these reasons, move rate_mult to struct policy_dbs_info (as a
common field).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 4cccf755 14-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Get rid of the ->gov_check_cpu callback

The way the ->gov_check_cpu governor callback is used by the ondemand
and conservative governors is not really straightforward. Namely, the
governor calls dbs_check_cpu(), which updates the load information for
the policy and then invokes ->gov_check_cpu() for the governor.

To get rid of that entanglement, notice that cpufreq_governor_limits()
doesn't need to call dbs_check_cpu() directly. Instead, it can simply
reset the sample delay to 0 which will cause a sample to be taken
immediately. The result of that is practically equivalent to calling
dbs_check_cpu() except that it will trigger a full update of governor
internal state and not just the ->gov_check_cpu() part.

Following that observation, make cpufreq_governor_limits() reset
the sample delay and turn dbs_check_cpu() into a function that will
simply evaluate the load and return the result called dbs_update().

That function can now be called by governors from the routines that
previously were pointed to by ->gov_check_cpu and those routines
can be called directly by each governor instead of dbs_check_cpu().
This way ->gov_check_cpu becomes unnecessary, so drop it.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# a23d6d18 11-Feb-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay

Avoid extra checks in od_dbs_timer() by rearranging updates to the
local delay variable in it.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# aded387b 11-Feb-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: conservative: Update sample_delay_ns immediately

The ondemand governor already updates sample_delay_ns immediately on
updates to the sampling rate, but conservative doesn't do that.

It was left out earlier as the code was really too complex to get
that done easily. Things are sorted out very well now, however, and
the conservative governor can be modified to follow ondemand in that
respect.

Moreover, since the code needed to implement that in the
conservative governor would be identical to the corresponding
ondemand governor's code, make that code common and change both
governors to use it.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# c54df071 09-Feb-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Create and traverse list of policy_dbs to avoid deadlock

The dbs_data_mutex lock is currently used in two places. First,
cpufreq_governor_dbs() uses it to guarantee mutual exclusion between
invocations of governor operations from the core. Second, it is used by
ondemand governor's update_sampling_rate() to ensure the stability of
data structures walked by it.

The second usage is quite problematic, because update_sampling_rate() is
called from a governor sysfs attribute's ->store callback and that leads
to a deadlock scenario involving cpufreq_governor_exit() which runs
under dbs_data_mutex. Thus it is better to rework the code so
update_sampling_rate() doesn't need to acquire dbs_data_mutex.

To that end, rework update_sampling_rate() to walk a list of policy_dbs
objects associated with the dbs_data it has been called for (instead of
walking the cpu_dbs_info objects for all CPUs). Since the list
manipulation is protected with dbs_data->mutex, which is also held around
the execution of update_sampling_rate(), it is no longer necessary to
hold dbs_data_mutex in that function.

Reported-by: Juri Lelli <juri.lelli@arm.com>
Reported-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# c4435630 08-Feb-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: New sysfs show/store callbacks for governor tunables

The ondemand and conservative governors use the global-attr or freq-attr
structures to represent sysfs attributes corresponding to their tunables
(which of them is actually used depends on whether or not different
policy objects can use the same governor with different tunables at the
same time and, consequently, on where those attributes are located in
sysfs).

Unfortunately, in the freq-attr case, the standard cpufreq show/store
sysfs attribute callbacks are applied to the governor tunable attributes
and they always acquire the policy->rwsem lock before carrying out the
operation. That may lead to an ABBA deadlock if governor tunable
attributes are removed under policy->rwsem while one of them is being
accessed concurrently (if sysfs attributes removal wins the race, it
will wait for the access to complete with policy->rwsem held while the
attribute callback will block on policy->rwsem indefinitely).

We attempted to address this issue by dropping policy->rwsem around
governor tunable attributes removal (that is, around invocations of the
->governor callback with the event arg equal to CPUFREQ_GOV_POLICY_EXIT)
in cpufreq_set_policy(), but that opened up race conditions that had not
been possible with policy->rwsem held all the time. Therefore
policy->rwsem cannot be dropped in cpufreq_set_policy() at any point,
but the deadlock situation described above must be avoided too.

To that end, use the observation that in principle governor tunables may
be represented by the same data type regardless of whether the governor
is system-wide or per-policy and introduce a new structure, struct
governor_attr, for representing them and new corresponding macros for
creating show/store sysfs callbacks for them. Also make their parent
kobject use a new kobject type whose default show/store callbacks are
not related to the standard core cpufreq ones in any way (and they don't
acquire policy->rwsem in particular).

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
[ rjw: Subject & changelog + rebase ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# ff4b1789 08-Feb-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Move common tunables to 'struct dbs_data'

There are a few common tunables shared between the ondemand and
conservative governors. Move them to struct dbs_data to simplify
code.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# d0684d3b 08-Feb-2016 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Create generic macro for common tunables

Some tunables are present in governor-specific structures, whereas one
(min_sampling_rate) is located directly in struct dbs_data.

There is a special macro for creating its sysfs attribute and the
show/store callbacks, but since more tunables are going to be moved
to struct dbs_data, a new generic macro for such cases will be useful,
so add it and use it for min_sampling_rate.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# bc505475 07-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Rearrange governor data structures

The struct policy_dbs_info objects representing per-policy governor
data are not accessible directly from the corresponding policy
objects. To access them, one has to get a pointer to the
struct cpu_dbs_info of policy->cpu and use the policy_dbs field of
that which isn't really straightforward.

To address that rearrange the governor data structures so the
governor_data pointer in struct cpufreq_policy will point to
struct policy_dbs_info (instead of struct dbs_data) and that will
contain a pointer to struct dbs_data.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# d10b5eb5 06-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Drop cpu argument from dbs_check_cpu()

Since policy->cpu is always passed as the second argument to
dbs_check_cpu(), it is not really necessary to pass it, because
the function can obtain that value via its first argument just fine.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# e40e7b25 10-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Rename cpu_common_dbs_info to policy_dbs_info

The struct cpu_common_dbs_info structure represents the per-policy
part of the governor data (for the ondemand and conservative
governors), but its name doesn't reflect its purpose.

Rename it to struct policy_dbs_info and rename variables related to
it accordingly.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# ea59ee0d 07-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Drop the gov pointer from struct dbs_data

Since it is possible to obtain a pointer to struct dbs_governor
from a pointer to the struct governor embedded in it with the help
of container_of(), the additional gov pointer in struct dbs_data
isn't really necessary.

Drop that pointer and make the code using it reach the dbs_governor
object via policy->governor.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 906a6e5a 07-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Rework cpufreq_governor_dbs()

Since it is possible to obtain a pointer to struct dbs_governor
from a pointer to the struct governor embedded in it via
container_of(), the second argument of cpufreq_governor_init()
is not necessary. Accordingly, cpufreq_governor_dbs() doesn't
need its second argument either and the ->governor callbacks
for both the ondemand and conservative governors may be set
to cpufreq_governor_dbs() directly. Make that happen.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 7bdad34d 07-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Rename some data types and variables

The ondemand and conservative governors are represented by
struct common_dbs_data whose name doesn't reflect the purpose it
is used for, so rename it to struct dbs_governor and rename
variables of that type accordingly.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# af926185 04-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Put governor structure into common_dbs_data

For the ondemand and conservative governors (generally, governors
that use the common code in cpufreq_governor.c), there are two static
data structures representing the governor, the struct governor
structure (the interface to the cpufreq core) and the struct
common_dbs_data one (the interface to the cpufreq_governor.c code).

There's no fundamental reason why those two structures have to be
separate. Moreover, if the struct governor one is included into
struct common_dbs_data, it will be possible to reach the latter from
the policy via its policy->governor pointer, so it won't be necessary
to pass a separate pointer to it around. For this reason, embed
struct governor in struct common_dbs_data.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 2bb8d94f 07-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Use common mutex for dbs_data protection

Every governor relying on the common code in cpufreq_governor.c
has to provide its own mutex in struct common_dbs_data. However,
there actually is no need to have a separate mutex per governor
for this purpose, they may be using the same global mutex just
fine. Accordingly, introduce a single common mutex for that and
drop the mutex field from struct common_dbs_data.

That at least will ensure that the mutex is always present and
initialized regardless of what the particular governors do.

Another benefit is that the common code does not need a pointer to
a governor-related structure to get to the mutex which sometimes
helps.

Finally, it makes the code generally easier to follow.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# 9be4fd2c 10-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: governor: Replace timers with utilization update callbacks

Instead of using a per-CPU deferrable timer for queuing up governor
work items, register a utilization update callback that will be
invoked from the scheduler on utilization changes.

The sampling rate is still the same as what was used for the
deferrable timers and the added irq_work overhead should be offset by
the eliminated timers overhead, so in theory the functional impact of
this patch should not be significant.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>


# de1df26b 04-Feb-2016 Rafael J. Wysocki <rafael.j.wysocki@intel.com>

cpufreq: Clean up default and fallback governor setup

The preprocessor magic used for setting the default cpufreq governor
(and for using the performance governor as a fallback one for that
matter) is really nasty, so replace it with __weak functions and
overrides.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>


# f08f638b 02-Dec-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: update update_sampling_rate() to make it more efficient

Currently update_sampling_rate() runs over each online CPU and
cancels/queues timers on all policy->cpus every time, so a policy is
processed once per each of its CPUs. This should be done just once per
policy.

Create a cpumask of CPUs still to be handled and clear every policy's
CPUs from it as soon as that policy is processed, so that we don't
traverse the remaining CPUs of the same policy again.
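
A hedged sketch of that pattern (variable names are illustrative):

  cpumask_var_t cpus;     /* CPUs whose policies still need processing */
  unsigned int cpu;

  if (!alloc_cpumask_var(&cpus, GFP_KERNEL))
          return;
  cpumask_copy(cpus, cpu_online_mask);

  for_each_cpu(cpu, cpus) {
          struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

          if (!policy)
                  continue;
          /* ... update the sampling rate for this policy once ... */
          cpumask_andnot(cpus, cpus, policy->cpus);  /* skip its siblings */
          cpufreq_cpu_put(policy);
  }
  free_cpumask_var(cpus);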

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 70f43e5e 08-Dec-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: replace per-CPU delayed work with timers

cpufreq governors evaluate load at sampling rate and based on that they
update frequency for a group of CPUs belonging to the same cpufreq
policy.

This is required to be done in a single thread for all policy->cpus, but
because we don't want to wake up idle CPUs just for that, we use
deferrable work for it. If we used a single delayed deferrable work for
the entire policy, there would be a chance that the CPU required to run
the handler is idle, and we might end up not changing the frequency for
the entire group despite load variations.

And so we were forced to keep per-CPU works, where only the one that
expires first needs to do the real work and the others are rescheduled
for the next sampling time.

We have been using the more complex solution until now, where we used a
delayed deferrable work for this, which is a combination of a timer and
a work item.

This can be made more lightweight by keeping per-CPU deferrable timers
with a single work item, which is scheduled by the first timer that
expires.

This patch does just that and here are important changes:
- The timer handler will run in irq context, so we need to use a
spinlock instead of the timer_mutex, and a separate timer_lock is
created for that. This also makes the roles of the mutex and the lock
quite clear, as we know exactly what each of them protects.
- A new field, 'skip_work', is added to track when the timer handlers
may queue a work item. More details are in the code comments.

Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# affde5d0 02-Dec-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Pass policy as argument to ->gov_dbs_timer()

Pass 'policy' as argument to ->gov_dbs_timer() instead of cdbs and
dbs_data.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# e68fe18c 02-Dec-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: Work is guaranteed to be pending

We are guaranteed to have works scheduled for policy->cpus, as the
policy isn't stopped yet. And so there is no need to check that again.
Drop it.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# e128c864 02-Dec-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: Update sampling rate only for concerned policies

We are comparing policy->governor against cpufreq_gov_ondemand to make
sure that we update sampling rate only for the concerned CPUs. But that
isn't enough.

In the governor-per-policy case, there can be multiple instances of the
ondemand governor, and with the current code we always end up updating
all of them. What we rather need to do is compare dbs_data with
policy->governor_data, which will match only for the policies governed
by this dbs_data.

This code is also racy, as the governor might be getting stopped at that
time and we may end up scheduling work for a policy that we have just
disabled.

Fix that by protecting the entire function with &od_dbs_cdata.mutex,
which will prevent races with policy START/STOP/etc.

After these locks are in place, we can safely get the policy via per-cpu
dbs_info.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 083701b1 13-Oct-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: Drop unnecessary locks from update_sampling_rate()

'timer_mutex' is required to synchronize the work handlers of
policy->cpus. update_sampling_rate() is just canceling the works and
queuing them again, so the lock isn't protecting anything at all in
update_sampling_rate() and is not going to be of any use there.

Even if a work-handler is already running for a CPU,
cancel_delayed_work_sync() will wait for it to finish.

Drop these unnecessary locks.

Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 43e0ee36 18-Jul-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: split out common part of {cs|od}_dbs_timer()

Part of cs_dbs_timer() and od_dbs_timer() is exactly the same and is
unnecessarily duplicated.

Create the real work handler in cpufreq_governor.c and put the common
code in that routine (dbs_timer()).

This shouldn't make any functional change.

Reviewed-and-tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 44152cb8 18-Jul-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Keep single copy of information common to policy->cpus

Some information is common to all CPUs belonging to a policy, but it is
kept on a per-CPU basis. Let's keep it in another structure common to all
policy->cpus. That will make updates/reads of it less complex and less
error prone.

The memory for cpu_common_dbs_info is allocated/freed at INIT/EXIT, so
that we don't reallocate it for the STOP/START sequence. It will also be
used (in the next patch) while the governor is stopped, so it must not be
freed that early.

Reviewed-and-tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 42994af6 19-Jun-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: rename cur_policy as policy

Just call it 'policy'; cur_policy is unnecessarily long and doesn't
have any special meaning.

Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 386d46e6 19-Jun-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Name delayed-work as dwork

The delayed work was named 'work', so to access the work within it we
had to write work.work, which is not very readable. Rename the
delayed_work field to 'dwork'.

Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 732b6d61 03-Jun-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Serialize governor callbacks

There are several races reported in cpufreq core around governors (only
ondemand and conservative) by different people.

There are at least two race scenarios present in governor code:
(a) Concurrent access/updates of governor internal structures.

It is possible that fields such as 'dbs_data->usage_count', etc. are
accessed simultaneously for different policies using same governor
structure (i.e. CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag unset). And
because of this we can dereference bad pointers.

For example consider a system with two CPUs with separate 'struct
cpufreq_policy' instances. CPU0 governor: ondemand and CPU1: powersave.
CPU0 switching to powersave and CPU1 to ondemand:
CPU0                            CPU1

store*                          store*

cpufreq_governor_exit()         cpufreq_governor_init()
                                dbs_data = cdata->gdbs_data;
if (!--dbs_data->usage_count)
        kfree(dbs_data);

                                dbs_data->usage_count++;
                                *Bad pointer dereference*

There are other possible races between EXIT and START/STOP/LIMIT as
well. It's really complicated.

(b) Switching governor state in bad sequence:

For example, trying to switch a governor to the START state when the
governor is in the EXIT state. There are some checks present in
__cpufreq_governor(), but they aren't sufficient, as they compare events
against 'policy->governor_enabled', whereas we need to take into account
the governor's own state, since the governor can be used by multiple
policies.

These two issues need to be solved separately and the responsibility
should be properly divided between cpufreq and governor core.

The first problem is more about the governor core, as it needs to
protect its structures properly. The second problem should be fixed
in the cpufreq core instead of the governor, as it's all about the
sequence of events.

This patch is trying to solve only the first problem.

There are two types of data we need to protect,
- 'struct common_dbs_data': No matter what, there is going to be a
single copy of this per governor.
- 'struct dbs_data': With CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag set, we
will have per-policy copy of this data, otherwise a single copy.

Because of such complexities, the mutex present in 'struct dbs_data' is
insufficient to solve our problem. For example we need to protect
fetching of 'dbs_data' from different structures at the beginning of
cpufreq_governor_dbs(), to make sure it isn't currently being updated.

This can be fixed if we can guarantee serialization of event parsing
code for an individual governor. This is best solved with a mutex per
governor, and the placeholder for that is 'struct common_dbs_data'.

And so this patch moves the mutex from 'struct dbs_data' to 'struct
common_dbs_data' and takes it at the beginning and drops it at the end
of cpufreq_governor_dbs().

Tested with and without following configuration options:

CONFIG_LOCKDEP_SUPPORT=y
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_PI_LIST=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCKDEP=y
CONFIG_DEBUG_ATOMIC_SLEEP=y

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 8e0484d2 03-Jun-2015 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: register notifier from cs_init()

Notifiers are required only by the conservative governor, and the common
governor code is unnecessarily polluted with them. Handle them from
cs_init()/cs_exit() instead of cpufreq_governor_dbs().

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 6393d6a1 30-Jun-2014 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: ondemand: Eliminate the deadband effect

Currently, ondemand calculates the target frequency proportional to load
using the formula:
Target frequency = C * load
where C = policy->cpuinfo.max_freq / 100

However, in many cases the minimum available frequency is pretty high and
the above calculation introduces a dead band from load 0 to
100 * policy->cpuinfo.min_freq / policy->cpuinfo.max_freq, where the target
frequency always comes out below policy->cpuinfo.min_freq and
the minimum frequency is selected.

For example: on Intel i7-3770 @ 3.4GHz the policy->cpuinfo.min_freq = 1600000
and the policy->cpuinfo.max_freq = 3400000 (without turbo). Thus, the CPU
starts to scale up at a load above 47.
On quad core 1500MHz Krait the policy->cpuinfo.min_freq = 384000
and the policy->cpuinfo.max_freq = 1512000. Thus, the CPU starts to scale
at load above 25.

Change the calculation of target frequency to eliminate the above effect using
the formula:

Target frequency = A + B * load
where A = policy->cpuinfo.min_freq and
B = (policy->cpuinfo.max_freq - policy->cpuinfo->min_freq) / 100

This will map load values 0 to 100 linearly to cpuinfo.min_freq to
cpuinfo.max_freq.
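
A minimal C sketch of this mapping (illustrative; it mirrors the formula
above rather than the exact patch):

  /* load is in the range 0..100 */
  unsigned int min_f = policy->cpuinfo.min_freq;
  unsigned int max_f = policy->cpuinfo.max_freq;
  unsigned int freq_next = min_f + load * (max_f - min_f) / 100;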

Also, use CPUFREQ_RELATION_C in __cpufreq_driver_target() to select the
closest frequency in the frequency table. This is necessary to avoid the
minimum frequency being selected only when the load is exactly 0. It also
helps select frequencies using a fairer criterion.

Tables below show the difference in selected frequency for specific values
of load without and with this patch. On Intel i7-3770 @ 3.40GHz:
Load   Target (without)   Selected (without)   Target (with)   Selected (with)
0 0 1600000 1600000 1600000
5 170050 1600000 1690050 1700000
10 340100 1600000 1780100 1700000
15 510150 1600000 1870150 1900000
20 680200 1600000 1960200 2000000
25 850250 1600000 2050250 2100000
30 1020300 1600000 2140300 2100000
35 1190350 1600000 2230350 2200000
40 1360400 1600000 2320400 2400000
45 1530450 1600000 2410450 2400000
50 1700500 1900000 2500500 2500000
55 1870550 1900000 2590550 2600000
60 2040600 2100000 2680600 2600000
65 2210650 2400000 2770650 2800000
70 2380700 2400000 2860700 2800000
75 2550750 2600000 2950750 3000000
80 2720800 2800000 3040800 3000000
85 2890850 2900000 3130850 3100000
90 3060900 3100000 3220900 3300000
95 3230950 3300000 3310950 3300000
100 3401000 3401000 3401000 3401000

On ARM quad core 1500MHz Krait:
Load   Target (without)   Selected (without)   Target (with)   Selected (with)
0 0 384000 384000 384000
5 75600 384000 440400 486000
10 151200 384000 496800 486000
15 226800 384000 553200 594000
20 302400 384000 609600 594000
25 378000 384000 666000 702000
30 453600 486000 722400 702000
35 529200 594000 778800 810000
40 604800 702000 835200 810000
45 680400 702000 891600 918000
50 756000 810000 948000 918000
55 831600 918000 1004400 1026000
60 907200 918000 1060800 1026000
65 982800 1026000 1117200 1134000
70 1058400 1134000 1173600 1134000
75 1134000 1134000 1230000 1242000
80 1209600 1242000 1286400 1242000
85 1285200 1350000 1342800 1350000
90 1360800 1458000 1399200 1350000
95 1436400 1458000 1455600 1458000
100 1512000 1512000 1512000 1512000

Tested on Intel i7-3770 CPU @ 3.40GHz and on ARM quad core 1500MHz Krait
(Android smartphone).
Benchmarks on the Intel i7 show a performance improvement on low and
medium workloads with lower power consumption. Specifics:

Phoronix Linux Kernel Compilation 3.1:
Time: -0.40%, energy: -0.07%
Phoronix Apache:
Time: -4.98%, energy: -2.35%
Phoronix FFMPEG:
Time: -6.29%, energy: -4.02%

Also, running mp3 decoding (very low load) shows no differences with and
without this patch.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 880eef04 31-Oct-2013 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: ondemand: Remove redundant return statement

After commit dfa5bb622555 (cpufreq: ondemand: Change the calculation
of target frequency), this return statement is no longer needed.

Reported-by: Henrik Nilsson <Karl.Henrik.Nilsson@gmail.com>
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 934dac1e 26-Aug-2013 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: governors: Remove duplicate check of target freq in supported range

Function __cpufreq_driver_target() checks if target_freq is within
policy->min and policy->max range. generic_powersave_bias_target() also
checks if target_freq is valid via a cpufreq_frequency_table_target()
call. So, drop the unnecessary duplicate check in *_check_cpu().

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# d5b73cd8 06-Aug-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Use sizeof(*ptr) convention for computing sizes

Chapter 14 of Documentation/CodingStyle says:

The preferred form for passing a size of a struct is the following:

p = kmalloc(sizeof(*p), ...);

The alternative form where struct name is spelled out hurts
readability and introduces an opportunity for a bug when the pointer
variable type is changed but the corresponding sizeof that is passed
to a memory allocator is not.

This wasn't followed consistently in drivers/cpufreq, let's make it
more consistent by always following this rule.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 3a3e9e06 06-Aug-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Give consistent names to cpufreq_policy objects

They are called policy, cur_policy, new_policy, data, etc. Just call
them policy wherever possible.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 5ff0a268 06-Aug-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Clean up header files included in the core

This patch addresses the following issues in the header files in the
cpufreq core:
- Include headers in ascending order, so that we don't add the same
header multiple times by mistake.
- <asm/> must be included after <linux/>, so that they override
whatever they need to.
- Remove unnecessary includes.
- Don't include files already included by cpufreq.h or
cpufreq_governor.h.

[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 6c4640c3 04-Aug-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: rename ignore_nice as ignore_nice_load

This sysfs file was called ignore_nice_load earlier and commit
4d5dcc4 (cpufreq: governor: Implement per policy instances of
governors) changed its name to ignore_nice by mistake.

Let's get it renamed back to its original name.

Reported-by: Martin von Gagern <Martin.vGagern@gmx.net>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# dfa5bb62 05-Jun-2013 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: ondemand: Change the calculation of target frequency

The ondemand governor calculates load in terms of frequency and
increases it only if load_freq is greater than up_threshold
multiplied by the current or average frequency. This appears to
produce oscillations of frequency between min and max because,
for example, a relatively small load can easily saturate minimum
frequency and lead the CPU to the max. Then, it will decrease
back to the min due to small load_freq.

Change the calculation method of load and target frequency on the
basis of the following two observations:

- Load computation should not depend on the current or average
measured frequency. For example, absolute load of 80% at 100MHz
is not necessarily equivalent to 8% at 1000MHz in the next
sampling interval.

- It should be possible to increase the target frequency to any
value present in the frequency table proportional to the absolute
load, rather than to the max only, so that:

Target frequency = C * load

where we take C = policy->cpuinfo.max_freq / 100.
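
A minimal user-space sketch of this proportional rule (frequency values
illustrative, not the kernel code itself); the result is later clamped to
policy->min/max and snapped to a real table frequency by the core:

#include <stdio.h>

int main(void)
{
    unsigned int max_freq = 1512000;            /* kHz, policy->cpuinfo.max_freq */
    unsigned int loads[] = { 10, 50, 80, 100 }; /* percent busy */

    for (int i = 0; i < 4; i++) {
        /* Target frequency = C * load, with C = max_freq / 100 */
        unsigned int target = (max_freq / 100) * loads[i];
        printf("load %3u%% -> target %u kHz\n", loads[i], target);
    }
    return 0;
}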

Tested on Intel i7-3770 CPU @ 3.40GHz and on Quad core 1500MHz Krait.
Phoronix benchmark of Linux Kernel Compilation 3.1 test shows an
increase ~1.5% in performance. cpufreq_stats (time_in_state) shows
that middle frequencies are used more, with this patch. Highest
and lowest frequencies were used less by ~9%.

[rjw: We have run multiple other tests on kernels with this
change applied and in the vast majority of cases it turns out
that the resulting performance improvement also leads to reduced
consumption of energy. The change is additionally justified by
the overall simplification of the code in question.]

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# c2837558 25-Jun-2013 Jacob Shin <jacob.shin@amd.com>

cpufreq: fix NULL pointer dereference at od_set_powersave_bias()

When initializing the default powersave_bias value, we need to first
make sure that this policy is running the ondemand governor.

Reported-and-tested-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Jacob Shin <jacob.shin@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
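
A hedged sketch of the kind of guard this implies (names approximate, not
the literal patch): skip any policy that is not currently governed by
ondemand before touching ondemand's private per-CPU data.

static void od_set_powersave_bias_sketch(unsigned int powersave_bias)
{
    unsigned int cpu;

    for_each_online_cpu(cpu) {
        struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

        if (!policy)
            continue;
        if (policy->governor != &cpufreq_gov_ondemand) {
            /* Another governor owns this policy; its private data
             * is not ondemand's, so don't dereference it. */
            cpufreq_cpu_put(policy);
            continue;
        }
        /* ... safe to update ondemand's per-policy data here ... */
        cpufreq_cpu_put(policy);
    }
}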


# 60e6726c 08-May-2013 Borislav Petkov <bp@suse.de>

cpufreq, ondemand: Remove leftover debug line

I don't see how the virtual address of the tuners pointer would be of
any help to anyone so remove it.

Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# fb30809e 02-Apr-2013 Jacob Shin <jacob.shin@amd.com>

cpufreq: ondemand: allow custom powersave_bias_target handler to be registered

This allows another [arch specific] driver to hook into the existing
powersave bias function of the ondemand governor, i.e. it allows an
AMD-specific powersave bias function (in a separate AMD-specific driver)
to aid the ondemand governor's frequency transition decisions.

Signed-off-by: Jacob Shin <jacob.shin@amd.com>
Acked-by: Thomas Renninger <trenn@suse.de>
Acked-by: Borislav Petkov <bp@suse.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
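
A hedged sketch of how a separate arch driver might use the hook added
here (the callback body is purely illustrative):

static unsigned int my_powersave_bias_target(struct cpufreq_policy *policy,
                                             unsigned int freq_next,
                                             unsigned int relation)
{
    /* An AMD-specific driver would consult hardware feedback here and
     * return the frequency it wants ondemand to actually request. */
    return freq_next;
}

static int __init my_driver_init(void)
{
    /* Register the callback; the second argument is the powersave_bias
     * value to apply while the handler is registered. */
    od_register_powersave_bias_handler(my_powersave_bias_target, 0);
    return 0;
}

static void __exit my_driver_exit(void)
{
    od_unregister_powersave_bias_handler();
}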


# 9366d840 28-Feb-2013 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: governors: Calculate iowait time only when necessary

Currently we always calculate the CPU iowait time and add it to idle time.
If we are in ondemand and we use io_is_busy, we re-calculate iowait time
and subtract it from idle time.

With this patch, iowait time is calculated only when necessary, avoiding
the double call to get_cpu_iowait_time_us. We use a parameter in the
get_cpu_idle_time function to indicate whether the iowait time should be
added to idle time, without the need to keep prev_io_wait.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
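
A hedged reconstruction of the idea (not the literal patch): a single
io_busy flag decides whether iowait is folded back into idle time, so
callers no longer need the separate prev_io_wait bookkeeping.

static u64 get_cpu_idle_time_sketch(unsigned int cpu, u64 *wall, int io_busy)
{
    u64 idle = get_cpu_idle_time_us(cpu, io_busy ? wall : NULL);

    if (!io_busy)
        /* iowait still counts as idle: add it back in. */
        idle += get_cpu_iowait_time_us(cpu, wall);

    return idle;
}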


# 031299b3 26-Feb-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governors: Avoid unnecessary per cpu timer interrupts

The following patch introduced per-CPU timers/works for the ondemand and
conservative governors.

commit 2abfa876f1117b0ab45f191fb1f82c41b1cbc8fe
Author: Rickard Andersson <rickard.andersson@stericsson.com>
Date: Thu Dec 27 14:55:38 2012 +0000

cpufreq: handle SW coordinated CPUs

This causes additional unnecessary interrupts on all CPUs when the load has
recently been evaluated by any other CPU, i.e. when the load was recently
evaluated by CPU x, we don't really need any other CPU to evaluate it again
for the next sampling_rate period.

Some code is present to avoid that, but we are still getting timer
interrupts for all CPUs. A good way of avoiding this is to modify the delays
for all CPUs (policy->cpus) whenever any CPU has evaluated the load.

This patch does this change and some related code cleanup.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 9d445920 26-Feb-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: ondemand: Don't update sample_type if we don't evaluate load again

Because we have a per-CPU timer now, we check whether we need to evaluate the
load again (in case it was evaluated recently). Here the second CPU that got the
timer interrupt updates core_dbs_info->sample_type irrespective of whether load
evaluation is required, which is wrong, as the first CPU depends on this
variable being set to the older value.

Moreover, it would be best in this case to schedule the second CPU's timer to
sampling_rate instead of freq_lo or freq_hi, as those must be managed by the
other CPU. Even if the other CPU idles in between, we wouldn't lose much power.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 4d5dcc42 27-Mar-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governor: Implement per policy instances of governors

Currently, there can't be multiple instances of a single governor_type.
If we have a multi-package system, where we have multiple instances
of struct policy (per package), we can't have multiple instances of the
same governor, i.e. we can't have multiple instances of the ondemand
governor for multiple packages.

The governors directory in sysfs is created at /sys/devices/system/cpu/cpufreq/
governor-name/, which again reflects that there can be only one
instance of a governor_type in the system.

This is a bottleneck for multicluster systems, where we want different
packages to use the same governor type, but with different tunables.

This patch uses the infrastructure provided by earlier patch and
implements init/exit routines for ondemand and conservative
governors.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 06eb09d1 08-Feb-2013 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: ondemand: Fix typos in comments

Fix some typos in comments.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 4bd4e428 06-Feb-2013 Stratos Karafotis <stratosk@semaphore.gr>

cpufreq: ondemand: Replace down_differential tuner with adj_up_threshold

In order to avoid the calculation of up_threshold - down_differential
every time that the frequency must be decreased, we replace the
down_differential tuner with the adj_up_threshold which keeps the
difference across multiple checks.

Update the adj_up_threshold only when the up_threshold is also updated.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
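
A hedged sketch of the change (field names approximate): the adjusted
threshold is recomputed once, when up_threshold is written, instead of
being derived on every sample.

/* sysfs store of up_threshold: */
tuners.up_threshold = input;
tuners.adj_up_threshold = input - tuners.down_differential;

/* sampling path, frequency-decrease check: */
if (load_freq < tuners.adj_up_threshold * policy->cur)
    /* ... scale the frequency down proportionally ... */;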


# 4447266b 31-Jan-2013 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governors: Remove code redundancy between governors

With the inclusion of following patches:

9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary

code redundancy between the conservative and ondemand governors is
introduced again, so get rid of it.

[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 09dca5ae 31-Jan-2013 Fabio Baltieri <fabio.baltieri@linaro.org>

cpufreq: governors: fix misuse of cdbs.cpu

Fix governors code to set all cpu's cdbs->cpu to the actual cpu id
and use cur_policy->cpu instead of cdbs->cpu to track the current governor's
leader cpu.

Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 2624f90c 31-Jan-2013 Fabio Baltieri <fabio.baltieri@linaro.org>

cpufreq: governors: implement generic policy_is_shared

Implement a generic helper function policy_is_shared() to replace the
current dbs_sw_coordinated_cpus() at cpufreq level, so that it can be
used by code other than cpufreq governors.

Suggested-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 8ee2ec51 27-Dec-2012 Fabio Baltieri <fabio.baltieri@linaro.org>

cpufreq: ondemand: use all CPUs in update_sampling_rate

Modify update_sampling_rate() to check, and if necessary immediately
schedule, all CPUs' do_dbs_timer delayed work.

This is required in case of software coordinated CPUs, as we now have a
separate delayed work for each CPU.

Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# da53d61e 27-Dec-2012 Fabio Baltieri <fabio.baltieri@linaro.org>

cpufreq: ondemand: call dbs_check_cpu only when necessary

Modify ondemand timer to not resample CPU utilization if recently
sampled from another SW coordinated core.

Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 2abfa876 27-Dec-2012 Rickard Andersson <rickard.andersson@stericsson.com>

cpufreq: handle SW coordinated CPUs

This patch fixes a bug that occurred when we had load on a secondary CPU
and the primary CPU was sleeping. Only one sampling timer was spawned
and it was spawned as a deferred timer on the primary CPU, so when a
secondary CPU had a change in load this was not detected by the cpufreq
governor (both ondemand and conservative).

This patch makes sure that deferred timers are run on all CPUs in the
case of software controlled CPUs that run at the same frequency.

Signed-off-by: Rickard Andersson <rickard.andersson@stericsson.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 3e33ee9e 26-Nov-2012 Fabio Baltieri <fabio.baltieri@linaro.org>

cpufreq: ondemand: update sampling rate only on right CPUs

Fix cpufreq_gov_ondemand to skip CPUs where another governor is used.

The bug presents itself as a NULL pointer access on the mutex_lock() call,
and can be reproduced on an SMP machine by setting the default governor
to anything other than ondemand, setting a single CPU's governor to
ondemand, then changing the sample rate by writing to:

> /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate

Backtrace:

Nov 26 17:36:54 balto kernel: [ 839.585241] BUG: unable to handle kernel NULL pointer dereference at (null)
Nov 26 17:36:54 balto kernel: [ 839.585311] IP: [<ffffffff8174e082>] __mutex_lock_slowpath+0xb2/0x170
[snip]
Nov 26 17:36:54 balto kernel: [ 839.587005] Call Trace:
Nov 26 17:36:54 balto kernel: [ 839.587030] [<ffffffff8174da82>] mutex_lock+0x22/0x40
Nov 26 17:36:54 balto kernel: [ 839.587067] [<ffffffff81610b8f>] store_sampling_rate+0xbf/0x150
Nov 26 17:36:54 balto kernel: [ 839.587110] [<ffffffff81031e9c>] ? __do_page_fault+0x1cc/0x4c0
Nov 26 17:36:54 balto kernel: [ 839.587153] [<ffffffff813309bf>] kobj_attr_store+0xf/0x20
Nov 26 17:36:54 balto kernel: [ 839.587192] [<ffffffff811bb62d>] sysfs_write_file+0xcd/0x140
Nov 26 17:36:54 balto kernel: [ 839.587234] [<ffffffff8114c12c>] vfs_write+0xac/0x180
Nov 26 17:36:54 balto kernel: [ 839.587271] [<ffffffff8114c472>] sys_write+0x52/0xa0
Nov 26 17:36:54 balto kernel: [ 839.587306] [<ffffffff810321ce>] ? do_page_fault+0xe/0x10
Nov 26 17:36:54 balto kernel: [ 839.587345] [<ffffffff81751202>] system_call_fastpath+0x16/0x1b

Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# d3c31a77 23-Nov-2012 Fabio Baltieri <fabio.baltieri@linaro.org>

cpufreq: ondemand: fix wrong delay sampling rate

Restore the correct delay value for ondemand's od_dbs_timer, as it was
changed erroneously in commit 83f0e55 (cpufreq: governors: remove
redundant code).

Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 4471a34f 25-Oct-2012 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: governors: remove redundant code

The ondemand governor was written first, and the conservative governor was then
written using its code. It reused a lot of code from the ondemand governor, but
as a copy rather than routines shared by both governors, which increased code
redundancy that is difficult to manage.

This patch is an attempt to move the common part of both governors to the
cpufreq_governor.c file to overcome the above mentioned issues.

This shouldn't change anything from functionality point of view.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# 2aacdfff 22-Oct-2012 Viresh Kumar <viresh.kumar@linaro.org>

cpufreq: Move common part from governors to separate file, v2

Multiple cpufreq governors have defined similar get_cpu_idle_time_***()
routines. These routines must be moved to some common place, so that all
governors can use them.

So move them to cpufreq_governor.c, which seems to be a better place for
keeping these routines.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


# f2636517 14-Sep-2012 Michal Pecio <mpecio@nvidia.com>

cpufreq / ondemand: update frequency when limits are relaxed

Reevaluate CPU load and update frequency immediately whenever limits
are changed. Currently ondemand doesn't do that when limits are
relaxed, wasting power on systems with relatively low sampling rate.

Signed-off-by: Michal Pecio <mpecio@nvidia.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>


# 203b42f7 21-Aug-2012 Tejun Heo <tj@kernel.org>

workqueue: make deferrable delayed_work initializer names consistent

Initializers for deferrable delayed_work are confusingly named.

* __DEFERRED_WORK_INITIALIZER()
* DECLARE_DEFERRED_WORK()
* INIT_DELAYED_WORK_DEFERRABLE()

Rename them to

* __DEFERRABLE_WORK_INITIALIZER()
* DECLARE_DEFERRABLE_WORK()
* INIT_DEFERRABLE_WORK()

This patch doesn't cause any functional changes.

Signed-off-by: Tejun Heo <tj@kernel.org>


# fd0ef7a0 29-Feb-2012 MyungJoo Ham <myungjoo.ham@samsung.com>

[CPUFREQ] CPUfreq ondemand: update sampling rate without waiting for next sampling

When a new sampling rate is shorter than the current one (e.g., 1 sec
--> 10 ms), regardless of how short the new one is, the current ondemand
mechanism waits for the previously set timer to expire.

For example, if the user has just expressed that the sampling rate
should be 10 ms from now and the previous one was 1000 ms, the new rate may
become effective 999 ms later, which may not be acceptable for the
user if the user intended to speed up sampling because the system is
expected to react to CPU load fluctuation quickly from __now__.

In order to address this issue, we need to cancel the previously set
timer (schedule_delayed_work) and reset the timer if resetting it is
expected to trigger the delayed_work earlier.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Jones <davej@redhat.com>
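
A hedged sketch of the mechanism (field names approximate): if the new
rate would fire before the already-armed timer, cancel the queued delayed
work and re-arm it with the shorter delay.

unsigned long next_sampling = jiffies + usecs_to_jiffies(new_rate);
unsigned long appointed_at = dbs_info->work.timer.expires;

if (time_before(next_sampling, appointed_at)) {
    cancel_delayed_work_sync(&dbs_info->work);
    schedule_delayed_work_on(cpu, &dbs_info->work,
                             usecs_to_jiffies(new_rate));
}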


# 64861634 15-Dec-2011 Martin Schwidefsky <schwidefsky@de.ibm.com>

[S390] cputime: add sparse checking and cleanup

Make cputime_t and cputime64_t nocast to enable sparse checking to
detect incorrect use of cputime. Drop the cputime macros for simple
scalar operations. The conversion macros are still needed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>


# 21f2e3c8 09-Dec-2011 Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>

[CPUFREQ] Remove wall variable from cpufreq_gov_dbs_init()

Remove wall variable from cpufreq_gov_dbs_init() as
get_cpu_idle_time_us() no longer updates the last_update_time
unconditionally. Passing non-NULL last_update_time address
will result in accounting additional idle time with
update_ts_time_stats() before returning idle_sleeptime.

Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Dave Jones <davej@redhat.com>
--
drivers/cpufreq/cpufreq_ondemand.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)


# 3292beb3 28-Nov-2011 Glauber Costa <glommer@parallels.com>

sched/accounting: Change cpustat fields to an array

This patch changes fields in cpustat from a structure to a
u64 array. Math gets easier, and the code is more flexible.

Signed-off-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Tuner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1322498719-2255-2-git-send-email-glommer@parallels.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 6beea0cd 24-Aug-2011 Michal Hocko <mhocko@suse.cz>

nohz: Fix update_ts_time_stat idle accounting

update_ts_time_stat currently updates idle time even if we are in
iowait loop at the moment. The only real users of the idle counter
(via get_cpu_idle_time_us) are CPU governors and they expect to get
cumulative time for both idle and iowait times.
The value (idle_sleeptime) is also printed to userspace by print_cpu
but it prints both idle and iowait times so the idle part is misleading.

Let's clean this up and fix update_ts_time_stat to account both counters
properly and update consumers of idle to consider iowait time as well.
If we do this we might use get_cpu_{idle,iowait}_time_us from other
contexts as well and we will get expected values.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Jones <davej@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Link: http://lkml.kernel.org/r/e9c909c221a8da402c4da07e4cd968c3218f8eb1.1314172057.git.mhocko@suse.cz
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# bd74b32b 06-Aug-2011 Paul Bolle <pebolle@tiscali.nl>

Fix documentation and comment typo 'no_hz'

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Len Brown <len.brown@intel.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>


# 326c86de 03-Mar-2011 Thomas Renninger <trenn@suse.de>

[CPUFREQ] Remove unneeded locks

There cannot be any concurrent access to these through
different cpu sysfs files anymore, because these tunables
are now all global (not per cpu).

I still have some doubts whether some of these locks
were needed at all. Anyway, let's get rid of them.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org


# e8951251 03-Mar-2011 Thomas Renninger <trenn@suse.de>

[CPUFREQ] Remove old, deprecated per cpu ondemand/conservative sysfs files

Marked deprecated for quite a while now...

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org


# ef598549 03-Mar-2011 Thomas Renninger <trenn@suse.de>

[CPUFREQ] Remove deprecated sysfs file sampling_rate_max

Marked deprecated for quite a while now...

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org


# 5cb2c3bd 07-Feb-2011 Vincent Guittot <vincent.guittot@linaro.org>

[CPUFREQ] calculate delay after dbs_check_cpu

Calculate the ondemand delay after the dbs_check_cpu call because it can
modify the rate_mult value.

Use the freq_lo_jiffies value for the sub-sample period of powersave_bias mode.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# 57df5573 25-Jan-2011 Tejun Heo <tj@kernel.org>

cpufreq: use system_wq instead of dedicated workqueues

With cmwq, there's no reason for cpufreq drivers to use separate
workqueues. Remove the dedicated workqueues from cpufreq_conservative
and cpufreq_ondemand and use system_wq instead. The work items are
already sync canceled on stop, so it's already guaranteed that no work
is running on module exit.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Dave Jones <davej@redhat.com>
Cc: cpufreq@vger.kernel.org


# 3f78a9f7 06-Oct-2010 David C Niemi <dniemi@verisign.com>

[CPUFREQ] add sampling_down_factor tunable to improve ondemand performance

Adds a new global tunable, sampling_down_factor. Set to 1 it makes no
changes from existing behavior, but set to greater than 1 (e.g. 100)
it acts as a multiplier for the scheduling interval for reevaluating
load when the CPU is at its top speed due to high load. This improves
performance by reducing the overhead of load evaluation and helping
the CPU stay at its top speed when truly busy, rather than shifting
back and forth in speed. This tunable has no effect on behavior at
lower speeds/lower CPU loads.

This patch is against 2.6.36-rc6.

This patch should help solve kernel bug 19672 "ondemand is slow".

Signed-off-by: David Niemi <dniemi@verisign.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
CC: Daniel Hollocher <danielhollocher@gmail.com>
CC: <cpufreq-list@vger.kernel.org>
CC: <linux-kernel@vger.kernel.org>
Signed-off-by: Dave Jones <davej@redhat.com>
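
A hedged sketch of how the multiplier acts (names approximate): while the
CPU is pushed to its maximum frequency by high load, the sampling interval
is stretched by sampling_down_factor; it drops back to 1 as soon as the
load falls and the frequency is lowered.

if (load_freq > up_threshold * policy->cur) {
    /* Heading to top speed: re-evaluate less often while we stay there. */
    if (policy->cur < policy->max)
        dbs_info->rate_mult = tuners->sampling_down_factor;
    __cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
} else {
    /* No longer fully busy: back to the normal sampling interval. */
    dbs_info->rate_mult = 1;
    /* ... proportional scale-down happens here ... */
}

delay = usecs_to_jiffies(tuners->sampling_rate * dbs_info->rate_mult);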


# a665df9d 11-Mar-2010 Jocelyn Falempe <jocelyn.falempe@motorola.com>

[CPUFREQ] ondemand: don't synchronize sample rate unless multiple cpus present

For UP systems this is not required, and results in a more consistent
sample interval.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jocelyn Falempe <jocelyn.falempe@motorola.com>
Signed-off-by: Mike Chan <mike@android.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# 00e299ff 26-Jan-2010 Mike Chan <mike@android.com>

[CPUFREQ] ondemand: Refactor frequency increase code

Make it simpler to read and call.

*** v3 - Always call when powersave_bias is enabled.

Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Mike Chan <mike@android.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 19379b11 09-May-2010 Arjan van de Ven <arjan@linux.intel.com>

ondemand: Make the iowait-is-busy time a sysfs tunable

Pavel Machek pointed out that not all CPUs have an efficient
idle at high frequency. Specifically, older Intel and various
AMD CPUs would get higher power usage when copying files from
USB.

Mike Chan pointed out that the same is true for various ARM
chips as well.

Thomas Renninger suggested to make this a sysfs tunable with a
reasonable default.

This patch adds a sysfs tunable for the new behavior, and uses
a very simple function to determine a reasonable default,
depending on the CPU vendor/type.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: davej@redhat.com
LKML-Reference: <20100509082651.46914d04@infradead.org>
[ minor tidyup ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 6b8fcd90 09-May-2010 Arjan van de Ven <arjan@linux.intel.com>

ondemand: Solve a big performance issue by counting IOWAIT time as busy

The ondemand cpufreq governor uses CPU busy time (e.g. not-idle
time) as a measure for scaling the CPU frequency up or down.
If the CPU is busy, the CPU frequency scales up, if it's idle,
the CPU frequency scales down. Effectively, it uses the CPU busy
time as proxy variable for the more nebulous "how critical is
performance right now" question.

This algorithm falls flat on its face in the light of workloads
where you're alternately disk and CPU bound, such as the ever
popular "git grep", but also things like startup of programs and
maildir-using email clients... much to the chagrin of Andrew
Morton.

This patch changes the ondemand algorithm to count iowait time
as busy, not idle, time. As shown in the breakdown cases above,
iowait is performance critical often, and by counting iowait,
the proxy variable becomes a more accurate representation of the
"how critical is performance" question.

The problem and fix are both verified with the "perf timechart"
tool.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100509082606.3d9f00d0@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
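
A hedged sketch of the accounting change (names approximate): time spent
in iowait is subtracted from the measured idle time, so an I/O-bound
workload now looks busy to the governor and keeps the frequency up.

u64 wall, idle, iowait;

idle   = get_cpu_idle_time_us(cpu, &wall);
iowait = get_cpu_iowait_time_us(cpu, &wall);

if (idle >= iowait)
    idle -= iowait;    /* count iowait as busy, not idle */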


# 6dad2a29 31-Mar-2010 Borislav Petkov <borislav.petkov@amd.com>

cpufreq: Unify sysfs attribute definition macros

Multiple modules used to define those which are with identical
functionality and were needlessly replicated among the different cpufreq
drivers. Push them into the header and remove duplication.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-7-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>


# 1dbf5888 21-Dec-2009 Nagananda.Chumbalkar@hp.com <Nagananda.Chumbalkar@hp.com>

[CPUFREQ] Fix ondemand to not request targets outside policy limits

Dominik said:
target_freq cannot be below policy->min or above policy->max.
If it were, the whole cpufreq subsystem is broken.

But (answer):
I think the "ondemand" governor can ask for a target frequency that is
below policy->min.
...
A patch such as below may be needed to sanitize the target frequency
requested by "ondemand". The "conservative" governor already has this check:

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>

# diff -bur x/drivers/cpufreq/cpufreq_ondemand.c.orig y/drivers/cpufreq/cpufreq_ondemand.c


# 54c9a35d 11-Nov-2009 Pallipadi, Venkatesh <venkatesh.pallipadi@intel.com>

[CPUFREQ] Resolve time unit thinko in ondemand/conservative govs

ondemand and conservative governors are messing up time units in the
code path where NO_HZ is not enabled and ignore_nice is set. The wall time
and idle time stored are in jiffies while the nice time calculation happens
in microseconds.

The problem was reported and diagnosed by Alexander here.
http://marc.info/?l=linux-kernel&m=125752550404513&w=2

The patch below fixes this thinko.

Reported-by: Alexander Miller <Miller@fmi.uni-stuttgart.de>
Tested-by: Alexander Miller <Miller@fmi.uni-stuttgart.de>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 0e625ac1 24-Jul-2009 Thomas Renninger <trenn@suse.de>

[CPUFREQ] ondemand - Use global sysfs dir for tuning settings

Ondemand has only global variables for userspace tunings via sysfs.
But they were exposed per CPU which wrongly implies to the user that
his settings are applied per cpu. Also locking sysfs against concurrent
access won't be necessary anymore after deprecation time.

This means the ondemand config dir is moved:
/sys/devices/system/cpu/cpu*/cpufreq/ondemand ->
/sys/devices/system/cpu/cpufreq/ondemand

The old files will still exist, but reading or writing to them will
result in one (printk_once) deprecation msg to syslog per file.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>


# 5a75c828 02-Jul-2009 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ] Cleanup locking in ondemand governor

Redesign the locking inside ondemand driver. Make dbs_mutex handle all the
global state changes inside the driver and invent a new percpu mutex
to serialize percpu timer and frequency limit change.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 7d26e2d5 02-Jul-2009 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ] Eliminate the recent lockdep warnings in cpufreq

Commit b14893a62c73af0eca414cfed505b8c09efc613c although it was very
much needed to properly cleanup ondemand timer, opened-up a can of worms
related to locking dependencies in cpufreq.

Patch here defines the need for dbs_mutex and cleans up its usage in
ondemand governor. This also resolves the lockdep warnings reported here

http://lkml.indiana.edu/hypermail/linux/kernel/0906.1/01925.html
http://lkml.indiana.edu/hypermail/linux/kernel/0907.0/00820.html

and few others..

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 245b2e70 24-Jun-2009 Tejun Heo <tj@kernel.org>

percpu: clean up percpu variable definitions

Percpu variable definition is about to be updated such that all percpu
symbols including the static ones must be unique. Update percpu
variable definitions accordingly.

* as,cfq: rename ioc_count uniquely

* cpufreq: rename cpu_dbs_info uniquely

* xen: move nesting_count out of xen_evtchn_do_upcall() and rename it

* mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
rename it

* ipv4,6: rename cookie_scratch uniquely

* x86 perf_counter: rename prev_left to pmc_prev_left, irq_entry to
pmc_irq_entry and nmi_entry to pmc_nmi_entry

* perf_counter: rename disable_count to perf_disable_count

* ftrace: rename test_event_disable to ftrace_test_event_disable

* kmemleak: rename test_pointer to kmemleak_test_pointer

* mce: rename next_interval to mce_next_interval

[ Impact: percpu usage cleanups, no duplicate static percpu var names ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andi Kleen <andi@firstfloor.org>


# 4f4d1ad6 22-Apr-2009 Thomas Renninger <trenn@suse.de>

[CPUFREQ] Only set sampling_rate_max deprecated, sampling_rate_min is useful

Update the documentation accordingly.
Cleanup and use printk_once.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>


# cef9615a 22-Apr-2009 Thomas Renninger <trenn@suse.de>

[CPUFREQ] ondemand: Uncouple minimal sampling rate from HZ in NO_HZ case

With this patch you have following minimal sampling rate restrictions:

Kernel restrictions:
If CONFIG_NO_HZ is set, the limit is 10ms fixed.
If CONFIG_NO_HZ is not set or the nohz=off boot parameter is used, the
limits depend on the CONFIG_HZ option:
HZ=1000: min=20000us (20ms)
HZ=250: min=80000us (80ms)
HZ=100: min=200000us (200ms)

HW restrictions:
Do not sample/poll more often than HW latency * 100 exported by the low
level cpufreq HW driver

The higher value of above restrictions is the minimal sampling rate
that can be set (and can be seen via ondemand/sampling_rate_min sysfs file)

The default sampling rate is still HW latency * 1000, but this will now end
up at lower values on recent (Intel and AMD) hardware, as these can switch
really fast and the sampling rate was mostly limited to the 80ms or 200ms
minimum (depending on the CONFIG_HZ setting).

Signed-off-by: Thomas Renninger <trenn@suse.de>
Cc: Pallipadi Venkatesh <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
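
A small user-space sketch of the resulting arithmetic (constants
illustrative): the effective minimum is the larger of the kernel and
hardware restrictions, while the default stays at latency * 1000.

#include <stdio.h>

int main(void)
{
    unsigned int latency_us = 100;                  /* driver transition latency */
    unsigned int kernel_min_us = 10000;             /* 10 ms with CONFIG_NO_HZ */
    unsigned int hw_min_us = latency_us * 100;      /* HW restriction */

    unsigned int min_rate = kernel_min_us > hw_min_us ? kernel_min_us : hw_min_us;
    unsigned int def_rate = latency_us * 1000;

    if (def_rate < min_rate)
        def_rate = min_rate;

    printf("sampling_rate_min = %u us, default sampling_rate = %u us\n",
           min_rate, def_rate);
    return 0;
}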


# b14893a6 17-May-2009 Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>

[CPUFREQ] fix timer teardown in ondemand governor

* Rafael J. Wysocki (rjw@sisk.pl) wrote:
> This message has been generated automatically as a part of a report
> of regressions introduced between 2.6.28 and 2.6.29.
>
> The following bug entry is on the current list of known regressions
> introduced between 2.6.28 and 2.6.29. Please verify if it still should
> be listed and let me know (either way).
>
>
> Bug-Entry : http://bugzilla.kernel.org/show_bug.cgi?id=13186
> Subject : cpufreq timer teardown problem
> Submitter : Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
> Date : 2009-04-23 14:00 (24 days old)
> References : http://marc.info/?l=linux-kernel&m=124049523515036&w=4
> Handled-By : Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
> Patch : http://patchwork.kernel.org/patch/19754/
> http://patchwork.kernel.org/patch/19753/
>

(updated changelog)

cpufreq fix timer teardown in ondemand governor

The problem is that dbs_timer_exit() uses cancel_delayed_work() when it should
use cancel_delayed_work_sync(). cancel_delayed_work() does not wait for the
workqueue handler to exit.

The ondemand governor does not seem to be affected because the
"if (!dbs_info->enable)" check at the beginning of the workqueue handler returns
immediately without rescheduling the work. The conservative governor in
2.6.30-rc has the same check as the ondemand governor, which makes things
usually run smoothly. However, if the governor is quickly stopped and then
started, this could lead to the following race :

dbs_enable could be reenabled and multiple do_dbs_timer handlers would run.
This is why a synchronized teardown is required.

The following patch applies to, at least, 2.6.28.x, 2.6.29.1, 2.6.30-rc2.

Depends on patch
cpufreq: remove rwsem lock from CPUFREQ_GOV_STOP call

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: gregkh@suse.de
CC: stable@kernel.org
CC: cpufreq@vger.kernel.org
CC: Ingo Molnar <mingo@elte.hu>
CC: rjw@sisk.pl
CC: Ben Slusky <sluskyb@paranoiacs.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# 112124ab 04-Feb-2009 Thomas Renninger <trenn@suse.de>

[CPUFREQ] ondemand/conservative: sanitize sampling_rate restrictions

Limit the sampling rate to transition_latency * 100 or the kernel limits.
If an attempt is made to set sampling_rate too low, set it to the lowest allowed value.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>


# 9411b4ef 04-Feb-2009 Thomas Renninger <trenn@suse.de>

[CPUFREQ] ondemand/conservative: deprecate sampling_rate{min,max}

The same info can be obtained via the transition_latency sysfs file

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>


# 2b03f891 17-Jan-2009 Dave Jones <davej@redhat.com>

[CPUFREQ] checkpatch cleanups for ondemand governor.

Signed-off-by: Dave Jones <davej@redhat.com>


# 1ca3abdb 23-Jan-2009 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] Make ignore_nice_load setting of ondemand work as expected.

The ondemand idle time micro-accounting changes broke the ignore_nice_load
sysfs setting due to a thinko in the code.

The bug entry:
http://bugzilla.kernel.org/show_bug.cgi?id=12310

Reported-by: Jim Bray <jimsantelmo@gmail.com>

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 835481d9 04-Jan-2009 Rusty Russell <rusty@rustcorp.com.au>

cpumask: convert struct cpufreq_policy to cpumask_var_t

Impact: use new cpumask API to reduce memory usage

This is part of an effort to reduce structure sizes for machines
configured with large NR_CPUS. cpumask_t gets replaced by
cpumask_var_t, which is either struct cpumask[1] (small NR_CPUS) or
struct cpumask * (large NR_CPUS).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Acked-by: Dave Jones <davej@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 4f6e6b9f 18-Sep-2008 Andrea Righi <righi.andrea@gmail.com>

[CPUFREQ] Fix BUG: using smp_processor_id() in preemptible code

Use get_cpu()/put_cpu() in cpufreq_ondemand init routine, instead of
smp_processor_id() to avoid the following BUG:

[ 35.313118] BUG: using smp_processor_id() in preemptible [00000000] code=: modprobe/4952
[ 35.313132] caller is cpufreq_gov_dbs_init+0xa/0x8f [cpufreq_ondemand]
[ 35.313140] Pid: 4952, comm: modprobe Not tainted 2.6.27-rc5-mm1 #23
[ 35.313145] Call Trace:
[ 35.313158] [<ffffffff80361ff7>] debug_smp_processor_id+0xd7/0xe0
[ 35.313167] [<ffffffffa010800a>] cpufreq_gov_dbs_init+0xa/0x8f [cpufreq_ondemand]
[ 35.313176] [<ffffffff8020903b>] _stext+0x3b/0x160
[ 35.313185] [<ffffffff804768c5>] __mutex_unlock_slowpath+0xe5/0x190
[ 35.313195] [<ffffffff8026236a>] trace_hardirqs_on_caller+0xca/0x140
[ 35.313205] [<ffffffff8026ef4c>] sys_init_module+0xdc/0x210
[ 35.313212] [<ffffffff8020b7cb>] system_call_fastpath+0x16/0x1b

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# c4d14bc0 20-Sep-2008 Sven Wegener <sven.wegener@stealer.net>

[CPUFREQ] Don't export governors for default governor

We don't need to export the governors for use as the default governor,
because the default governor will be built-in anyway and we can access
the symbol directly.

This also fixes the following sparse warnings:

drivers/cpufreq/cpufreq_conservative.c:578:25: warning: symbol 'cpufreq_gov_conservative' was not declared. Should it be static?
drivers/cpufreq/cpufreq_ondemand.c:582:25: warning: symbol 'cpufreq_gov_ondemand' was not declared. Should it be static?
drivers/cpufreq/cpufreq_performance.c:39:25: warning: symbol 'cpufreq_gov_performance' was not declared. Should it be static?
drivers/cpufreq/cpufreq_powersave.c:38:25: warning: symbol 'cpufreq_gov_powersave' was not declared. Should it be static?
drivers/cpufreq/cpufreq_userspace.c:190:25: warning: symbol 'cpufreq_gov_userspace' was not declared. Should it be static?

Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Signed-off-by: Dave Jones <davej@redhat.com>


# 80800913 04-Aug-2008 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ][6/6] cpufreq: Add idle microaccounting in ondemand governor

Use get_cpu_idle_time_us() to get micro-accounted idle information.
This enables ondemand to get more accurate idle and busy timings
than the jiffy based calculation. As a result, we can decrease
the ondemand safety guard band from 80-10 to 95-3.

Results in more aggressive power savings.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# e9d95bf7 04-Aug-2008 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ][4/6] cpufreq_ondemand: Parameterize down differential

Use a parameter for down differential, instead of hardcoded 10%. Follow-on
patch changes the down-differential dynamically, based on whether
we are using idle micro-accounting or not.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 3430502d 04-Aug-2008 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ][3/6] cpufreq: get_cpu_idle_time() changes in ondemand for idle-microaccounting

Preparatory changes for doing idle micro-accounting in ondemand governor.
get_cpu_idle_time() gets extra parameter and returns idle time and also the
wall time that corresponds to the idle time measurement.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# c43aa3bd 04-Aug-2008 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ][2/6] cpufreq: Change load calculation in ondemand for software coordination

Change the load calculation algorithm in ondemand to work well with software
coordination of frequency across the dependent cpus.

Multiply each CPU's utilization by the average frequency of that logical CPU
during the measurement interval (using the getavg call), and find the maximum
CPU utilization number in terms of CPU frequency. That number is then used to
derive the target frequency for the next sampling interval.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
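
A hedged sketch of the reworked per-policy load (names approximate): each
CPU's percentage load is weighted by the average frequency it actually ran
at over the interval, and the maximum across policy->cpus drives the next
target.

unsigned int max_load_freq = 0, j;

for_each_cpu(j, policy->cpus) {
    /* wall/idle are this CPU's time deltas for the interval */
    unsigned int load = 100 * (wall - idle) / wall;
    unsigned int freq_avg = __cpufreq_driver_getavg(policy, j);

    if (load * freq_avg > max_load_freq)
        max_load_freq = load * freq_avg;
}
/* max_load_freq is then compared against up_threshold * policy->cur */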


# bf0b90e3 04-Aug-2008 venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>

[CPUFREQ][1/6] cpufreq: Add cpu number parameter to __cpufreq_driver_getavg()

Add a cpu parameter to __cpufreq_driver_getavg(). This is needed for software
cpufreq coordination where policy->cpu may not be the same as the CPU for which
we want the average frequency.

A follow-on patch will use this parameter to getavg freq from all cpus
in policy->cpus.

Change since the last patch: fix the offline/online and suspend/resume
oops reported by Youquan Song <youquan.song@intel.com>.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 888a794c 13-Jul-2008 Akinobu Mita <akinobu.mita@gmail.com>

[CPUFREQ] add error handling for cpufreq_register_governor() error

Add error handling for cpufreq_register_governor() error

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: cpufreq@lists.linux.org.uk
Signed-off-by: Dave Jones <davej@redhat.com>


# 068b1277 12-May-2008 Mike Travis <travis@sgi.com>

cpufreq: use performance variant for_each_cpu_mask_nr

Change references from for_each_cpu_mask to for_each_cpu_mask_nr
where appropriate

Reviewed-by: Paul Jackson <pj@sgi.com>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 6915719b 17-Jan-2008 Johannes Weiner <hannes@saeurebad.de>

cpufreq: Initialise default governor before use

When the cpufreq driver starts up at boot time, it calls into the default
governor which might not be initialised yet. This hurts when the
governor's worker function relies on memory that is not yet set up by its
init function.

This migrates all governors from module_init() to fs_initcall() when they are
the default, as was already done in cpufreq_performance when it was the
only possible choice. The performance governor is always initialized early
because it might be used as fallback even when not being the default.

Fixes at least one actual oops where ondemand is the default governor and
cpufreq_governor_dbs() uses the uninitialised kondemand_wq work-queue
during boot-time.

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 1c256245 02-Oct-2007 Thomas Renninger <trenn@suse.de>

[CPUFREQ] allow ondemand and conservative cpufreq governors to be used as default

Depending on the transition latency of the HW for cpufreq switches, the
ondemand or conservative governor cannot be used with certain cpufreq
drivers. Still the ondemand should be the default governor on a wide range
of systems. This patch allows this and lets the governor fallback to the
performance governor at cpufreq driver load time, if the driver does not
support fast enough frequency switching.

Main benefit is that on e.g. installation or other systems without
userspace support a working dynamic cpufreq support can be achieved on most
systems by simply loading the cpufreq driver. This is especially essential
for recent x86(_64) laptop hardware which may rely on working dynamic
cpufreq OS support.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Bryan Wu <bryan.wu@analog.com>
Cc: Andi Kleen <ak@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# ea487615 20-Jun-2007 Venki Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] ondemand: fix tickless accounting and software coordination bug

With a tickless kernel and software coordination of P-states, ondemand
can look at wrong idle statistics. This can happen when ondemand sampling
is happening on CPU 0 and, due to software coordination, sampling also looks at
the utilization of CPU 1. If CPU 1 is in a tickless state at that moment, its idle
statistics will not be up to date and CPU 0 thinks CPU 1 has been idle for less
time than it actually has.

This can be resolved by looking at all the busy times of CPUs, which is
accurate, even with tickless, and use that to determine idle time in a
round about way (total time - busy time).

Thanks to Arjan for originally reporting the ondemand bug on
Lenovo T61.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 0af99b13 20-Jun-2007 Venki Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] ondemand: add a check to avoid negative load calculation

Due to rounding and inexact jiffy accounting, idle_ticks can sometimes
be higher than total_ticks. Make sure those cases are handled as a
zero-load case.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 28287033 08-May-2007 Venki Pallipadi <venkatesh.pallipadi@intel.com>

Add a new deferrable delayed work init

Add a new deferrable delayed work init. This can be used to schedule work
that is 'unimportant' when the CPU is idle and can be called later, when the
CPU eventually comes out of idle.

Use this init in cpufreq ondemand governor.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 48ac3271 18-Feb-2007 Oleg Nesterov <oleg@tv-sign.ru>

[CPUFREQ] cpufreq_ondemand.c: don't use _WORK_NAR

Looks like dbs_timer() is very careful wrt per_cpu(cpu_dbs_info),
and it doesn't need the help of WORK_STRUCT_NOAUTOREL.

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Acked-By: David Howells <dhowells@redhat.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# c18a1483 10-Feb-2007 Dave Jones <davej@redhat.com>

[CPUFREQ] Whitespace fixup

Signed-off-by: Dave Jones <davej@redhat.com>


# 56463b78 05-Feb-2007 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] ondemand governor use new cpufreq rwsem locking in work callback

Eliminate flush_workqueue in cpufreq_governor(STOP) callpath. Using flush
there has a deadlock potential as in

http://uwsg.iu.edu/hypermail/linux/kernel/0611.3/1223.html

Also, clean up the locking issues with the do_dbs_timer delayed_work callback. As
it changes the CPU frequency using __cpufreq_target, it needs to have
policy_rwsem in write mode, which also protects it from hot plug.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# 529af7a1 05-Feb-2007 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] ondemand governor restructure the work callback

Restructure the delayed_work callback in ondemand.

This eliminates the need for smp_processor_id in the callback function and
also helps in proper locking and avoiding flush_workqueue when stopping the
governor (done in subsequent patch).

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# c1200697 05-Feb-2007 Dave Jones <davej@redhat.com>

[CPUFREQ] Remove hotplug cpu crap

The hotplug CPU locking in cpufreq is horrendous. No-one seems to care
enough to fix it, so just remove it so that the 99.9% of the real world
users of this code can use cpufreq without being bothered by warnings.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# c4028958 22-Nov-2006 David Howells <dhowells@redhat.com>

WorkStruct: make allyesconfig

Fix up for make allyesconfig.

Signed-Off-By: David Howells <dhowells@redhat.com>


# e08f5f5b 26-Oct-2006 Gautham R Shenoy <ego@in.ibm.com>

[CPUFREQ] Fix coding style issues in cpufreq.

Clean up cpufreq subsystem to fix coding style issues and to improve
the readability.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 914f7c31 20-Oct-2006 Jeff Garzik <jeff@garzik.org>

[CPUFREQ] handle sysfs errors

Signed-off-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# dfde5d62 03-Oct-2006 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ][8/8] acpi-cpufreq: Add support for freq feedback from hardware

Enable ondemand governor and acpi-cpufreq to use IA32_APERF and IA32_MPERF MSR
to get active frequency feedback for the last sampling interval. This will
make ondemand take right frequency decisions when hardware coordination of
frequency is going on.

Without APERF/MPERF, ondemand can take a wrong decision at times due
to underlying hardware coordination or TM2.
Example:
* CPU 0 and CPU 1 are hardware coordinated.
* CPU 1 running at highest frequency.
* CPU 0 was running at highest freq. Now ondemand reduces it to
some intermediate frequency based on utilization.
* Due to underlying hardware coordination with other CPU 1, CPU 0 continues to
run at highest frequency (as long as other CPU is at highest).
* When ondemand samples CPU 0 again next time, without actual frequency
feedback from APERF/MPERF, it will think that previous frequency change
was successful and can go to a wrong target frequency. This is because it
thinks that the utilization it got this sampling interval was when running at
the intermediate frequency, rather than the actual highest frequency.

More information about IA32_APERF IA32_MPERF MSR:
Refer to IA-32 Intel® Architecture Software Developer's Manual at
http://developer.intel.com

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 3906f4ed 05-Sep-2006 Dave Jones <davej@redhat.com>

[CPUFREQ] Fix sparse warning in ondemand

drivers/cpufreq/cpufreq_ondemand.c:323:2: warning: Using plain integer as NULL pointer

Signed-off-by: Dave Jones <davej@redhat.com>


# b5ecf60f 13-Aug-2006 Adrian Bunk <bunk@stusta.de>

[CPUFREQ] make drivers/cpufreq/cpufreq_ondemand.c:powersave_bias_target() static

This patch makes the needlessly global powersave_bias_target() static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Dave Jones <davej@redhat.com>


# 05ca0350 31-Jul-2006 Alexey Starikovskiy <alexey_y_starikovskiy@linux.intel.com>

[CPUFREQ][2/2] ondemand: updated add powersave_bias tunable

ondemand selects the minimum frequency that can retire
a workload with negligible idle time -- ideally resulting in the highest
performance/power efficiency with negligible performance impact.

But on some systems and some workloads, this algorithm
is more performance biased than necessary, and
de-tuning it a bit to allow some performance impact
can save measurable power.

This patch adds a "powersave_bias" tunable to ondemand
to allow it to reduce its target frequency by a specified percent.

By default, the powersave_bias is 0 and has no effect.
powersave_bias is in units of 0.1%, so it has an effective range
of 1 through 1000, resulting in 0.1% to 100% impact.

In practice, users will not be able to detect a difference between
0.1% increments, but 1.0% increments turned out to be too large.
Also, the max value of 1000 (100%) would simply peg the system
in its deepest power saving P-state, unless the processor really has
a hardware P-state at 0Hz:-)

For example, if ondemand requests 2.0GHz based on utilization,
and powersave_bias=100, this code will knock 10% off the target
and seek a target of 1.8GHz instead of 2.0GHz until the
next sampling. If 1.8 is an exact match with a hardware frequency
we use it, otherwise we average our time between the frequency
next higher than 1.8 and next lower than 1.8.

Note that a user or administrative program can change powersave_bias
at run-time depending on how they expect the system to be used.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi at intel.com>
Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy at intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
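
A small user-space sketch of the bias arithmetic described above (values
illustrative): powersave_bias tenths-of-a-percent are knocked off the
requested frequency, and ondemand then dithers between the two neighbouring
hardware P-states around the reduced target.

#include <stdio.h>

int main(void)
{
    unsigned int freq_req = 2000000;        /* kHz, what plain ondemand wants */
    unsigned int powersave_bias = 100;      /* 100 * 0.1% = 10% */

    unsigned int freq_reduc = freq_req * powersave_bias / 1000;
    unsigned int freq_target = freq_req - freq_reduc;

    /* If freq_target is not an exact table entry, the sampling interval is
     * split between the next lower and next higher hardware frequencies so
     * the average matches freq_target. */
    printf("requested %u kHz -> biased target %u kHz\n", freq_req, freq_target);
    return 0;
}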


# 1ce28d6b 31-Jul-2006 Alexey Starikovskiy <alexey_y_starikovskiy@linux.intel.com>

[CPUFREQ][1/2] ondemand: updated tune for hardware coordination

Try to make dbs_check_cpu() call on all CPUs at the same jiffy.
This will help when multiple cores share P-states via Hardware Coordination.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi at intel.com>
Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy at intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 153d7f3f 26-Jul-2006 Arjan van de Ven <arjan@linux.intel.com>

[PATCH] Reorganize the cpufreq cpu hotplug locking to not be totally bizare

The patch below moves the cpu hotplugging higher up in the cpufreq
layering; this is needed to avoid recursive taking of the cpu hotplug
lock and to otherwise detangle the mess.

The new rules are:
1. you must do lock_cpu_hotplug() around the following functions:
__cpufreq_driver_target
__cpufreq_governor (for CPUFREQ_GOV_LIMITS operation only)
__cpufreq_set_policy
2. governor methods (.governor) must NOT take the lock_cpu_hotplug()
lock in any way; they are called with the lock taken already
3. if your governor spawns a thread that does things, like calling
__cpufreq_driver_target, your thread must honor rule #1.
4. the policy lock and other cpufreq internal locks nest within
the lock_cpu_hotplug() lock.

I'm not entirely happy about how the __cpufreq_governor rule ended up
(conditional locking rule depending on the argument) but basically all
callers pass this as a constant so it's not too horrible.

The patch also removes the cpufreq_governor() function since during the
locking audit it turned out to be entirely unused (so no need to fix it)

The patch works on my testbox, but it could use more testing
(otoh... it can't be much worse than the current code)

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 2cd7cbdf 23-Jul-2006 Linus Torvalds <torvalds@macmini.osdl.org>

[cpufreq] ondemand: make shutdown sequence more robust

Shutting down the ondemand policy was fraught with potential
problems, causing issues for SMP suspend (which wants to
hot-unplug all but the last CPU).

This should fix at least the worst problems (divide-by-zero
and infinite wait for the workqueue to shut down).

Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# ffac80e9 28-Jun-2006 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] Misc cleanups in ondemand.

Misc cleanups in ondemand. Should have zero functional impact.
Also adding Alexey as author.

Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 2f8a835c 28-Jun-2006 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] Make ondemand sampling per CPU and remove the mutex usage in sampling path.

Make ondemand sampling per CPU and remove the mutex usage in sampling path.

Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# ccb2fe20 28-Jun-2006 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] Remove slowdown from ondemand sampling path.

Remove slowdown from the ondemand sampling path. This reduces the code path
length in dbs_check_cpu() by half. The slowdown tunable was not used by
ondemand by default. Any user-level tools that were using this tunable may
report an error now.

Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 138a0128 23-Jun-2006 Andrew Morton <akpm@osdl.org>

[PATCH] cpufreq build fix

drivers/cpufreq/cpufreq_ondemand.c: In function 'do_dbs_timer':
drivers/cpufreq/cpufreq_ondemand.c:374: warning: implicit declaration of function 'lock_cpu_hotplug'
drivers/cpufreq/cpufreq_ondemand.c:381: warning: implicit declaration of function 'unlock_cpu_hotplug'
drivers/cpufreq/cpufreq_conservative.c: In function 'do_dbs_timer':
drivers/cpufreq/cpufreq_conservative.c:425: warning: implicit declaration of function 'lock_cpu_hotplug'
drivers/cpufreq/cpufreq_conservative.c:432: warning: implicit declaration of function 'unlock_cpu_hotplug'

Cc: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 4ec223d0 21-Jun-2006 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

[CPUFREQ] Fix ondemand vs suspend deadlock

Root-caused the bug to a deadlock between cpufreq and ondemand, due to the
lack of a defined ordering between the cpu_hotplug lock and dbs_mutex.
Basically, it is a race condition between cpu_down() and do_dbs_timer().

cpu_down() flow:
* cpu_down() call for CPU 1
* Takes hot plug lock
* Calls pre down notifier
* cpufreq notifier handler calls cpufreq_driver_target() which takes
cpu_hotplug lock again. OK as cpu_hotplug lock is recursive in same
process context
* CPU 1 goes down
* Calls post down notifier
* cpufreq notifier handler calls ondemand event stop which takes dbs_mutex

So, cpu_hotplug lock is taken before dbs_mutex in this flow.

do_dbs_timer() is triggered by a periodic timer event.
It first takes dbs_mutex and then takes the cpu_hotplug lock in
cpufreq_driver_target().
Note the reverse order here compared to the flow above. So, if this timer
event happens at just the right moment during cpu_down(), the system will
deadlock.
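
Reduced to their lock ordering, the two paths look roughly like this
(an illustrative sketch, not the kernel code itself):

static void cpu_down_path(void)
{
	lock_cpu_hotplug();		/* taken by the hotplug core       */
	mutex_lock(&dbs_mutex);		/* taken in the governor STOP path */
	mutex_unlock(&dbs_mutex);
	unlock_cpu_hotplug();
}

static void dbs_timer_path(void)
{
	mutex_lock(&dbs_mutex);		/* taken at the top of the timer      */
	lock_cpu_hotplug();		/* taken in cpufreq_driver_target()   */
	unlock_cpu_hotplug();
	mutex_unlock(&dbs_mutex);
}

If both paths run concurrently, each can end up waiting for the lock the
other already holds: a classic AB-BA deadlock.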

Attached patch fixes the issue for both ondemand and conservative.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 6810b548 08-May-2006 Andi Kleen <ak@linux.intel.com>

[PATCH] x86_64: Move ondemand timer into own work queue

Taking the cpu hotplug semaphore in the normal events workqueue
is unsafe, because other tasks can wait on any workqueue to flush
while holding that semaphore. This results in a deadlock.

Move the DBS timer into its own work queue which is not
affected by other work queue flushes to avoid this.
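
A minimal sketch of the approach (the queue name "kondemand" matches the
governor's convention; the init function here is hypothetical):

static struct workqueue_struct *kondemand_wq;

static int __init kondemand_init(void)
{
	/* a private queue is never flushed by the shared events thread */
	kondemand_wq = create_workqueue("kondemand");
	if (!kondemand_wq)
		return -ENOMEM;
	return 0;
}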

Has been acked by Venkatesh.

Cc: venkatesh.pallipadi@intel.com
Cc: cpufreq@lists.linux.org.uk
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 7c9d8c0e 26-Mar-2006 Dominik Brodowski <linux@dominikbrodowski.net>

[PATCH] cpufreq_ondemand: add range check

Assert that cpufreq_target() is called with at least the minimum frequency
allowed by this policy, not something lower. Lower values triggered
problems on ARM.
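
A minimal sketch of the kind of check described above (the helper and its
placement in the governor are hypothetical):

static unsigned int clamp_to_policy_min(struct cpufreq_policy *policy,
					unsigned int target_freq)
{
	/* never ask the driver for less than the policy allows */
	if (target_freq < policy->min)
		target_freq = policy->min;

	return target_freq;
}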

Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>


# 9cbad61b 10-Mar-2006 Eric Piel <Eric.Piel@tremplin-utc.net>

[PATCH] cpufreq_ondemand: keep ignore_nice_load value when it is reselected

Keep the value of ignore_nice_load of the ondemand governor even after
the governor has been deselected and selected back. This is the behavior
of the other exported values of the ondemand governor and it's much more
user-friendly.

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>


# ff8c288d 10-Mar-2006 Eric Piel <Eric.Piel@tremplin-utc.net>

[PATCH] cpufreq_ondemand: Warn if it cannot run due to too long transition latency

Display a warning if the ondemand governor can not be selected due to a
transition latency of the cpufreq driver which is too long.

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>


# 32ee8c3e 27-Feb-2006 Dave Jones <davej@redhat.com>

[CPUFREQ] Lots of whitespace & CodingStyle cleanup.

Signed-off-by: Dave Jones <davej@redhat.com>


# 3fc54d37 13-Jan-2006 akpm@osdl.org <akpm@osdl.org>

[CPUFREQ] Convert drivers/cpufreq semaphores to mutexes.

Semaphore to mutex conversion.

The conversion was generated via scripts, and the result was validated
automatically via a script as well.

Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# 001893cd 01-Dec-2005 Alexander Clouter <alex@digriz.org.uk>

[PATCH] cpufreq_conservative/ondemand: invert meaning of 'ignore nice'

The use of the 'ignore_nice' sysfs file is confusing to anyone using it.
This removes the sysfs file 'ignore_nice' and in its place creates an
'ignore_nice_load' entry that defaults to '0', meaning nice'd processes
_are_ counted towards the 'busyness' calculation.

WARNING: this obviously breaks any userland tools that expected
'ignore_nice' to exist. To draw attention to this fact, it was concluded
on the mailing list that the entry should be removed altogether, so that
the userland app breaks and its author can build a simple-to-detect
workaround. Having said that, it seems very few tools currently make use
of this functionality; all I could find was a Gentoo Wiki entry.

Signed-off-by: Alexander Clouter <alex-kernel@digriz.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Dave Jones <davej@redhat.com>


# df8b59be 20-Sep-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] Avoid the ondemand cpufreq governor to use a too high frequency for stats.

The problem is in the ondemand governor: there is a periodic measurement
of the CPU usage. This CPU usage is updated by the scheduler after every
tick (basically, by adding 1 either to "idle" or to "user" or to
"system"). So if the sampling frequency of the governor is too high, the
stats will be meaningless (as mostly no numbers will have changed).

So this patch checks that the measurements are separated by at least 10
ticks. It means that by default, the stats will have about a 5% error (20
ticks). Of course those numbers can be argued about but, IMHO, they look
sane. The patch also includes a small clean-up to check more explicitly
whether the result of the conversion from ns to µs is zero.

Let's note that (on x86) this has never really been needed before 2.6.13
because HZ was always 1000. Now that HZ can be 100, some CPUs might be
affected by this problem. For instance when HZ=100, the Centrino, which
has a 10µs transition latency, would allow the governor to read stats
every tick (10ms)!
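
A minimal sketch of the constraint described above (the macro and helper
names are hypothetical):

#define MIN_STAT_SAMPLING_TICKS	10

static unsigned int clamp_sampling_rate(unsigned int rate_us)
{
	/* keep at least ~10 scheduler ticks between two measurements */
	unsigned int min_us = jiffies_to_usecs(MIN_STAT_SAMPLING_TICKS);

	return rate_us < min_us ? min_us : rate_us;
}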

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Dave Jones <davej@redhat.com>


# e131832c 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] ondemand governor default sampling downfactor as 1

[PATCH] [5/5] ondemand governor default sampling downfactor as 1

Make default sampling downfactor 1.
This works better with earlier auto downscaling change in ondemand governor.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# c29f1403 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] ondemand governor automatic downscaling

[PATCH] [4/5] ondemand governor automatic downscaling

Here is a change of policy for the ondemand governor. The modification
concerns the frequency downscaling. Instead of decreasing to a lower
frequency when the CPU usage is under 20%, this new policy automatically
scales to the optimal frequency, the optimal frequency being the lowest
frequency that still provides enough processing power not to trigger the
upscaling policy.
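
The idea can be sketched as follows (a hypothetical helper; the frequency
table lookup would then map the result to a real hardware frequency):

static unsigned int optimal_freq(unsigned int cur_freq,
				 unsigned int load_pct,
				 unsigned int up_threshold_pct)
{
	/*
	 * A CPU that is load_pct busy at cur_freq needs roughly this
	 * much frequency to sit just below the up-scaling threshold.
	 */
	return cur_freq * load_pct / up_threshold_pct;
}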

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 9c7d269b 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] ondemand,conservative governor idle_tick clean-up

[PATCH] [3/5] ondemand,conservative governor idle_tick clean-up

Ondemand and conservative governor clean-up; it factorises the idle ticks
measurement.

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 790d76fa 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] ondemand,conservative governor store the idle ticks for all cpus

[PATCH] [2/5] ondemand,conservative governor store the idle ticks for all cpus

The ondemand and conservative governors did not store prev_cpu_idle_up
into prev_cpu_idle_down for CPUs other than the current CPU.

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# dac1c1a5 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] ondemand,conservative minor bug-fix and cleanup

[PATCH] [1/5] ondemand,conservative minor bug-fix and cleanup

Attached patch fixes some minor issues with Alexander's patch and related
cleanup in both ondemand and conservative governor.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>


# 1206aaac 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] Allow ondemand stepping to be changed by user.

Adds support so that the cpufreq change stepping is no longer fixed at 5%
and can be changed dynamically by the user.

Signed-off-by: Alexander Clouter <alex-kernel@digriz.org.uk>
Signed-off-by: Dave Jones <davej@redhat.com>


# c11420a6 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] Prevents unnecessary cpufreq changes if we are already at min/max

Signed-off-by: Alexander Clouter <alex-kernel@digriz.org.uk>
Signed-off-by: Dave Jones <davej@redhat.com>


# 3d5ee9e5 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] Add support to cpufreq_ondemand to ignore 'nice' cpu time

Signed-off-by: Alexander Clouter <alex-kernel@digriz.org.uk>
Signed-off-by: Dave Jones <davej@redhat.com>


# 7f335d4e 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] make cpufreq_gov_dbs static

This patch makes a needlessly global and EXPORT_SYMBOL'ed struct static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Dave Jones <davej@redhat.com>


# 6fe71165 31-May-2005 Dave Jones <davej@redhat.com>

[CPUFREQ] ondemand: trivial clean-ups

Trivial ondemand governor clean-ups:
- change from sampling_rate_in_HZ() to the official function
usecs_to_jiffies().
- use for_each_online_cpu() instead of "if (cpu_online(i))" (see the
sketch below)
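
A minimal sketch of the second clean-up (the loop body here is purely
hypothetical):

static void touch_online_cpus(void)
{
	unsigned int i;

	/* walk only online CPUs instead of testing cpu_online(i) by hand */
	for_each_online_cpu(i)
		pr_debug("cpu %u is online\n", i);
}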

Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Dave Jones <davej@redhat.com>


# 1da177e4 16-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org>

Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!