Searched +hist:4 +hist:a5838ad (Results 1 - 2 of 2) sorted by relevance

/linux-master/scripts/
H A D  Makefile.build

diff 3ed03f4d Tue Apr 18 15:43:47 MDT 2023 Miguel Ojeda <ojeda@kernel.org> rust: upgrade to Rust 1.68.2

This is the first upgrade to the Rust toolchain since the initial Rust
merge, from 1.62.0 to 1.68.2 (i.e. the latest).

# Context

The kernel currently supports only a single Rust version [1] (rather
than a minimum) given our usage of some "unstable" Rust features [2]
which do not promise backwards compatibility.

The goal is to reach a point where we can declare a minimum version for
the toolchain. For instance, by waiting for some of the features to be
stabilized. Therefore, the first minimum Rust version that the kernel
will support is "in the future".

# Upgrade policy

Given we will eventually need to reach that minimum version, it would be
ideal to upgrade the compiler from time to time to be as close as
possible to that goal and find any issues sooner. In the extreme, we
could upgrade as soon as a new Rust release is out. Of course, upgrading
so often is in stark contrast to what one normally would need for GCC
and LLVM, especially given the release schedule: 6 weeks for Rust vs.
half a year for LLVM and a year for GCC.

Having said that, there is no particular advantage to updating slowly
either: kernel developers in "stable" distributions are unlikely to be
able to use their distribution-provided Rust toolchain for the kernel
anyway [3]. Instead, by routinely upgrading to the latest release,
kernel developers using Linux distributions that track the latest Rust
release may be able to use those toolchains rather than Rust-provided ones,
especially if their package manager allows pinning / holding back /
downgrading the version for some days during windows where the versions may
not match. For instance, Arch, Fedora, Gentoo and openSUSE all provide
and track the latest version of Rust as it gets released every 6 weeks.

Then, when the minimum version is reached, we will stop upgrading and
decide how wide the window of support will be. For instance, a year of
Rust versions. We will probably want to start small, and then widen it
over time, just like the kernel did originally for LLVM, see commit
3519c4d6e08e ("Documentation: add minimum clang/llvm version").

# Unstable features stabilized

This upgrade allows us to remove the following unstable features since
they were stabilized:

- `feature(explicit_generic_args_with_impl_trait)` (1.63).
- `feature(core_ffi_c)` (1.64).
- `feature(generic_associated_types)` (1.65).
- `feature(const_ptr_offset_from)` (1.65, *).
- `feature(bench_black_box)` (1.66, *).
- `feature(pin_macro)` (1.68).

The ones marked with `*` apply only to our old `rust` branch, not
mainline yet, i.e. only for code that we may potentially upstream.

With this patch applied, the only unstable feature allowed to be used
outside the `kernel` crate is `new_uninit`, though other code to be
upstreamed may increase the list.

Please see [2] for details.
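
As a purely illustrative sketch (not code from the kernel tree; the names
below are made up for the example), a crate root can now rely on the
stabilized features directly and simply delete the corresponding
`#![feature(...)]` lines:

    // Hypothetical crate root for illustration only.
    #![no_std]

    // `core::ffi::c_int` needs no `feature(core_ffi_c)` since 1.64.
    use core::ffi::c_int;

    // Generic associated types need no feature gate since 1.65.
    trait Wrapper {
        type Wrapped<'a>
        where
            Self: 'a;
    }

    // A function using the now-stable FFI alias.
    fn add_one(x: c_int) -> c_int {
        x + 1
    }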

# Other required changes

Since 1.63, `rustdoc` triggers the `broken_intra_doc_links` lint for
links pointing to exported (`#[macro_export]`) `macro_rules`. An issue
was opened upstream [4], but it turns out it is intended behavior. For
the moment, just add an explicit reference for each link. Later we can
revisit this if `rustdoc` removes the compatibility measure.
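
For illustration only (a hypothetical `my_macro!`, not a macro from the
kernel crate), the workaround looks roughly like adding a reference-style
link target next to the bare link:

    #[macro_export]
    macro_rules! my_macro {
        () => {};
    }

    /// Wraps [`my_macro!`].
    ///
    /// [`my_macro!`]: crate::my_macro
    pub fn helper() {
        my_macro!();
    }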

Nevertheless, this was helpful for discovering a link that was
unintentionally pointing to the wrong place. Since that one was actually
wrong, it is fixed independently in a previous commit.

Another change was the addition of `cfg(no_rc)` and `cfg(no_sync)` in
upstream [5]; thus our original changes for that are removed.

Similarly, upstream now tests that it compiles successfully with
`#[cfg(not(no_global_oom_handling))]` [6], which allows us to get rid
of some changes, such as an `#[allow(dead_code)]`.

In addition, remove another `#[allow(dead_code)]` due to new uses
within the standard library.

Finally, add `try_extend_trusted` and move the code in `spec_extend.rs`
since upstream moved it for the infallible version.

# `alloc` upgrade and reviewing

There are a large number of changes, but the vast majority of them are
due to our `alloc` fork being upgraded all at once.

There are two kinds of changes to be aware of: the ones coming from
upstream, which we should follow as closely as possible, and the updates
needed in our added fallible APIs to keep them matching the newer
infallible APIs coming from upstream.

Instead of taking a look at the diff of this patch, an alternative
approach is reviewing a diff of the changes between upstream `alloc` and
the kernel's fork. This makes it easy to inspect only the kernel additions,
especially to check whether the fallible methods we already have still
match the infallible ones in the new version coming from upstream.

Another approach is reviewing the changes introduced in the additions in
the kernel fork between the two versions. This is useful to spot
potentially unintended changes to our additions.

To apply these approaches, one may follow steps similar to the following
to generate a pair of patches that show the differences between upstream
Rust and the kernel (for the subset of `alloc` we use) before and after
applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first
approach) or at the difference between those two patches (second
approach). For the latter, a side-by-side tool is recommended.

Link: https://rust-for-linux.com/rust-version-policy [1]
Link: https://github.com/Rust-for-Linux/linux/issues/2 [2]
Link: https://lore.kernel.org/rust-for-linux/CANiq72mT3bVDKdHgaea-6WiZazd8Mvurqmqegbe5JZxVyLR8Yg@mail.gmail.com/ [3]
Link: https://github.com/rust-lang/rust/issues/106142 [4]
Link: https://github.com/rust-lang/rust/pull/89891 [5]
Link: https://github.com/rust-lang/rust/pull/98652 [6]
Reviewed-by: Björn Roy Baron <bjorn3_gh@protonmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-By: Martin Rodriguez Reboredo <yakoyoku@gmail.com>
Tested-by: Ariel Miculas <amiculas@cisco.com>
Tested-by: David Gow <davidgow@google.com>
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20230418214347.324156-4-ojeda@kernel.org
[ Removed `feature(core_ffi_c)` from `uapi` ]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
diff 90e53c5e7 Fri Apr 07 18:25:45 MDT 2023 Benno Lossin <benno.lossin@proton.me> rust: add pin-init API core

This API is used to facilitate safe pinned initialization of structs. It
replaces cumbersome `unsafe` manual initialization with elegant safe macro
invocations.

Due to the size of this change it has been split into six commits:
1. This commit introducing the basic public interface: traits and
functions to represent and create initializers.
2. Adds the `#[pin_data]`, `pin_init!`, `try_pin_init!`, `init!` and
`try_init!` macros along with their internal types.
3. Adds the `InPlaceInit` trait that allows using an initializer to create
an object inside of a `Box<T>` and other smart pointers.
4. Adds the `PinnedDrop` trait and adds macro support for it in
the `#[pin_data]` macro.
5. Adds the `stack_pin_init!` macro, which allows pin-initializing a struct
on the stack.
6. Adds the `Zeroable` trait and `init::zeroed` function to initialize
types that have `0x00` in all bytes as a valid bit pattern.

--

In this section the problem that the new pin-init API solves is outlined.
This message describes the entirety of the API, not just the parts
introduced in this commit. For a more granular explanation and additional
information on pinning and this issue, view [1].

Pinning is Rust's way of enforcing the address stability of a value. When a
value gets pinned it will be impossible for safe code to move it to another
location. This is done by wrapping pointers to said object with `Pin<P>`.
This wrapper prevents safe code from creating mutable references to the
object, preventing mutable access, which is needed to move the value.
`Pin<P>` provides `unsafe` functions to circumvent this and allow
modifications regardless. It is then the programmer's responsibility to
uphold the pinning guarantee.

Many kernel data structures require a stable address, because there are
foreign pointers to them which would get invalidated by moving the
structure. Since these data structures are usually embedded in structs to
use them, this pinning property propagates to the containing struct. As a
result, most structs in both Rust and C code need to be pinned.

So if we want to have a `mutex` field in a Rust struct, this struct also
needs to be pinned, because a `mutex` contains a `list_head`. Additionally,
initializing a `list_head` requires already having the final memory
location available, because it is initialized by pointing it to itself. But
this presents another challenge in Rust: values have to be initialized at
all times. There is the `MaybeUninit<T>` wrapper type, which allows
handling uninitialized memory, but this requires using `unsafe` raw
pointers and casting the type to the initialized variant.

This problem gets exacerbated when considering encapsulation and the normal
safety requirements of Rust code. The fields of the Rust `Mutex<T>` should
not be accessible to normal driver code. After all, if anyone can modify
the fields, there is no way to ensure the invariants of the `Mutex<T>` are
upheld. But if the fields are inaccessible, then initialization of a
`Mutex<T>` needs to be somehow achieved via a function or a macro. Because
the `Mutex<T>` must be pinned in memory, the function cannot return it by
value. It also cannot allocate a `Box` to put the `Mutex<T>` into, because
that is an unnecessary allocation and indirection which would hurt
performance.

The solution in the rust tree (e.g. this commit: [2]) that is replaced by
this API is to split the initialization function into two parts:

1. A `new` function that returns a partially initialized `Mutex<T>`,
2. An `init` function that requires the `Mutex<T>` to be pinned and that
fully initializes the `Mutex<T>`.

Both of these functions have to be marked `unsafe`: a call to `new` needs
to be accompanied by a call to `init`, otherwise using the `Mutex<T>` could
result in UB, and calling `init` twice is not safe either. While `Mutex<T>`
initialization cannot fail, other structs might also have to allocate
memory, which would make successful initialization conditional and require
even more manual accommodation work.

Combined with the problem of pin-projections -- the way of accessing
fields of a pinned struct -- which also have an `unsafe` API, pinned
initialization ends up riddled with `unsafe`, resulting in very poor
ergonomics. Not only that, but having to call two functions, possibly
multiple lines apart, also makes it very easy to forget one outright or
lose it during refactoring.

Here is an example of the current way of initializing a struct with two
synchronization primitives (see [3] for the full example):

    struct SharedState {
        state_changed: CondVar,
        inner: Mutex<SharedStateInner>,
    }

    impl SharedState {
        fn try_new() -> Result<Arc<Self>> {
            let mut state = Pin::from(UniqueArc::try_new(Self {
                // SAFETY: `condvar_init!` is called below.
                state_changed: unsafe { CondVar::new() },
                // SAFETY: `mutex_init!` is called below.
                inner: unsafe {
                    Mutex::new(SharedStateInner { token_count: 0 })
                },
            })?);

            // SAFETY: `state_changed` is pinned when `state` is.
            let pinned = unsafe {
                state.as_mut().map_unchecked_mut(|s| &mut s.state_changed)
            };
            kernel::condvar_init!(pinned, "SharedState::state_changed");

            // SAFETY: `inner` is pinned when `state` is.
            let pinned = unsafe {
                state.as_mut().map_unchecked_mut(|s| &mut s.inner)
            };
            kernel::mutex_init!(pinned, "SharedState::inner");

            Ok(state.into())
        }
    }

The pin-init API of this patch solves this issue by providing a
comprehensive solution comprised of macros and traits. Here is the example
from above using the pin-init API:

    #[pin_data]
    struct SharedState {
        #[pin]
        state_changed: CondVar,
        #[pin]
        inner: Mutex<SharedStateInner>,
    }

    impl SharedState {
        fn new() -> impl PinInit<Self> {
            pin_init!(Self {
                state_changed <- new_condvar!("SharedState::state_changed"),
                inner <- new_mutex!(
                    SharedStateInner { token_count: 0 },
                    "SharedState::inner",
                ),
            })
        }
    }

Notably, the way the macro is used here requires no `unsafe` and thus
comes with the usual Rust promise that safe code does not introduce any
memory violations. Additionally, it is now up to the caller of `new()` to
decide the memory location of the `SharedState`; at the moment, they can
choose `Arc<T>`, `Box<T>` or the stack.

--

The API has the following architecture:
1. Initializer traits `PinInit<T, E>` and `Init<T, E>` that act like
closures.
2. Macros to create these initializer traits safely.
3. Functions to allow manually writing initializers.

The initializers (an `impl PinInit<T, E>`) receive a raw pointer pointing
to uninitialized memory and their job is to fully initialize a `T` at that
location. If initialization fails, they return an error (`E`) by value.

This way of initializing cannot be safely exposed to the user, since it
relies upon properties that are outside of the trait's control:
- the memory location (slot) needs to be valid memory,
- if initialization fails, the slot should not be read from,
- the value in the slot should be pinned, so it cannot move and the memory
cannot be deallocated until the value is dropped.

This is why using an initializer is facilitated by another trait that
ensures these requirements.
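
As a rough, self-contained sketch of the idea only (simplified names and
signatures, not the in-kernel `PinInit<T, E>` definition), an initializer
trait and its closure-based creation could look like this:

    // Sketch: an initializer gets a raw pointer to an uninitialized,
    // pinned slot and must either fully initialize it or return `Err`
    // without leaving the slot in a readable state.
    pub unsafe trait PinInitSketch<T, E> {
        /// # Safety
        ///
        /// `slot` must point to valid, writable, uninitialized memory
        /// that stays pinned until the written value is dropped; on
        /// `Err`, the caller must not read from `slot`.
        unsafe fn init_pinned(self, slot: *mut T) -> Result<(), E>;
    }

    // Sketch of the "initializers act like closures" point: any closure
    // upholding the same contract can serve as an initializer, but
    // promising that contract is an `unsafe` step.
    unsafe impl<T, E, F> PinInitSketch<T, E> for F
    where
        F: FnOnce(*mut T) -> Result<(), E>,
    {
        unsafe fn init_pinned(self, slot: *mut T) -> Result<(), E> {
            self(slot)
        }
    }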

These initializers can be created manually by just supplying a closure that
fulfills the same safety requirements as `PinInit<T, E>`, but this is an
`unsafe` operation. To allow safe initializer creation, the `pin_init!`
macro is provided along with three other variants: `try_pin_init!`,
`try_init!` and `init!`. These take a modified struct initializer as a
parameter and generate a closure that initializes the fields in sequence.
The macros take great care in upholding the safety requirements:
- A shadowed struct type is used as the return type of the closure instead
of `()`. This is to prevent early returns, as these would prevent full
initialization.
- To ensure every field is only initialized once, a normal struct
initializer is placed in unreachable code. The type checker will emit
errors if a field is missing or specified multiple times.
- When initializing a field fails, the whole initializer will fail and
automatically drop fields that have been initialized earlier.
- Only the correct initializer type is allowed for unpinned fields: you
cannot use an `impl PinInit<T, E>` to initialize a field that is not
structurally pinned.

To ensure the last point, an additional macro, `#[pin_data]`, is needed.
This macro annotates the struct itself, and the user specifies which fields
are structurally pinned and which are not.

Because dropping a pinned struct is also not allowed to break the pinning
invariants, another macro attribute `#[pinned_drop]` is needed. This
macro is introduced in a following commit.

These two macros also have mechanisms to ensure the overall safety of the
API. Additionally, they utilize a combined proc-macro / declarative-macro
design: first, a proc-macro enables the outer attribute syntax `#[...]` and
does some important pre-parsing. Notably, this prepares the generics such
that the declarative macro can handle them using token trees. Then the
actual parsing of the structure and the emission of code are handled by a
declarative macro.

For pin-projections the crates `pin-project` [4] and `pin-project-lite` [5]
had been considered, but were ultimately rejected:
- `pin-project` depends on `syn` [6] which is a very big dependency, around
50k lines of code.
- `pin-project-lite` is a more reasonable 5k lines of code, but contains a
very complex declarative macro to parse generics. On top of that, it
would require modifications that would need to be maintained
independently.

Link: https://rust-for-linux.com/the-safe-pinned-initialization-problem [1]
Link: https://github.com/Rust-for-Linux/linux/tree/0a04dc4ddd671efb87eef54dde0fb38e9074f4be [2]
Link: https://github.com/Rust-for-Linux/linux/blob/f509ede33fc10a07eba3da14aa00302bd4b5dddd/samples/rust/rust_miscdev.rs [3]
Link: https://crates.io/crates/pin-project [4]
Link: https://crates.io/crates/pin-project-lite [5]
Link: https://crates.io/crates/syn [6]
Co-developed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Benno Lossin <benno.lossin@proton.me>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Wedson Almeida Filho <wedsonaf@gmail.com>
Reviewed-by: Andreas Hindborg <a.hindborg@samsung.com>
Link: https://lore.kernel.org/r/20230408122429.1103522-7-y86-dev@protonmail.com
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
diff 295d8398 Sat Jan 07 02:18:15 MST 2023 Masahiro Yamada <masahiroy@kernel.org> kbuild: specify output names separately for each emission type from rustc

In Kbuild, two different rules must not write to the same file, but
this happens when compiling Rust source files.

For example, set CONFIG_SAMPLE_RUST_MINIMAL=m and run the following:

$ make -j$(nproc) samples/rust/rust_minimal.o samples/rust/rust_minimal.rsi \
samples/rust/rust_minimal.s samples/rust/rust_minimal.ll
[snip]
RUSTC [M] samples/rust/rust_minimal.o
RUSTC [M] samples/rust/rust_minimal.rsi
RUSTC [M] samples/rust/rust_minimal.s
RUSTC [M] samples/rust/rust_minimal.ll
mv: cannot stat 'samples/rust/rust_minimal.d': No such file or directory
make[3]: *** [scripts/Makefile.build:334: samples/rust/rust_minimal.ll] Error 1
make[3]: *** Waiting for unfinished jobs....
mv: cannot stat 'samples/rust/rust_minimal.d': No such file or directory
make[3]: *** [scripts/Makefile.build:309: samples/rust/rust_minimal.o] Error 1
mv: cannot stat 'samples/rust/rust_minimal.d': No such file or directory
make[3]: *** [scripts/Makefile.build:326: samples/rust/rust_minimal.s] Error 1
make[2]: *** [scripts/Makefile.build:504: samples/rust] Error 2
make[1]: *** [scripts/Makefile.build:504: samples] Error 2
make: *** [Makefile:2008: .] Error 2

The reason for the error is that 4 threads running in parallel rename
the same file, samples/rust/rust_minimal.d.

This does not happen when compiling C or assembly files because
-Wp,-MMD,$(depfile) explicitly specifies the dependency filepath.
$(depfile) is a unique path for each target.

Currently, rustc is only given --out-dir and --emit=<list-of-types>,
so all the Rust build rules output the dep-info into the default
<CRATE_NAME>.d, which causes the path conflict.

Fortunately, the --emit option is able to specify the output path
individually, with the form --emit=<type>=<path>.

Add --emit=dep-info=$(depfile) to the common part. Also, remove the
redundant --out-dir because the output path is specified for each type.

The code gets much cleaner because we do not need to rename *.d files.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Tested-by: Miguel Ojeda <ojeda@kernel.org>
Reviewed-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
diff c67a85be Fri Oct 14 10:53:02 MDT 2022 Nick Desaulniers <ndesaulniers@google.com> kbuild: add -fno-discard-value-names to cmd_cc_ll_c

When debugging LLVM IR, it can be handy for clang to not discard value
names used for local variables and parameters. Compare the generated IR.

-fdiscard-value-names:
define i32 @core_sys_select(i32 %0, ptr %1, ptr %2, ptr %3, ptr %4) {
%6 = alloca i64
%7 = alloca %struct.poll_wqueues
%8 = alloca [64 x i32]

-fno-discard-value-names:
define i32 @core_sys_select(i32 %n, ptr %inp, ptr %outp, ptr %exp,
ptr %end_time) {
%expire.i = alloca i64
%table.i = alloca %struct.poll_wqueues
%stack_fds = alloca [64 x i32]

The rule for generating human-readable LLVM IR (.ll) is only useful as a
debugging feature:

$ make LLVM=1 fs/select.ll

As Fangrui notes:
A LLVM_ENABLE_ASSERTIONS=off build of Clang defaults to
-fdiscard-value-names.

A LLVM_ENABLE_ASSERTIONS=on build of Clang defaults to
-fno-discard-value-names.

Explicitly enable -fno-discard-value-names so that the IR always contains
value names regardless of whether assertions were enabled or not.
Assertions generally are not enabled in releases of clang packaged by
distributions.

Link: https://github.com/ClangBuiltLinux/linux/issues/1467
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Fangrui Song <maskray@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
diff c6031b1d Fri May 27 04:01:53 MDT 2022 Masahiro Yamada <masahiroy@kernel.org> kbuild: make *.mod rule robust against too long argument error

Like built-in.a, the command length of the *.mod rule scales with
the depth of the directory times the number of objects in the Makefile.

Add the $(obj)/ prefix in the shell command (awk) instead of with Make's
builtin function.

In-tree modules still have some room to the limit (ARG_MAX=2097152),
but this is more future-proof for big modules in a deep directory.

For example, you can build i915 as a module (CONFIG_DRM_I915=m) and
compare drivers/gpu/drm/i915/.i915.mod.cmd with/without this commit.

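The idea, roughly: print each object name on its own line and let awk
prepend the directory, so the constant-size $(obj)/ prefix is added by
the shell pipeline rather than expanded by Make into the command line.
An illustration only ($(my-mod-objs) below is a hypothetical stand-in,
not the exact kbuild expression):

# Illustration: $(my-mod-objs) stands in for the real object list.
# awk deduplicates the names and prepends "$(obj)/" to each of them.
cmd_mod = printf '%s\n' $(my-mod-objs) | \
	$(AWK) '!x[$$0]++ { print "$(obj)/" $$0 }' > $@
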
The issue is more critical for external modules because the M= path
can be very long as Jeff Johnson reported before [1].

[1] https://lore.kernel.org/linux-kbuild/4c02050c4e95e4cb8cc04282695f8404@codeaurora.org/

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
diff 4ab7674f Mon Apr 18 10:50:39 MDT 2022 Josh Poimboeuf <jpoimboe@redhat.com> objtool: Make jump label hack optional

Objtool secretly does a jump label hack to overcome the limitations of
the toolchain. Make the hack explicit (and optional for other arches)
by turning it into a cmdline option and kernel config option.

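A hedged sketch of the gating (treat the spellings HAVE_JUMP_LABEL_HACK
and --hacks=jump_label as assumptions about the names used): an
architecture selects the config symbol, and the build system only
appends the objtool switch when it is set.

# Assumed names; the hack flag is only passed when the arch opts in.
objtool-args-$(CONFIG_HAVE_JUMP_LABEL_HACK) += --hacks=jump_label
objtool-args = $(objtool-args-y)
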
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/3bdcbfdd27ecb01ddec13c04bdf756a583b13d24.1650300597.git.jpoimboe@redhat.com
/linux-master/
H A D Makefile diff 4cece764 Sun Mar 24 15:10:05 MDT 2024 Linus Torvalds <torvalds@linux-foundation.org> Linux 6.9-rc1
diff 50a33998 Fri Mar 01 04:21:07 MST 2024 Masahiro Yamada <masahiroy@kernel.org> kbuild: fix inconsistent indentation in top Makefile

Commit 3b9ab248bc45 ("kbuild: use 4-space indentation when followed
by conditionals") introduced inconsistent indentation because it
deliberately touched only the conditional directives to minimize the
change set.

This commit reformats some blocks in the top Makefile so they are
consistently indented with 4 spaces.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
diff 3b9ab248 Thu Feb 01 18:31:42 MST 2024 Masahiro Yamada <masahiroy@kernel.org> kbuild: use 4-space indentation when followed by conditionals

GNU Make manual [1] clearly forbids a tab at the beginning of the
conditional directive line:
"Extra spaces are allowed and ignored at the beginning of the
conditional directive line, but a tab is not allowed."

This will not work for the next release of GNU Make, hence commit
82175d1f9430 ("kbuild: Replace tabs with spaces when followed by
conditionals") replaced the inappropriate tabs with 8 spaces.

However, the 8-space indentation cannot be visually distinguished.
Linus suggested 2-4 spaces for those nested if-statements. [2]

This commit redoes the replacement with 4 spaces.

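For illustration (CONFIG_FOO, CONFIG_BAR and the object name are
placeholders), nested conditional directives indented with 4 spaces,
which Make accepts because leading spaces before a conditional
directive are ignored, whereas a leading tab would not be:

ifdef CONFIG_FOO
    # Spaces (not a tab) before the nested directive: ignored by Make,
    # yet visually distinguishable from the outer level.
    ifdef CONFIG_BAR
        always-y += foo_bar_table.o
    endif
endif
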
[1]: https://www.gnu.org/software/make/manual/make.html#Conditional-Syntax
[2]: https://lore.kernel.org/all/CAHk-=whJKZNZWsa-VNDKafS_VfY4a5dAjG-r8BZgWk_a-xSepw@mail.gmail.com/

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
diff 66242cfa Mon Nov 20 11:37:19 MST 2023 Heiko Carstens <hca@linux.ibm.com> checkstack: allow to pass MINSTACKSIZE parameter

The checkstack script omits all functions with a stack usage of less than
100 bytes. However, the script already supports a parameter which allows
overriding the default, but it cannot be set with

$ make checkstack

Add a MINSTACKSIZE parameter which allows changing the default. This might
be useful to print the stack usage of all functions, or only those with
large stack usage:

$ make checkstack MINSTACKSIZE=0
$ make checkstack MINSTACKSIZE=800

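A rough sketch of the top Makefile side (the pipeline mirrors the
existing checkstack target; exact details may differ, and when
MINSTACKSIZE is left empty the script keeps its built-in 100-byte
default):

# Sketch: forward the make variable straight to checkstack.pl.
checkstack:
	$(OBJDUMP) -d vmlinux $$(find . -name '*.ko') | \
	$(PERL) $(srctree)/scripts/checkstack.pl $(CHECKSTACK_ARCH) $(MINSTACKSIZE)
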
Link: https://lkml.kernel.org/r/20231120183719.2188479-4-hca@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Cc: Maninder Singh <maninder1.s@samsung.com>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Vaneet Narang <v.narang@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff 4e3feaad Tue Jan 24 09:19:28 MST 2023 Nathan Chancellor <nathan@kernel.org> powerpc/vdso: Filter clang's auto var init zero enabler when linking

After commit 8d9acfce3332 ("kbuild: Stop using '-Qunused-arguments' with
clang"), the PowerPC vDSO shows the following error with clang-13 and
older when CONFIG_INIT_STACK_ALL_ZERO is enabled:

clang: error: argument unused during compilation: '-enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang' [-Werror,-Wunused-command-line-argument]

clang-14 added a change to make sure this flag never triggers
-Wunused-command-line-argument, so it is fixed with newer releases. For
the older releases that the kernel still supports, just filter out this
flag, as has been done for other flags.

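A hedged sketch of the filtering in the vDSO Makefile, assuming the top
Makefile exports the enabler as $(CC_AUTO_VAR_INIT_ZERO_ENABLER) (treat
that variable name as an assumption here):

# Drop the clang-only enabler from the flags reused for the vDSO link,
# since older clang warns about it when used as the linker driver.
ldflags-y := $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER), $(ldflags-y))
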
Fixes: f0a42fbab447 ("powerpc/vdso: Improve linker flags")
Fixes: 8d9acfce3332 ("kbuild: Stop using '-Qunused-arguments' with clang")
Link: https://github.com/llvm/llvm-project/commit/ca6d5813d17598cd180995fb3bdfca00f364475f
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
diff 4ec5183e Sun Feb 05 14:13:28 MST 2023 Linus Torvalds <torvalds@linux-foundation.org> Linux 6.2-rc7
diff 4bf73588 Mon Dec 05 14:48:19 MST 2022 Dmitry Goncharov <dgoncharov@users.sf.net> kbuild: Port silent mode detection to future gnu make.

Port silent mode detection to future (post make-4.4) versions of GNU Make.

Makefile contains the following piece of make code to detect if option -s is
specified on the command line.

ifneq ($(findstring s,$(filter-out --%,$(MAKEFLAGS))),)

This code is executed by make at parse time and assumes that MAKEFLAGS
does not contain command-line variable definitions.
Currently, if the user defines a=s on the command line, then MAKEFLAGS
contains " -- a=s" only at build time.
However, starting with commit dc2d963989b96161472b2cd38cef5d1f4851ea34,
MAKEFLAGS contains command-line definitions at both parse time and
build time.

This '-s' detection code then confuses a command-line variable
definition which contains the letter 's' with the option -s.

$ # old make
$ make net/wireless/ocb.o a=s
CALL scripts/checksyscalls.sh
DESCEND objtool
$ # this is a new make which defines MAKEFLAGS at parse time
$ ~/src/gmake/make/l64/make net/wireless/ocb.o a=s
$

We can see here that the letter 's' from 'a=s' was confused with -s.

This patch checks for the presence of -s using the method recommended by
the make manual here:
https://www.gnu.org/software/make/manual/make.html#Testing-Flags.

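A sketch of that recommended check; the kernel's actual hunk is a bit
more involved (it also has to cope with make 3.x, where the short
options are not grouped into the first word of MAKEFLAGS):

# make-4.0 and later group the single-letter options (without a leading
# dash) in the first word of MAKEFLAGS, so "a=s" can no longer match.
short-opts := $(firstword -$(MAKEFLAGS))

ifneq ($(findstring s,$(short-opts)),)
quiet=silent_
endif
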
Link: https://lists.gnu.org/archive/html/bug-make/2022-11/msg00190.html
Reported-by: Jan Palus <jpalus+gnu@fastmail.com>
Signed-off-by: Dmitry Goncharov <dgoncharov@users.sf.net>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
