History log of /netbsd-current/sys/dev/random.c
Revision  Date  Author  Comments
# 1.10 28-Dec-2021 riastradh

sys: Use preempt_point and preempt_needed, not open-coded versions.
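
A sketch of the idiom this standardizes on; the loop is illustrative,
not the code in this file, and preempt_point() is the helper named in
this commit:

  #include <sys/types.h>
  #include <sys/proc.h>

  static void
  long_loop_sketch(size_t nchunks)
  {
          while (nchunks--) {
                  /* ... one chunk of work ... */
                  preempt_point();  /* was an open-coded test + preempt() */
          }
  }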


Revision tags: thorpej-i2c-spi-conf2-base thorpej-futex2-base thorpej-cfargs2-base cjep_sun2x-base1 cjep_sun2x-base cjep_staticlib_x-base1 cjep_staticlib_x-base thorpej-i2c-spi-conf-base thorpej-cfargs-base thorpej-futex-base
# 1.9 13-Jan-2021 riastradh

/dev/random: Fix nonblocking read.

The flag passed to cdev_read is IO_NDELAY, not FNONBLOCK/O_NONBLOCK,
and although the latter two coincide, IO_NDELAY has a different
value.
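
A minimal sketch of the fix's shape, assuming the standard cdevsw read
signature; the function name and the comments are illustrative:

  #include <sys/types.h>
  #include <sys/errno.h>
  #include <sys/uio.h>
  #include <sys/fcntl.h>          /* FNONBLOCK (file-flag namespace) */
  #include <sys/vnode.h>          /* IO_NDELAY (ioflag namespace) */

  static int
  rnd_read_sketch(dev_t dev, struct uio *uio, int ioflag)
  {
          /*
           * Correct test: a cdev read gets IO_NDELAY in ioflag.
           * Testing FNONBLOCK here was the bug: same idea, but a
           * different bit value from a different namespace.
           */
          if (ioflag & IO_NDELAY)
                  return EWOULDBLOCK;     /* rather than sleeping */
          /* ... sleep for entropy, then uiomove() bytes out ... */
          return 0;
  }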


# 1.8 14-Aug-2020 riastradh

branches: 1.8.2;
New system call getrandom() compatible with Linux and others.

Three ways to call:

getrandom(p, n, 0)              Blocks at boot until full entropy.
                                Returns up to n bytes at p; guarantees
                                up to 256 bytes even if interrupted
                                after blocking. getrandom(0,0,0)
                                serves as an entropy barrier: return
                                only after system has full entropy.

getrandom(p, n, GRND_INSECURE)  Never blocks. Guarantees up to 256
                                bytes even if interrupted. Equivalent
                                to /dev/urandom. Safe only after
                                successful getrandom(...,0),
                                getrandom(...,GRND_RANDOM), or read
                                from /dev/random.

getrandom(p, n, GRND_RANDOM)    May block at any time. Returns up to n
                                bytes at p, but no guarantees about how
                                many -- may return as short as 1 byte.
                                Equivalent to /dev/random. Legacy.
                                Provided only for source compatibility
                                with Linux.

Can also use flags|GRND_NONBLOCK to fail with EWOULDBLOCK/EAGAIN
without producing any output instead of blocking.

- The combination GRND_INSECURE|GRND_NONBLOCK is the same as
GRND_INSECURE, since GRND_INSECURE never blocks anyway.

- The combinations GRND_INSECURE|GRND_RANDOM and
GRND_INSECURE|GRND_RANDOM|GRND_NONBLOCK are nonsensical and fail
with EINVAL.
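
A minimal userland sketch of the modes above, assuming the declaration
in <sys/random.h> as on NetBSD; error handling is abbreviated:

  #include <sys/types.h>
  #include <sys/random.h>

  #include <err.h>
  #include <stdio.h>

  int
  main(void)
  {
          unsigned char buf[32];
          ssize_t n;

          /* Block at boot until full entropy, then fill buf. */
          n = getrandom(buf, sizeof buf, 0);
          if (n == -1)
                  err(1, "getrandom");

          /* Same, but fail with EWOULDBLOCK/EAGAIN instead of blocking. */
          if (getrandom(buf, sizeof buf, GRND_NONBLOCK) == -1)
                  warn("getrandom(GRND_NONBLOCK)");

          /* Never blocks; safe only once full entropy has been reached. */
          if (getrandom(buf, sizeof buf, GRND_INSECURE) == -1)
                  err(1, "getrandom(GRND_INSECURE)");

          printf("read %zd bytes\n", n);
          return 0;
  }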

As proposed on tech-userlevel, tech-crypto, tech-security, and
tech-kern, and subsequently adopted by core (minus the getentropy part
of the proposal, because other operating systems and participants in
the discussion couldn't come to an agreement about getentropy and
blocking semantics):

https://mail-index.netbsd.org/tech-userlevel/2020/05/02/msg012333.html


# 1.7 08-May-2020 riastradh

Omit needless comment.

We've already committed part of the write, so ERESTART is definitely
not appropriate at this point.


# 1.6 08-May-2020 riastradh

Simplify loops by putting interrupt test at end.
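
A before/after sketch of the shape of that change; do_one_pass() and
interrupted() are hypothetical stand-ins for the loop body and the
signal check:

  /* Hypothetical stand-ins for the loop body and the signal check. */
  static void do_one_pass(void);
  static int interrupted(void);

  static void
  before_sketch(void)
  {
          /* Before: test at the top, entry condition duplicated. */
          while (!interrupted())
                  do_one_pass();
  }

  static void
  after_sketch(void)
  {
          /* After: always do at least one pass, test once at the end. */
          do {
                  do_one_pass();
          } while (!interrupted());
  }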


# 1.5 08-May-2020 riastradh

No need for a private pool cache. kmem serves just fine.
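
A sketch of that design choice: the transfer buffer comes straight from
kmem(9). The buffer size and function name are hypothetical; the
kmem_alloc/kmem_free calls are the real API:

  #include <sys/types.h>
  #include <sys/kmem.h>

  #define RND_BUFSIZE     512     /* hypothetical transfer-buffer size */

  static void
  copyout_sketch(void)
  {
          uint8_t *buf = kmem_alloc(RND_BUFSIZE, KM_SLEEP);

          /* ... fill buf from the RNG, uiomove() it out ... */
          kmem_free(buf, RND_BUFSIZE);
  }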


# 1.4 08-May-2020 riastradh

Simplify /dev/random without reference to entropy_depletion.


# 1.3 07-May-2020 riastradh

Consolidate entropy on RNDADDDATA and writes to /dev/random.

The man page for some time has advertised:

    Writing to either /dev/random or /dev/urandom influences subsequent
    output of both devices, guaranteed to take effect at next open.

So let's make that true again.

It is a conscious choice _not_ to consolidate entropy frequently.
For example, if you have a _slow_ HWRNG, which provides 32 bits of
entropy every few seconds, and you reveal a hash of that to the
adversary before any more comes in, the adversary can in principle
just keep guessing the intermediate state by a brute force search
over ~2^32 possibilities.

To mitigate this, the kernel generally tries to avoid consolidating
entropy from the per-CPU pools until doing so would bring us from
zero entropy to full entropy.

However, there are various _possible_ sources of entropy, like
interrupt timings, for which it is just hard to give honest estimates
that are valid on ~all machines. The time at which we read a seed
in, which usually happens via /etc/rc.d/random_seed early in
userland, is a reasonable time to gather this up. An operator or
system engineer who knows another opportune moment can always issue
`sysctl -w kern.entropy.consolidate=1'.
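
The programmatic equivalent of that command, sketched with
sysctlbyname(3); like the command, it needs sufficient privileges:

  #include <sys/param.h>
  #include <sys/sysctl.h>

  #include <err.h>

  int
  main(void)
  {
          int one = 1;

          /* Same effect as `sysctl -w kern.entropy.consolidate=1'. */
          if (sysctlbyname("kern.entropy.consolidate", NULL, NULL,
              &one, sizeof one) == -1)
                  err(1, "kern.entropy.consolidate");
          return 0;
  }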

Prompted by a suggestion from nia@ to consolidate entropy at the
first transition to userland. I chose not to do that because it
would likely cause warning fatigue on systems that are perfectly fine
with a random seed -- doing it this way instead lets rndctl -L
trigger the consolidation automatically. A subsequent commit will
reorder the operations in rndctl again to make it work out better.


# 1.2 30-Apr-2020 riastradh

Missed a spot -- need <sys/atomic.h> to use atomic_*.


# 1.1 30-Apr-2020 riastradh

Rewrite entropy subsystem.

Primary goals:

1. Use cryptography primitives designed and vetted by cryptographers.
2. Be honest about entropy estimation.
3. Propagate full entropy as soon as possible.
4. Simplify the APIs.
5. Reduce overhead of rnd_add_data and cprng_strong.
6. Reduce side channels of HWRNG data and human input sources.
7. Improve visibility of operation with sysctl and event counters.

Caveat: rngtest is no longer used generically for RND_TYPE_RNG
rndsources. Hardware RNG devices should have hardware-specific
health tests. For example, checking for two repeated 256-bit outputs
works to detect AMD's 2019 RDRAND bug. Not all hardware RNGs are
necessarily designed to produce exactly uniform output.

ENTROPY POOL

- A Keccak sponge, with test vectors, replaces the old LFSR/SHA-1
kludge as the cryptographic primitive.

- `Entropy depletion' is available for testing purposes with a sysctl
knob kern.entropy.depletion; otherwise it is disabled, and once the
system reaches full entropy it is assumed to stay there as far as
modern cryptography is concerned.

- No `entropy estimation' based on sample values. Such `entropy
estimation' is a contradiction in terms, dishonest to users, and a
potential source of side channels. It is the responsibility of the
driver author to study the entropy of the process that generates
the samples.

- Per-CPU gathering pools avoid contention on a global queue.

- Entropy is occasionally consolidated into the global pool -- as soon as
it's ready, if we've never reached full entropy, and with a rate
limit afterward. Operators can force consolidation now by running
sysctl -w kern.entropy.consolidate=1.

- rndsink(9) API has been replaced by an epoch counter which changes
whenever entropy is consolidated into the global pool.
. Usage: Cache entropy_epoch() when you seed. If entropy_epoch()
has changed when you're about to use whatever you seeded, reseed
(sketched in code after this list).
. Epoch is never zero, so initialize cache to 0 if you want to reseed
on first use.
. Epoch is -1 iff we have never reached full entropy -- in other
words, the old rnd_initial_entropy is (entropy_epoch() != -1) --
but it is better if you check for changes rather than for -1, so
that if the system estimated its own entropy incorrectly, entropy
consolidation has the opportunity to prevent future compromise.

- Sysctls and event counters provide operator visibility into what's
happening:
. kern.entropy.needed - bits of entropy short of full entropy
. kern.entropy.pending - bits known to be pending in per-CPU pools,
can be consolidated with sysctl -w kern.entropy.consolidate=1
. kern.entropy.epoch - number of times consolidation has happened,
never 0, and -1 iff we have never reached full entropy
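
The reseed pattern above, sketched in C. entropy_epoch() is the API
this commit describes; the <sys/entropy.h> header, the hook names, and
the surrounding plumbing are illustrative assumptions:

  #include <sys/types.h>
  #include <sys/entropy.h>

  static void my_reseed(void);            /* hypothetical consumer hook */
  static void my_output(void *, size_t);  /* hypothetical consumer hook */

  static unsigned my_epoch;       /* 0 forces a reseed on first use */

  static void
  my_generate(void *buf, size_t len)
  {
          if (entropy_epoch() != my_epoch) {
                  my_reseed();
                  my_epoch = entropy_epoch();
          }
          my_output(buf, len);
  }

And a sketch of querying one of the visibility knobs from a program,
assuming kern.entropy.needed is an integer node (the command
`sysctl kern.entropy.needed' reports the same thing):

  #include <sys/param.h>
  #include <sys/sysctl.h>

  #include <err.h>
  #include <stdio.h>

  int
  main(void)
  {
          int needed;
          size_t len = sizeof needed;

          if (sysctlbyname("kern.entropy.needed", &needed, &len,
              NULL, 0) == -1)
                  err(1, "kern.entropy.needed");
          printf("bits short of full entropy: %d\n", needed);
          return 0;
  }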

CPRNG_STRONG

- A cprng_strong instance is now a collection of per-CPU NIST
Hash_DRBGs. There are only two in the system: user_cprng for
/dev/urandom and sysctl kern.?random, and kern_cprng for kernel
users which may need to operate in interrupt context up to IPL_VM.

(Calling cprng_strong in interrupt context does not strike me as a
particularly good idea, so I added an event counter to see whether
anything actually does.)

- Event counters provide operator visibility into when reseeding
happens.
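
A sketch of a kernel consumer of kern_cprng, which per the note above
may run in interrupt context up to IPL_VM; the function name is
illustrative, and the four-argument call with flags 0 is an assumption
about the API rather than code from this commit:

  #include <sys/types.h>
  #include <sys/cprng.h>

  static uint32_t
  sketch_random_u32(void)
  {
          uint32_t x;

          /* kern_cprng is the system-wide kernel instance named above. */
          cprng_strong(kern_cprng, &x, sizeof x, 0);
          return x;
  }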

INTEL RDRAND/RDSEED, VIA C3 RNG (CPU_RNG)

- Unwired for now; will be rewired in a subsequent commit.

