History log of /linux-master/crypto/shash.c
Revision Date Author Comments
# 6a8dbd71 12-Mar-2024 Herbert Xu <herbert@gondor.apana.org.au>

Revert "crypto: remove CONFIG_CRYPTO_STATS"

This reverts commit 2beb81fbf0c01a62515a1bcef326168494ee2bd0.

While removing CONFIG_CRYPTO_STATS is a worthy goal, this also
removed unrelated infrastructure such as crypto_comp_alg_common.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2beb81fb 23-Feb-2024 Eric Biggers <ebiggers@google.com>

crypto: remove CONFIG_CRYPTO_STATS

Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.

Covering each of these points in detail:

1. Feature is not being used

Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1

The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.

It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.

Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it could be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).

Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.

There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.

2. CONFIG_CRYPTO_STATS significantly reduces performance

Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with a large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.

It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.

It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.

3. CONFIG_CRYPTO_STATS is a large maintenance burden

There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed the effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.

Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# fea845fd 28-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: shash - don't exclude async statuses from error stats

EINPROGRESS and EBUSY have special meaning for async operations.
However, shash is always synchronous, so these statuses have no special
meaning for shash and don't need to be excluded when handling errors.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2f1f34c1 22-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: ahash - optimize performance when wrapping shash

The "ahash" API provides access to both CPU-based and hardware offload-
based implementations of hash algorithms. Typically the former are
implemented as "shash" algorithms under the hood, while the latter are
implemented as "ahash" algorithms. Various kernel subsystems use the
ahash API because they want to support hashing hardware offload without
using a separate API for it.

Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.

This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.

It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# ecf889b7 22-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: hash - move "ahash wrapping shash" functions to ahash.c

The functions that are involved in implementing the ahash API on top of
an shash algorithm belong better in ahash.c, not in shash.c where they
currently are. Move them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# c626910f 22-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: ahash - remove support for nonzero alignmask

Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than it is for, e.g., the
"skcipher" alignmask. Second, the key and result buffers are
given as virtual addresses and cannot (in general) be DMA'ed into, so
drivers end up having to copy to/from them in software anyway. As a
result it's easy to use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.

This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 345bfa3c 18-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: shash - remove support for nonzero alignmask

Currently, the shash API checks the alignment of all message, key, and
digest buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. In the case of the message buffer,
cryptographic hash functions internally operate on fixed-size blocks, so
implementations end up needing to deal with byte-aligned data anyway
because the length(s) passed to ->update might not be divisible by the
block size. Word-alignment of the message can theoretically be helpful
for CRCs, like what was being done in crc32c-sparc64. But in practice
it's better for the algorithms to use unaligned accesses or align the
message themselves. A similar argument applies to the key and digest.

In any case, no shash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from shash. The benefit is
that all the code to handle "misaligned" buffers in the shash API goes
away, reducing the overhead of the shash API.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 08debaa5 18-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: shash - eliminate indirect call for default import and export

Most shash algorithms don't have custom ->import and ->export functions,
resulting in the memcpy() based default being used. Yet,
crypto_shash_import() and crypto_shash_export() still make an indirect
call, which is expensive. Therefore, change how the default import and
export are called to make it so that crypto_shash_import() and
crypto_shash_export() don't do an indirect call in this case.
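
Concretely, the export path becomes a direct memcpy() unless the
algorithm overrides it; a sketch close to the resulting code:

int crypto_shash_export(struct shash_desc *desc, void *out)
{
	struct crypto_shash *tfm = desc->tfm;
	struct shash_alg *shash = crypto_shash_alg(tfm);

	/* Only algorithms with a custom ->export() pay the indirect call. */
	if (shash->export)
		return shash->export(desc, out);

	memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
	return 0;
}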

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2e02c25a 09-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: shash - fold shash_digest_unaligned() into crypto_shash_digest()

Fold shash_digest_unaligned() into its only remaining caller. Also,
avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to
crypto_shash_init() with shash->init(desc). Finally, replace
shash_update_unaligned() + shash_final_unaligned() with
shash_finup_unaligned() which does exactly that.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 313a4074 09-Oct-2023 Eric Biggers <ebiggers@google.com>

crypto: shash - optimize the default digest and finup

For an shash algorithm that doesn't implement ->digest, currently
crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to
shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1'
bytes, then the rest), then 1 to ->final. This is true even if the
algorithm implements ->finup. This is caused by an unnecessary fallback
to code meant to handle unaligned inputs. In fact,
crypto_shash_digest() already does the needed alignment check earlier.
Therefore, optimize the number of indirect calls for aligned inputs to 3
when the algorithm implements ->finup. It remains at 5 when the
algorithm implements neither ->finup nor ->digest.

Similarly, for an shash algorithm that doesn't implement ->finup,
currently crypto_shash_finup() with aligned input makes 4 indirect
calls: 1 to shash_finup_unaligned(), 2 to ->update, and
1 to ->final. Optimize this to 3 calls.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# b7be31b0 19-May-2023 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Allow cloning on algorithms with no init_tfm

Some shash algorithms are so simple that they don't have an init_tfm
function. These can be cloned trivially. Check this before failing
in crypto_clone_shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# b8969a1b 02-May-2023 Ondrej Mosnacek <omosnace@redhat.com>

crypto: api - Fix CRYPTO_USER checks for report function

Checking the config via ifdef incorrectly compiles out the report
functions when CRYPTO_USER is set to =m. Fix it by using IS_ENABLED()
instead.

Fixes: c0f9e01dd266 ("crypto: api - Check CRYPTO_USER instead of NET for report")
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# ed3630b8 13-Apr-2023 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Add crypto_clone_ahash/shash

This patch adds the helpers crypto_clone_ahash and crypto_clone_shash.
They are the hash-specific counterparts of crypto_clone_tfm.

This allows code paths that cannot otherwise allocate a hash tfm
object to do so. Once a new tfm has been obtained, its key can
then be changed without impacting other users.

Note that only algorithms that implement clone_tfm can be cloned.
However, all keyless hashes can be cloned by simply reusing the
tfm object.
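
A minimal usage sketch (the wrapper function and key names here are
illustrative, not part of the patch):

#include <crypto/hash.h>

/* Sketch: hash with a private, re-keyed copy of an existing tfm. */
static int use_rekeyed_clone(struct crypto_shash *tfm, const u8 *key,
			     unsigned int keylen)
{
	struct crypto_shash *clone;
	int err;

	clone = crypto_clone_shash(tfm);
	if (IS_ERR(clone))
		return PTR_ERR(clone);

	err = crypto_shash_setkey(clone, key, keylen);
	/* ... use 'clone' without affecting users of 'tfm' ... */
	crypto_free_shash(clone);
	return err;
}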

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 9697b328 27-Mar-2023 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Remove maximum statesize limit

Remove the HASH_MAX_STATESIZE limit now that it is unused.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# c0f9e01d 16-Feb-2023 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Check CRYPTO_USER instead of NET for report

The report function is currently conditionalised on CONFIG_NET.
As it's only used by CONFIG_CRYPTO_USER, conditionalising on that
instead of CONFIG_NET makes more sense.

This gets rid of a rarely used code-path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 42808e5d 16-Feb-2023 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Count error stats differently

Move all stat code specific to hash into the hash code.

While we're at it, change the stats so that bytes and counts
are always incremented even in case of error. This allows the
reference counting to be removed as we can now increment the
counters prior to the operation.

After the operation we simply increase the error count if necessary.
This is safe as errors can only occur synchronously (or rather,
the existing code already ignored asynchronous errors which are
only visible to the callback function).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# aa969515 13-Dec-2022 Ard Biesheuvel <ardb@kernel.org>

crypto: scatterwalk - use kmap_local() not kmap_atomic()

kmap_atomic() is used to create short-lived mappings of pages that may
not be accessible via the kernel direct map. This is only needed on
32-bit architectures that implement CONFIG_HIGHMEM, but it can be used
on other 64-bit architectures too, where the returned mapping is simply
the kernel direct address of the page.

However, kmap_atomic() does not support migration on CONFIG_HIGHMEM
configurations, due to the use of per-CPU kmap slots, and so it disables
preemption on all architectures, not just the 32-bit ones. This implies
that all scatterwalk based crypto routines essentially execute with
preemption disabled all the time, which is less than ideal.

So let's switch scatterwalk_map/_unmap and the shash/ahash routines to
kmap_local() instead, which serves a similar purpose, but without the
resulting impact on preemption on architectures that have no need for
CONFIG_HIGHMEM.
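
As a sketch, an shash user following this pattern looks like the
following (the wrapper function is illustrative):

#include <crypto/hash.h>
#include <linux/highmem.h>

/* Sketch: hash data in a (possibly highmem) page via a short-lived
 * local mapping; preemption stays enabled on !CONFIG_HIGHMEM. */
static int hash_one_page(struct shash_desc *desc, struct page *page,
			 unsigned int len)
{
	void *addr = kmap_local_page(page);
	int err;

	err = crypto_shash_update(desc, addr, len);
	kunmap_local(addr);
	return err;
}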

Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Elliott, Robert (Servers)" <elliott@hpe.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 1c799571 24-Nov-2022 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Increase MAX_ALGAPI_ALIGNMASK to 127

Previously we limited the maximum alignment mask to 63. This
is mostly due to stack usage for shash. This patch introduces
a separate limit for shash algorithms and increases the general
limit to 127 which is the value that we need for DMA allocations
on arm64.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# c060e16d 18-Nov-2022 Eric Biggers <ebiggers@google.com>

Revert "crypto: shash - avoid comparing pointers to exported functions under CFI"

This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI
no longer breaks cross-module function address equality, so
crypto_shash_alg_has_setkey() can now be an inline function like before.

This commit should not be backported to kernels that don't have the new
CFI implementation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 85cc4243 27-Jun-2022 Hannes Reinecke <hare@suse.de>

crypto: add crypto_has_shash()

Add helper function to determine if a given synchronous hash is supported.
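
Usage sketch (the wrapper and the algorithm name are just examples):

#include <crypto/hash.h>

/* Sketch: probe for a synchronous implementation before relying on it. */
static int check_hash_available(void)
{
	if (!crypto_has_shash("hmac(sha256)", 0, 0))
		return -EINVAL;

	return 0;
}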

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 22ca9f4a 10-Jun-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: shash - avoid comparing pointers to exported functions under CFI

crypto_shash_alg_has_setkey() is implemented by testing whether the
.setkey() member of a struct shash_alg points to the default version,
called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static
inline, this requires shash_no_setkey() to be exported to modules.

Unfortunately, when building with CFI, function pointers are routed
via CFI stubs which are private to each module (or to the kernel proper)
and so this function pointer comparison may fail spuriously.

Let's fix this by turning crypto_shash_alg_has_setkey() into an out of
line function.
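
After the change, the comparison happens inside shash.c, in the same
translation unit as shash_no_setkey() itself; a sketch of the resulting
helper:

/* Out of line, so CFI stubs cannot skew the pointer comparison. */
bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
{
	return alg->setkey != shash_no_setkey;
}
EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);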

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 453431a5 07-Aug-2020 Waiman Long <longman@redhat.com>

mm, treewide: rename kzfree() to kfree_sensitive()

As said by Linus:

  A symmetric naming is only helpful if it implies symmetries in use.
  Otherwise it's actively misleading.

  In "kzalloc()", the z is meaningful and an important part of what the
  caller wants.

  In "kzfree()", the z is actively detrimental, because maybe in the
  future we really _might_ want to use that "memfill(0xdeadbeef)" or
  something. The "zero" part of the interface isn't even _relevant_.

The main reason that kzfree() exists is to clear sensitive information
that should not be leaked to other future users of the same memory
objects.

Rename kzfree() to kfree_sensitive() to follow the example of the recently
added kvfree_sensitive() and make the intention of the API more explicit.
In addition, memzero_explicit() is used to clear the memory to make sure
that it won't get optimized away by the compiler.

The renaming is done by using the command sequence:

  git grep -w --name-only kzfree |\
  xargs sed -i 's/kzfree/kfree_sensitive/'

followed by some editing of the kfree_sensitive() kerneldoc and adding
a kzfree backward compatibility macro in slab.h.

[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 822a98b8 01-May-2020 Eric Biggers <ebiggers@google.com>

crypto: hash - introduce crypto_shash_tfm_digest()

Currently the simplest use of the shash API is to use
crypto_shash_digest() to digest a whole buffer. However, this still
requires allocating a hash descriptor (struct shash_desc). Many users
don't really want to preallocate one and instead just use a one-off
descriptor on the stack like the following:

	{
		SHASH_DESC_ON_STACK(desc, tfm);
		int err;

		desc->tfm = tfm;

		err = crypto_shash_digest(desc, data, len, out);

		shash_desc_zero(desc);
	}

Wrap this in a new helper function crypto_shash_tfm_digest() that can be
used instead of the above.
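
With the helper, the above reduces to a single call, with the descriptor
handling (including the zeroization) done inside the API:

	err = crypto_shash_tfm_digest(tfm, data, len, out);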

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# d4fdc2df 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - enforce that all instances have a ->free() method

All instances need to have a ->free() method, but people could forget to
set it and then not notice if the instance is never unregistered. To
help detect this bug earlier, don't allow an instance without a ->free()
method to be registered, and complain loudly if someone tries to do it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# a24a1fd7 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - remove crypto_template::{alloc,free}()

Now that all templates provide a ->create() method which creates an
instance, installs a strongly-typed ->free() method directly to it, and
registers it, the older ->alloc() and ->free() methods in
'struct crypto_template' are no longer used. Remove them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# a39c66cc 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: shash - convert shash_free_instance() to new style

Convert shash_free_instance() and its users to the new way of freeing
instances, where a ->free() method is installed to the instance struct
itself. This replaces the weakly-typed method crypto_template::free().

This will allow removing support for the old way of freeing instances.

Also give shash_free_instance() a more descriptive name to reflect that
it's only for instances with a single spawn, not for any instance.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 48fb3e57 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: hash - add support for new way of freeing instances

Add support to shash and ahash for the new way of freeing instances
(already used for skcipher, aead, and akcipher) where a ->free() method
is installed to the instance struct itself. These methods are more
strongly-typed than crypto_template::free(), which they replace.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 629f1afc 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - remove obsoleted instance creation helpers

Remove lots of helper functions that were previously used for
instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner
algorithm directly from a template parameter. These were replaced
with getting the algorithm's name, then calling crypto_grab_*().

- crypto_init_spawn2() and similar functions initialized a spawn, given
an algorithm. Similarly, these were replaced with crypto_grab_*().

- crypto_alloc_instance() and similar functions allocated an instance
with a single spawn, given the inner algorithm. These aren't useful
anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# fdfad1ff 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: shash - introduce crypto_grab_shash()

Currently, shash spawns are initialized by using shash_attr_alg() or
crypto_alg_mod_lookup() to look up the shash algorithm, then calling
crypto_init_shash_spawn().

This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_shash() so that we can convert all
templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# c6d633a9 15-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: algapi - make unregistration functions return void

Some of the algorithm unregistration functions return -ENOENT when asked
to unregister a non-registered algorithm, while others always return 0
or always return void. But no users check the return value, except for
two of the bulk unregistration functions which print a message on error
but still always return 0 to their caller, and crypto_del_alg() which
calls crypto_unregister_instance() which always returns 0.

Since unregistering a non-registered algorithm is always a kernel bug
but there isn't anything callers should do to handle this situation at
runtime, let's simplify things by making all the unregistration
functions return void, and moving the error message into
crypto_unregister_alg() and upgrading it to a WARN().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# fbce6be5 07-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Add init_tfm/exit_tfm and verify descsize

The shash interface supports a dynamic descsize field because of
the presence of fallbacks (it's just padlock-sha actually, perhaps
we can remove it one day). As it is the API does not verify the
setting of descsize at all. It is up to the individual algorithms
to ensure that descsize does not exceed the specified maximum value
of HASH_MAX_DESCSIZE (going above would cause stack corruption).

In order to allow the API to impose this limit directly, this patch
adds init_tfm/exit_tfm hooks to the shash_alg structure. We can
then verify the descsize setting in the API directly.
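
A simplified sketch of the shape this takes in the tfm-init path (the
function name here is illustrative and error handling is trimmed):

/* Sketch: run the algorithm's init_tfm hook, then enforce the limit
 * that implementations previously had to respect on their own. */
static int shash_init_tfm_checked(struct crypto_shash *hash)
{
	struct shash_alg *alg = crypto_shash_alg(hash);
	int err;

	hash->descsize = alg->descsize;

	if (!alg->init_tfm)
		return 0;

	err = alg->init_tfm(hash);
	if (err)
		return err;

	/* ->init_tfm() may raise descsize, but never past the maximum. */
	if (WARN_ON_ONCE(hash->descsize > HASH_MAX_DESCSIZE)) {
		if (alg->exit_tfm)
			alg->exit_tfm(hash);
		return -EINVAL;
	}

	return 0;
}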

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# c2881789 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms

The essiv and hmac templates refuse to use any hash algorithm that has a
->setkey() function, which includes not just algorithms that always need
a key, but also algorithms that optionally take a key.

Previously the only optionally-keyed hash algorithms in the crypto API
were non-cryptographic algorithms like crc32, so this didn't really
matter. But that's changed with BLAKE2 support being added. BLAKE2
should work with essiv and hmac, just like any other cryptographic hash.

Fix this by allowing the use of both algorithms without a ->setkey()
function and algorithms that have the OPTIONAL_KEY flag set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2874c5fd 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 877b5691 14-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - remove shash_desc::flags

The flags field in 'struct shash_desc' never actually does anything.
The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
However, no shash algorithm ever sleeps, making this flag a no-op.

With this being the case, inevitably some users who can't sleep wrongly
pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
actually started sleeping. For example, the shash_ahash_*() functions,
which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
from the ahash API to the shash API. However, the shash functions are
called under kmap_atomic(), so actually they're assumed to never sleep.

Even if it turns out that some users do need preemption points while
hashing large buffers, we could easily provide a helper function
crypto_shash_update_large() which divides the data into smaller chunks
and calls crypto_shash_update() and cond_resched() for each chunk. It's
not necessary to have a flag in 'struct shash_desc', nor is it necessary
to make individual shash algorithms aware of this at all.
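
Such a helper is not part of this patch, but a hypothetical sketch of it
is straightforward:

#include <crypto/hash.h>
#include <linux/kernel.h>
#include <linux/sched.h>

/* Hypothetical: hash a large buffer in chunks with explicit preemption
 * points, instead of a per-descriptor MAY_SLEEP flag. */
static int crypto_shash_update_large(struct shash_desc *desc,
				     const u8 *data, unsigned int len)
{
	int err;

	while (len) {
		unsigned int n = min_t(unsigned int, len, PAGE_SIZE);

		err = crypto_shash_update(desc, data, n);
		if (err)
			return err;

		data += n;
		len -= n;
		cond_resched();
	}

	return 0;
}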

Therefore, remove shash_desc::flags, and document that the
crypto_shash_*() functions can be called from any context.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 54fe792b 14-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - remove useless crypto_yield() in shash_ahash_digest()

The crypto_yield() in shash_ahash_digest() occurs after the entire
digest operation already happened, so there's no real point. Remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 67cb60e4 14-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - fix missed optimization in shash_ahash_digest()

shash_ahash_digest(), which is the ->digest() method for ahash tfms that
use an shash algorithm, has an optimization where crypto_shash_digest()
is called if the data is in a single page. But an off-by-one error
prevented this path from being taken unless the user happened to provide
extra data in the scatterlist. Fix it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2b091e32 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - remove pointless checks of shash_alg::{export,import}

crypto_init_shash_ops_async() only gives the ahash tfm non-NULL
->export() and ->import() if the underlying shash alg has these
non-NULL. This doesn't make sense because when an shash algorithm is
registered, shash_prepare_alg() sets a default ->export() and ->import()
if the implementor didn't provide them. And elsewhere it's assumed that
all shash algs and ahash tfms have non-NULL ->export() and ->import().

Therefore, remove these unnecessary, always-true conditions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 41a2e94f 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - require neither or both ->export() and ->import()

Prevent registering shash algorithms that implement ->export() but not
->import(), or ->import() but not ->export(). Such cases don't make
sense and could confuse the check that shash_prepare_alg() does for just
->export().

I don't believe this affects any existing algorithms; this is just
preventing future mistakes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# ba7d7433 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails

Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.

It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.

Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
->setkey() for those must nevertheless be atomic. That's fine for now
since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
not intended that OPTIONAL_KEY be used much.
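
For shash, the setkey path then looks roughly like this (simplified
sketch; the real code also has to handle unaligned keys):

int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
			unsigned int keylen)
{
	struct shash_alg *shash = crypto_shash_alg(tfm);
	int err;

	err = shash->setkey(tfm, key, keylen);
	if (unlikely(err)) {
		/* The tfm may hold a half-installed key: require a
		 * successful setkey before it can be used again. */
		if (crypto_shash_alg_has_setkey(shash) &&
		    !(shash->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
			crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
		return err;
	}

	crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
	return 0;
}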

[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]

Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 37db69e0 03-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: user - clean up report structure copying

There have been a pretty ridiculous number of issues with initializing
the report structures that are copied to userspace by NETLINK_CRYPTO.
Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
expansion") replaced some strncpy()s with strlcpy()s, thereby
introducing information leaks. Later two other people tried to replace
other strncpy()s with strlcpy() too, which would have introduced even
more information leaks:

- https://lore.kernel.org/patchwork/patch/954991/
- https://patchwork.kernel.org/patch/10434351/

Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
statistics") also uses the buggy strlcpy() approach and therefore leaks
uninitialized memory to userspace. A fix was proposed, but it was
originally incomplete.

Seeing as how apparently no one can get this right with the current
approach, change all the reporting functions to:

- Start by memsetting the report structure to 0. This guarantees it's
always initialized, regardless of what happens later.
- Initialize all strings using strscpy(). This is safe after the
memset, ensures null termination of long strings, avoids unnecessary
work, and avoids the -Wstringop-truncation warnings from gcc.
- Use sizeof(var) instead of sizeof(type). This is more robust against
copy+paste errors.

For simplicity, also reuse the -EMSGSIZE return value from nla_put().
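
For shash, the reporting function then takes roughly this shape (a
sketch following the rules above):

static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg)
{
	struct crypto_report_hash rhash;
	struct shash_alg *salg = __crypto_shash_alg(alg);

	/* Zero everything first: no uninitialized bytes can leak. */
	memset(&rhash, 0, sizeof(rhash));

	/* strscpy() is safe after the memset and null-terminates. */
	strscpy(rhash.type, "shash", sizeof(rhash.type));

	rhash.blocksize = alg->cra_blocksize;
	rhash.digestsize = salg->digestsize;

	return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash);
}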

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# f3569fd6 07-Aug-2018 Kees Cook <keescook@chromium.org>

crypto: shash - Remove VLA usage in unaligned hashing

In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# b68a7ec1 07-Aug-2018 Kees Cook <keescook@chromium.org>

crypto: hash - Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro), along with a few other cases. Similar limits are turned into
macros as well.

A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 9fa68f62 03-Jan-2018 Eric Biggers <ebiggers@google.com>

crypto: hash - prevent using keyed hashes without setting key

Currently, almost none of the keyed hash algorithms check whether a key
has been set before proceeding. Some algorithms are okay with this and
will effectively just use a key of all 0's or some other bogus default.
However, others will severely break, as demonstrated using
"hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
via a (potentially exploitable) stack buffer overflow.

A while ago, this problem was solved for AF_ALG by pairing each hash
transform with a 'has_key' bool. However, there are still other places
in the kernel where userspace can specify an arbitrary hash algorithm by
name, and the kernel uses it as unkeyed hash without checking whether it
is really unkeyed. Examples of this include:

- KEYCTL_DH_COMPUTE, via the KDF extension
- dm-verity
- dm-crypt, via the ESSIV support
- dm-integrity, via the "internal hash" mode with no key given
- drbd (Distributed Replicated Block Device)

This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
privileges to call.

Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
->crt_flags of each hash transform that indicates whether the transform
still needs to be keyed or not. Then, make the hash init, import, and
digest functions return -ENOKEY if the key is still needed.

The new flag also replaces the 'has_key' bool which algif_hash was
previously using, thereby simplifying the algif_hash implementation.
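
The check itself is cheap; a sketch of the guard that sits at the top of
the affected entry points (the wrapper name is illustrative):

#include <crypto/hash.h>

/* Sketch: refuse to operate on a keyed tfm whose key was never
 * (successfully) set. */
static int shash_no_key_guard(struct shash_desc *desc)
{
	if (crypto_shash_get_flags(desc->tfm) & CRYPTO_TFM_NEED_KEY)
		return -ENOKEY;

	return 0;
}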

Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# af3ff804 28-Nov-2017 Eric Biggers <ebiggers@google.com>

crypto: hmac - require that the underlying hash algorithm is unkeyed

Because the HMAC template didn't check that its underlying hash
algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))"
through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC
being used without having been keyed, resulting in sha3_update() being
called without sha3_init(), causing a stack buffer overflow.

This is a very old bug, but it seems to have only started causing real
problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3)
because the innermost hash's state is ->import()ed from a zeroed buffer,
and it just so happens that other hash algorithms are fine with that,
but SHA-3 is not. However, there could be arch or hardware-dependent
hash algorithms also affected; I couldn't test everything.

Fix the bug by introducing a function crypto_shash_alg_has_setkey()
which tests whether a shash algorithm is keyed. Then update the HMAC
template to require that its underlying hash algorithm is unkeyed.

Here is a reproducer:

#include <linux/if_alg.h>
#include <sys/socket.h>

int main()
{
	int algfd;
	struct sockaddr_alg addr = {
		.salg_type = "hash",
		.salg_name = "hmac(hmac(sha3-512-generic))",
	};
	char key[4096] = { 0 };

	algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
	setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
}

Here was the KASAN report from syzbot:

BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline]
BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044

CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:53
print_address_description+0x73/0x250 mm/kasan/report.c:252
kasan_report_error mm/kasan/report.c:351 [inline]
kasan_report+0x25b/0x340 mm/kasan/report.c:409
check_memory_region_inline mm/kasan/kasan.c:260 [inline]
check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
memcpy+0x37/0x50 mm/kasan/kasan.c:303
memcpy include/linux/string.h:341 [inline]
sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
crypto_shash_update+0xcb/0x220 crypto/shash.c:109
shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
hmac_finup+0x182/0x330 crypto/hmac.c:152
crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
hmac_setkey+0x36a/0x690 crypto/hmac.c:66
crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
shash_async_setkey+0x47/0x60 crypto/shash.c:207
crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
hash_setkey+0x40/0x90 crypto/algif_hash.c:446
alg_setkey crypto/af_alg.c:221 [inline]
alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
SYSC_setsockopt net/socket.c:1851 [inline]
SyS_setsockopt+0x189/0x360 net/socket.c:1830
entry_SYSCALL_64_fastpath+0x1f/0x96

Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# b61907bb 09-Oct-2017 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Fix zero-length shash ahash digest crash

The shash ahash digest adaptor function may crash if given a
zero-length input together with a null SG list. This is because
it tries to read the SG list before looking at the length.

This patch fixes it by checking the length first.

Cc: <stable@vger.kernel.org>
Reported-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Stephan Müller <smueller@chronox.de>


# 9039f3ef 02-Oct-2017 Jia-Ju Bai <baijiaju1990@163.com>

crypto: shash - Fix a sleep-in-atomic bug in shash_setkey_unaligned

The SCTP code may sleep under a spinlock, and the function call path is:

	sctp_generate_t3_rtx_event (acquires the spinlock)
	  sctp_do_sm
	    sctp_side_effects
	      sctp_cmd_interpreter
	        sctp_make_init_ack
	          sctp_pack_cookie
	            crypto_shash_setkey
	              shash_setkey_unaligned
	                kmalloc(GFP_KERNEL)

For the same reason, the orinoco driver may sleep in its interrupt handler,
and the function call path is:

	orinoco_rx_isr_tasklet
	  orinoco_rx
	    orinoco_mic
	      crypto_shash_setkey
	        shash_setkey_unaligned
	          kmalloc(GFP_KERNEL)

To fix it, GFP_KERNEL is replaced with GFP_ATOMIC.
This bug was found by my static analysis tool and by code review.
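
The function in question allocates a temporary aligned copy of the key,
so the whole fix is the allocation flag; a sketch of the era's code,
simplified:

static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
				  unsigned int keylen)
{
	struct shash_alg *shash = crypto_shash_alg(tfm);
	unsigned long alignmask = crypto_shash_alignmask(tfm);
	unsigned long absize;
	u8 *buffer, *alignbuffer;
	int err;

	absize = keylen + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
	buffer = kmalloc(absize, GFP_ATOMIC);	/* was GFP_KERNEL: may sleep */
	if (!buffer)
		return -ENOMEM;

	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
	memcpy(alignbuffer, key, keylen);
	err = shash->setkey(tfm, alignbuffer, keylen);
	kzfree(buffer);
	return err;
}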

Signed-off-by: Jia-Ju Bai <baijiaju1990@163.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# d8c34b94 31-Dec-2016 Gideon Israel Dsouza <gidisrael@gmail.com>

crypto: Replaced gcc specific attributes with macros from compiler.h

Continuing from this commit: 52f5684c8e1e
("kernel: use macros from compiler.h instead of __attribute__((...))")

I submitted 4 total patches. They are part of a task I've taken up to
increase compiler portability in the kernel. I've cleaned up the
subsystems under /kernel /mm /block and /security, this patch targets
/crypto.

There is <linux/compiler.h> which provides macros for various gcc specific
constructs, e.g. __weak for __attribute__((weak)). I've replaced all
instances of gcc-specific attributes with the right macros for the crypto
subsystem.

I had to make one additional change to compiler-gcc.h for the case when
one wants to use __attribute__((aligned)) without specifying an alignment
factor. From the gcc docs, this will result in the largest alignment for
that data type on the target machine so I've named the macro
__aligned_largest. Please advise if another name is more appropriate.

Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 89654509 01-Feb-2016 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Remove crypto_hash interface

This patch removes all traces of the crypto_hash interface, now
that everyone has switched over to shash or ahash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 00420a65 26-Jan-2016 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Fix has_key setting

The has_key logic is wrong for shash algorithms as they always
have a setkey function. So we should instead be testing against
shash_no_setkey.

Fixes: a5596d633278 ("crypto: hash - Add crypto_ahash_has_setkey")
Cc: stable@vger.kernel.org
Reported-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Stephan Mueller <smueller@chronox.de>


# a5596d63 08-Jan-2016 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Add crypto_ahash_has_setkey

This patch adds a way for ahash users to determine whether a key
is required by a crypto_ahash transform.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# ac611680 19-Apr-2015 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Use crypto_alg_extsize helper

This patch replaces the crypto_shash_extsize function with
crypto_alg_extsize.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 66d8ea57 20-Nov-2012 Mark Charlebois <charlebm@gmail.com>

crypto: LLVMLinux: aligned-attribute.patch

__attribute__((aligned)) applies the default alignment for the largest scalar
type for the target ABI. gcc allows it to be applied inline to a defined type.
Clang only allows it to be applied to a type definition (PR11071).

Making it into 2 lines makes it more readable and works with both compilers.

Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>


# 9a5467bf 05-Feb-2013 Mathias Krause <minipli@googlemail.com>

crypto: user - fix info leaks in report API

Three errors resulting in kernel memory disclosure:

1/ The structures used for the netlink based crypto algorithm report API
are located on the stack. As snprintf() does not fill the remainder of
the buffer with null bytes, those stack bytes will be disclosed to users
of the API. Switch to strncpy() to fix this.

2/ crypto_report_one() does not initialize all field of struct
crypto_user_alg. Fix this to fix the heap info leak.

3/ For the module name we should copy only as many bytes as
module_name() returns -- not as much as the destination buffer could
hold. But the current code does not, and therefore copies random data
from past the end of the module name, as the module name is always
shorter than CRYPTO_MAX_ALG_NAME.

Also switch to use strncpy() to copy the algorithm's name and
driver_name. They are strings, after all.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 50fc3e8d 11-Jul-2012 Jussi Kivilinna <jussi.kivilinna@mbnet.fi>

crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once

Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash
crypto modules that register multiple algorithms.
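
Typical usage sketch (hypothetical module with a two-entry array):

static struct shash_alg my_shash_algs[2];	/* filled in elsewhere */

static int __init my_module_init(void)
{
	return crypto_register_shashes(my_shash_algs,
				       ARRAY_SIZE(my_shash_algs));
}

static void __exit my_module_exit(void)
{
	crypto_unregister_shashes(my_shash_algs, ARRAY_SIZE(my_shash_algs));
}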

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 6662df33 01-Apr-2012 David S. Miller <davem@davemloft.net>

crypto: Stop using NLA_PUT*().

These macros contain a hidden goto, and are thus extremely error
prone and make code hard to audit.

Signed-off-by: David S. Miller <davem@davemloft.net>


# f0dfc0b0 25-Nov-2011 Cong Wang <amwang@redhat.com>

crypto: remove the second argument of k[un]map_atomic()

Signed-off-by: Cong Wang <amwang@redhat.com>


# 3acc8473 03-Nov-2011 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algapi - Fix build problem with NET disabled

The report functions use NLA_PUT so we need to ensure that NET
is enabled.

Reported-by: Luis Henriques <henrix@camandro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# f4d663ce 26-Sep-2011 Steffen Klassert <steffen.klassert@secunet.com>

crypto: Add userspace report for shash type algorithms

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 90246e79 04-Nov-2010 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Fix async import on shash algorithm

The function shash_async_import did not initialise the descriptor
correctly prior to calling the underlying shash import function.

This patch adds the required initialisation.

Reported-by: Miloslav Trmac <mitr@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 18eb8ea6 18-May-2010 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Remove usage of CRYPTO_MINALIGN

The macro CRYPTO_MINALIGN is not meant to be used directly. This
patch replaces it with crypto_tfm_ctx_alignment.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 0044f3ed 23-Jul-2009 Steffen Klassert <steffen.klassert@secunet.com>

crypto: shash - Test for the algorithm's import function before exporting it

crypto_init_shash_ops_async() tests for setkey and not for import
before exporting the algorithm's import function to ahash.
This patch fixes this.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# f592682f 21-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Require all algorithms to support export/import

This patch provides a default export/import function for all
shash algorithms. It simply copies the descriptor context as
is done by sha1_generic.

This in essence means that all existing shash algorithms now
support export/import. This is something that will be depended
upon in implementations such as hmac. Therefore all new shash
and ahash implementations must support export/import.

For those that cannot obtain a partial result, padlock-sha's
fallback model should be used so that a partial result is always
available.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# cbc86b91 15-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Fix async finup handling of null digest

When shash_ahash_finup encounters a null request, we end up not
calling the underlying final function. This patch fixes that.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 66f6ce5e 14-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: ahash - Add unaligned handling and default operations

This patch exports the finup operation where available and adds
a default finup operation for ahash. The final, finup and digest
operations will now also deal with unaligned result pointers by
copying them. Finally, the export/import operations are now
exported too.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 0e2d3a12 14-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Fix alignment in unaligned operations

When we encounter an unaligned pointer we are supposed to copy
it to a temporary aligned location. However the temporary buffer
isn't aligned properly. This patch fixes that.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 8c32c516 14-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Zap unaligned buffers

Some unaligned buffers on the stack weren't zapped properly which
may cause secret data to be leaked. This patch fixes them by doing
a zero memset.

It is also possible for us to place random kernel stack contents
in the digest buffer if a digest operation fails. This is fixed
by only copying if the operation succeeded.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 500b3e3c 14-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: ahash - Remove old_ahash_alg

Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 88056ec3 13-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: ahash - Convert to new style algorithms

This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2ca33da1 13-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Remove frontend argument from extsize/init_tfm

As the extsize and init_tfm functions belong to the frontend, the
frontend argument is superfluous.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 7eddf95e 12-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Export async functions

This patch exports the async functions so that they can be reused
by cryptd when it switches over to using shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 113adefc 13-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Make descsize a run-time attribute

This patch changes descsize to a run-time attribute so that
implementations can change it in their init functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 57cfe44b 11-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Move null setkey check to registration time

This patch moves the run-time null setkey check to shash_prepare_alg
just like we did for finup/digest.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 8267adab 09-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Move finup/digest null checks to registration time

This patch moves the run-time null finup/digest checks to the
shash_prepare_alg function which is run at registration time.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 99d27e1c 09-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Export/import hash state only

This patch replaces the full descriptor export with an export of
the partial hash state. This allows the use of a consistent export
format across all implementations of a given algorithm.

This is useful because a number of cases require the use of the
partial hash state, e.g., PadLock can use the SHA1 hash state
to get around the fact that it can only hash contiguous data
chunks.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# deee2289 08-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Propagate reinit return value

This patch fixes crypto_shash_import to propagate the value returned
by reinit.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# f88ad8de 08-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Use finup in default digest

This patch simplifies the default digest function by using finup.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 619a6ebd 08-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Add shash_register_instance

This patch adds shash_register_instance so that shash instances
can be registered without bypassing the shash checks applied to
normal algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 7d6f5640 08-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Add shash_attr_alg2 helper

This patch adds the helper shash_attr_alg2 which locates a shash
algorithm based on the information in the given attribute.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 94296999 08-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Add spawn support

This patch adds the functions needed to create and use shash
spawns, i.e., to use shash algorithms in a template.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 2e4fddd8 07-Jul-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Add shash_instance

This patch adds shash_instance and the associated alloc/free
functions. This is meant to be an instance with a shash
algorithm under it. Note that the instance itself doesn't have
to be shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# f4f68993 26-Mar-2009 Yehuda Sadeh <yehuda@hq.newdream.net>

crypto: shash - Fix unaligned calculation with short length

When the total length is shorter than the calculated number of unaligned
bytes, the call to shash->update breaks. For example, calling crc32c on an
unaligned buffer with a length of 1 can result in a system crash.

Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 3f683d61 18-Feb-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Fix crypto_alloc_tfm/crypto_create_tfm return convention

This is based on a report and patch by Geert Uytterhoeven.

The functions crypto_alloc_tfm and crypto_create_tfm return a
pointer that needs to be adjusted by the caller when successful
and otherwise an error value. This means that the caller has
to check for the error and only perform the adjustment if the
pointer returned is valid.

Since all callers want to make the adjustment and we know how
to adjust it ourselves, it's much easier to just return the adjusted
pointer directly.

The only caveat is that we have to return a void * instead of
struct crypto_tfm *. However, this isn't that bad because both
of these functions are for internal use only (by types code like
shash.c, not even algorithms code).

This patch also moves crypto_alloc_tfm into crypto/internal.h
(crypto_create_tfm is already there) to reflect this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 1693531e 13-Jan-2009 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Remove superfluous check in init_tfm

We're currently checking the frontend type in init_tfm. This is
completely pointless because the fact that we're called at all
means that the frontend is ours so the type must match as well.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 4abfd73e 04-Feb-2009 Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>

crypto: shash - Fix module refcount

Module reference counting for shash is incorrect: when
a new shash transformation is created, the refcount is not
increased as it should be.

Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 3751f402 07-Nov-2008 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Make setkey optional

Since most cryptographic hash algorithms have no keys, this patch
makes the setkey function optional for ahash and shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 5f7082ed 31-Aug-2008 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Export shash through hash

This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# dec8b786 02-Nov-2008 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Add import/export interface

It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.

The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
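
A usage sketch of the HMAC-style case (the helper is illustrative; the
caller exported 'base' from a descriptor that was already keyed):

#include <crypto/hash.h>

/* Sketch: finish a digest starting from a previously exported base
 * state, so the expensive keying step runs only once. */
static int digest_from_base(struct shash_desc *desc, const void *base,
			    const u8 *data, unsigned int len, u8 *out)
{
	int err;

	err = crypto_shash_import(desc, base);
	if (err)
		return err;

	return crypto_shash_finup(desc, data, len, out);
}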

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 3b2f6df0 31-Aug-2008 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Export shash through ahash

This patch allows shash algorithms to be used through the ahash
interface. This is required before we can convert digest algorithms
over to shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>


# 7b5a080b 30-Aug-2008 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hash - Add shash interface

The shash interface replaces the current synchronous hash interface.
It improves over hash in two ways. Firstly shash is reentrant,
meaning that the same tfm may be used by two threads simultaneously
as all hashing state is stored in a local descriptor.

The other enhancement is that shash no longer takes scatter list
entries. This is because shash is specifically designed for
synchronous algorithms and as such scatter lists are unnecessary.

All existing hash users will be converted to shash once the
algorithms have been completely converted.

There is also a new finup function that combines update with final.
This will be extended to ahash once the algorithm conversion is
done.

This is also the first time that an algorithm type has its own
registration function. Existing algorithm types will be converted
to this way in due course.
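
In today's terms, basic usage of the interface looks like this (sketch;
SHASH_DESC_ON_STACK is a later convenience macro for sizing the
descriptor):

#include <crypto/hash.h>

/* Sketch: one-shot SHA-256 over a buffer via the shash interface. */
static int sha256_buffer(const u8 *data, unsigned int len, u8 *out)
{
	struct crypto_shash *tfm;
	int err;

	tfm = crypto_alloc_shash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	{
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		err = crypto_shash_init(desc);
		if (!err)
			err = crypto_shash_update(desc, data, len);
		if (!err)
			err = crypto_shash_final(desc, out);
		shash_desc_zero(desc);
	}

	crypto_free_shash(tfm);
	return err;
}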

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>