History log of /linux-master/crypto/
Revision Date Author Comments
44a7d444 30-Aug-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"Algorithms:

- Add AES-NI/AVX/x86_64 implementation of SM4.

Drivers:

- Add Arm SMCCC TRNG based driver"

[ And obviously a lot of random fixes and updates - Linus]

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (84 commits)
crypto: sha512 - remove imaginary and mystifying clearing of variables
crypto: aesni - xts_crypt() return if walk.nbytes is 0
padata: Remove repeated verbose license text
crypto: ccp - Add support for new CCP/PSP device ID
crypto: x86/sm4 - add AES-NI/AVX2/x86_64 implementation
crypto: x86/sm4 - export reusable AESNI/AVX functions
crypto: rmd320 - remove rmd320 in Makefile
crypto: skcipher - in_irq() cleanup
crypto: hisilicon - check _PS0 and _PR0 method
crypto: hisilicon - change parameter passing of debugfs function
crypto: hisilicon - support runtime PM for accelerator device
crypto: hisilicon - add runtime PM ops
crypto: hisilicon - using 'debugfs_create_file' instead of 'debugfs_create_regset32'
crypto: tcrypt - add GCM/CCM mode test for SM4 algorithm
crypto: testmgr - Add GCM/CCM mode test of SM4 algorithm
crypto: tcrypt - Fix missing return value check
crypto: hisilicon/sec - modify the hardware endian configuration
crypto: hisilicon/sec - fix the abnormal exiting process
crypto: qat - store vf.compatible flag
crypto: qat - do not export adf_iov_putmsg()
...


6ae51ffe 21-Aug-2021 Lukas Bulwahn <lukas.bulwahn@gmail.com>

crypto: sha512 - remove imaginary and mystifying clearing of variables

The function sha512_transform() assigns all local variables to 0 before
returning to its caller with the intent to erase sensitive data.

However, make clang-analyzer warns that all these assignments are dead
stores, and as commit 7a4295f6c9d5 ("crypto: lib/sha256 - Don't clear
temporary variables") already points out for sha256_transform():

The assignments to clear a through h and t1/t2 are optimized out by the
compiler because they are unused after the assignments.

Clearing individual scalar variables is unlikely to be useful, as they
may have been assigned to registers, and even if stack spilling was
required, there may be compiler-generated temporaries that are
impossible to clear in any case.

The same applies here. Drop the meaningless clearing of local variables
so that the code no longer suggests that sensitive data is erased,
which simply does not happen.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
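
For illustration, a minimal sketch of the dead-store pattern described above (simplified names, not the actual sha512_transform() body):

    #include <linux/types.h>

    static void transform_sketch(u64 *state, const u64 *w)
    {
            u64 a = state[0], b = state[1], t1, t2;

            /* ... rounds that use a, b, t1, t2 ... */
            t1 = a + w[0];
            t2 = b + w[1];
            state[0] += t1;
            state[1] += t2;

            /* Dead stores: nothing reads these variables afterwards, so
             * the compiler is free to drop the "clearing" entirely. */
            a = b = t1 = t2 = 0;
    }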

5b2efa2b 17-Aug-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: x86/sm4 - add AES-NI/AVX2/x86_64 implementation

Like the AESNI/AVX implementation, this patch adds an accelerated
AESNI/AVX2 implementation of SM4. By reusing the AESNI/AVX mode-related
code, the amount of new code is greatly reduced. The benchmark data show
that at a block size of 1024 bytes, the AVX2 implementation is about 70%
faster than the AVX one, and about 7.7 times faster than the pure
software implementation of sm4-generic.

The main algorithm implementation comes from SM4 AES-NI work by
libgcrypt and Markku-Juhani O. Saarinen at:
https://github.com/mjosaarinen/sm4ni

This optimization supports four modes of SM4: ECB, CBC, CFB, and CTR.
Since CBC and CFB do not support parallel encryption of multiple blocks,
the optimization effect there is limited.

Benchmark on an Intel i5-6200U @ 2.30GHz, comparing three
implementations: the pure software sm4-generic, aesni/avx acceleration,
and aesni/avx2 acceleration. The data comes from tcrypt modes 218 and
518; the columns are block sizes in bytes and the unit is Mb/s:

block-size | 16 64 128 256 1024 1420 4096
sm4-generic
ECB enc | 60.94 70.41 72.27 73.02 73.87 73.58 73.59
ECB dec | 61.87 70.53 72.15 73.09 73.89 73.92 73.86
CBC enc | 56.71 66.31 68.05 69.84 70.02 70.12 70.24
CBC dec | 54.54 65.91 68.22 69.51 70.63 70.79 70.82
CFB enc | 57.21 67.24 69.10 70.25 70.73 70.52 71.42
CFB dec | 57.22 64.74 66.31 67.24 67.40 67.64 67.58
CTR enc | 59.47 68.64 69.91 71.02 71.86 71.61 71.95
CTR dec | 59.94 68.77 69.95 71.00 71.84 71.55 71.95
sm4-aesni-avx
ECB enc | 44.95 177.35 292.06 316.98 339.48 322.27 330.59
ECB dec | 45.28 178.66 292.31 317.52 339.59 322.52 331.16
CBC enc | 57.75 67.68 69.72 70.60 71.48 71.63 71.74
CBC dec | 44.32 176.83 284.32 307.24 328.61 312.61 325.82
CFB enc | 57.81 67.64 69.63 70.55 71.40 71.35 71.70
CFB dec | 43.14 167.78 282.03 307.20 328.35 318.24 325.95
CTR enc | 42.35 163.32 279.11 302.93 320.86 310.56 317.93
CTR dec | 42.39 162.81 278.49 302.37 321.11 310.33 318.37
sm4-aesni-avx2
ECB enc | 45.19 177.41 292.42 316.12 339.90 322.53 330.54
ECB dec | 44.83 178.90 291.45 317.31 339.85 322.55 331.07
CBC enc | 57.66 67.62 69.73 70.55 71.58 71.66 71.77
CBC dec | 44.34 176.86 286.10 501.68 559.58 483.87 527.46
CFB enc | 57.43 67.60 69.61 70.52 71.43 71.28 71.65
CFB dec | 43.12 167.75 268.09 499.33 558.35 490.36 524.73
CTR enc | 42.42 163.39 256.17 493.95 552.45 481.58 517.19
CTR dec | 42.49 163.11 256.36 493.34 552.62 481.49 516.83

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ff1469a2 16-Aug-2021 Lukas Bulwahn <lukas.bulwahn@gmail.com>

crypto: rmd320 - remove rmd320 in Makefile

Commit 93f64202926f ("crypto: rmd320 - remove RIPE-MD 320 hash algorithm")
removed the Kconfig entry and the code, but missed adjusting the Makefile.

Hence, ./scripts/checkkconfigsymbols.py warns:

CRYPTO_RMD320
Referencing files: crypto/Makefile

Remove this leftover reference to complete the code removal.

Fixes: 93f64202926f ("crypto: rmd320 - remove RIPE-MD 320 hash algorithm")
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a4aed36e 29-Jun-2021 Stefan Berger <stefanb@linux.ibm.com>

certs: Add support for using elliptic curve keys for signing modules

Add support for using elliptic curve keys for signing modules. A NIST
P384 (secp384r1) key is used if the user chooses an elliptic curve key,
and ECDSA support is built into the kernel.

Note: A developer choosing an ECDSA key for signing modules should still
delete the signing key (rm certs/signing_key.*) when building an older
version of a kernel that only supports RSA keys. Unless kbuild
automatically detects and generates a new kernel module key,
ECDSA-signed kernel modules will fail signature verification.

Cc: David Howells <dhowells@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>

abfc7fad 13-Aug-2021 Changbin Du <changbin.du@intel.com>

crypto: skcipher - in_irq() cleanup

Replace the obsolete and ambiguous macro in_irq() with the new
macro in_hardirq().

Signed-off-by: Changbin Du <changbin.du@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

357a753f 13-Aug-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: tcrypt - add GCM/CCM mode test for SM4 algorithm

tcrypt now supports GCM/CCM mode, CMAC, CBCMAC, and speed tests for
the SM4 algorithm.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

68039d60 13-Aug-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: testmgr - Add GCM/CCM mode test of SM4 algorithm

The GCM/CCM mode of the SM4 algorithm is defined in the RFC 8998
specification, and the test case data also comes from RFC 8998.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7b3d5268 13-Aug-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: tcrypt - Fix missing return value check

In several places the return values of crypto_aead_setkey() and
crypto_aead_setauthsize() were not checked. Add the missing checks.

At the same time, move the crypto_aead_setauthsize() call out of the
loop, since it only needs to be called once after the transform is
loaded.

Fixes: 53f52d7aecb4 ("crypto: tcrypt - Added speed tests for AEAD crypto alogrithms in tcrypt test suite")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9491923e 08-Aug-2021 Randy Dunlap <rdunlap@infradead.org>

crypto: wp512 - correct a non-kernel-doc comment

Don't use "/**" to begin a comment that is not kernel-doc notation.

crypto/wp512.c:779: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
* The core Whirlpool transform.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0469dede 21-Jul-2021 Mian Yousaf Kaukab <ykaukab@suse.de>

crypto: ecc - handle unaligned input buffer in ecc_swap_digits

ecdsa_set_pub_key() creates a u64 pointer at a 1-byte offset into the
key, resulting in an unaligned u64 pointer. This pointer is passed to
ecc_swap_digits(), which assumes natural alignment.

This causes a kernel crash on an armv7 platform:
[ 0.409022] Unhandled fault: alignment exception (0x001) at 0xc2a0a6a9
...
[ 0.416982] PC is at ecdsa_set_pub_key+0xdc/0x120
...
[ 0.491492] Backtrace:
[ 0.492059] [<c07c266c>] (ecdsa_set_pub_key) from [<c07c75d4>] (test_akcipher_one+0xf4/0x6c0)

Handle unaligned input buffers in ecc_swap_digits() by replacing
be64_to_cpu() with get_unaligned_be64(). Change the type of the in
pointer to void to reflect that it does not need to be aligned.

Fixes: 4e6602916bc6 ("crypto: ecdsa - Add support for ECDSA signature verification")
Reported-by: Guillaume Gardet <guillaume.gardet@arm.com>
Suggested-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
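
A sketch of the resulting helper under the change described above (close to, but not necessarily identical to, the final kernel code):

    #include <linux/types.h>
    #include <asm/unaligned.h>

    static void ecc_swap_digits(const void *in, u64 *out, unsigned int ndigits)
    {
            const __be64 *src = in;
            int i;

            /* get_unaligned_be64() tolerates any byte alignment, so the
             * 1-byte-offset pointer from ecdsa_set_pub_key() is safe here. */
            for (i = 0; i < ndigits; i++)
                    out[i] = get_unaligned_be64(&src[ndigits - 1 - i]);
    }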

a7fc80bb 19-Jul-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: tcrypt - add the asynchronous speed test for SM4

tcrypt now supports testing SM4 cipher algorithms that use AVX
instruction set acceleration. The accelerated SM4 implementation
supports parallel encryption and decryption of up to 8 blocks, which is
128 bytes, so the 128-byte block size is added to block_sizes.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a7ee22ee 19-Jul-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: x86/sm4 - add AES-NI/AVX/x86_64 implementation

This patch adds an AES-NI/AVX/x86_64 assembler implementation of the
SM4 block cipher. Through two affine transforms, the AES S-Box can be
used to simulate the SM4 S-Box, so that the AES instructions provide
the acceleration.

The main algorithm implementation comes from SM4 AES-NI work by
libgcrypt and Markku-Juhani O. Saarinen at:
https://github.com/mjosaarinen/sm4ni

This optimization supports four modes of SM4: ECB, CBC, CFB, and CTR.
Since CBC and CFB do not support parallel encryption of multiple blocks,
the optimization effect there is limited.

Benchmark on an Intel Xeon Cascade Lake; the data comes from tcrypt
modes 218 and 518. The columns are block sizes in bytes and the unit is
Mb/s:

block-size | 16 64 128 256 1024 1420 4096
sm4-generic
ECB enc | 40.99 46.50 48.05 48.41 49.20 49.25 49.28
ECB dec | 41.07 46.99 48.15 48.67 49.20 49.25 49.29
CBC enc | 37.71 45.28 46.77 47.60 48.32 48.37 48.40
CBC dec | 36.48 44.82 46.43 47.45 48.23 48.30 48.36
CFB enc | 37.94 44.84 46.12 46.94 47.57 47.46 47.68
CFB dec | 37.50 42.84 43.74 44.37 44.85 44.80 44.96
CTR enc | 39.20 45.63 46.75 47.49 48.09 47.85 48.08
CTR dec | 39.64 45.70 46.72 47.47 47.98 47.88 48.06
sm4-aesni-avx
ECB enc | 33.75 134.47 221.64 243.43 264.05 251.58 258.13
ECB dec | 34.02 134.92 223.11 245.14 264.12 251.04 258.33
CBC enc | 38.85 46.18 47.67 48.34 49.00 48.96 49.14
CBC dec | 33.54 131.29 223.88 245.27 265.50 252.41 263.78
CFB enc | 38.70 46.10 47.58 48.29 49.01 48.94 49.19
CFB dec | 32.79 128.40 223.23 244.87 265.77 253.31 262.79
CTR enc | 32.58 122.23 220.29 241.16 259.57 248.32 256.69
CTR dec | 32.81 122.47 218.99 241.54 258.42 248.58 256.61

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c59de48e 19-Jul-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic

The SM4 library is abstracted from the sm4-generic algorithm, so sm4-ce
can depend on the SM4 library instead of sm4-generic, and some functions
in sm4-generic no longer need to be exported.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2b31277a 19-Jul-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: sm4 - create SM4 library based on sm4 generic code

Take the existing small footprint and mostly time invariant C code
and turn it into an SM4 library that can be used for non-performance
critical, casual use of SM4, and as a fallback for, e.g., SIMD code
that needs a secondary path that can be taken in contexts where the
SIMD unit is off limits.

Secondly, some of the code has been optimized: small loops are unrolled,
unnecessary memory shifts are removed, and the sbox, fk and ck arrays as
well as the basic encryption and decryption functions are exported.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5261cdf4 29-Jun-2021 Stephan Mueller <smueller@chronox.de>

crypto: drbg - select SHA512

With the switch to HMAC(SHA-512) as the default DRBG type, the
configuration must now also select SHA-512.

Fixes: 9b7b94683a9b ("crypto: DRBG - switch to HMAC SHA512 DRBG as default DRBG")
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Stephan Mueller <smueller@chronox.com>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d8dc121e 09-Jul-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:

- Regression fix in drbg due to missing self-test for new default
algorithm

- Add ratelimit on user-triggerable message in qat

- Fix build failure due to missing dependency in sl3516

- Remove obsolete PageSlab checks

- Fix bogus hardware register writes on Kunpeng920 in hisilicon/sec

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: hisilicon/sec - fix the process of disabling sva prefetching
crypto: sl3516 - Add dependency on ARCH_GEMINI
crypto: sl3516 - Typo s/Stormlink/Storlink/
crypto: drbg - self test for HMAC(SHA-512)
crypto: omap - Drop obsolete PageSlab check
crypto: scatterwalk - Remove obsolete PageSlab check
crypto: qat - ratelimit invalid ioctl message and print the invalid cmd


6159c49e 28-Jun-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"Algorithms:

- Fix rmmod crash with x86/curve25519

- Add ECDH NIST P384

- Generate assembly files at build-time with perl scripts on arm

- Switch to HMAC SHA512 DRBG as default DRBG

Drivers:

- Add sl3516 crypto engine

- Add ECDH NIST P384 support in hisilicon/hpre

- Add {ofb,cfb,ctr} over {aes,sm4} in hisilicon/sec

- Add {ccm,gcm} over {aes,sm4} in hisilicon/sec

- Enable omap hwrng driver for TI K3 family

- Add support for AEAD algorithms in qce"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (142 commits)
crypto: sl3516 - depends on HAS_IOMEM
crypto: hisilicon/qm - implement for querying hardware tasks status.
crypto: sl3516 - Fix build warning without CONFIG_PM
MAINTAINERS: update caam crypto driver maintainers list
crypto: nx - Fix numerous sparse byte-order warnings
crypto: nx - Fix RCU warning in nx842_OF_upd_status
crypto: api - Move crypto attr definitions out of crypto.h
crypto: nx - Fix memcpy() over-reading in nonce
crypto: hisilicon/sec - Fix spelling mistake "fallbcak" -> "fallback"
crypto: sa2ul - Remove unused auth_len variable
crypto: sl3516 - fix duplicated inclusion
crypto: hisilicon/zip - adds the max shaper type rate
crypto: hisilicon/hpre - adds the max shaper type rate
crypto: hisilicon/sec - adds the max shaper type rate
crypto: hisilicon/qm - supports to inquiry each function's QoS
crypto: hisilicon/qm - add pf ping single vf function
crypto: hisilicon/qm - merges the work initialization process into a single function
crypto: hisilicon/qm - add the "alg_qos" file node
crypto: hisilicon/qm - supports writing QoS int the host
crypto: api - remove CRYPTOA_U32 and related functions
...


8833272d 24-Jun-2021 Stephan Müller <smueller@chronox.de>

crypto: drbg - self test for HMAC(SHA-512)

Since the HMAC(SHA-512) DRBG is now the default DRBG, a self test
should be provided for it.

The test vector is obtained from a successful NIST ACVP test run.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5163ab50 17-Jun-2021 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Move crypto attr definitions out of crypto.h

The definitions for crypto_attr-related types and enums are not
needed by most Crypto API users. This patch moves them out of
crypto.h and into algapi.h/internal.h depending on the extent of
their use.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

10ff9976 10-Jun-2021 Liu Shixin <liushixin2@huawei.com>

crypto: api - remove CRYPTOA_U32 and related functions

As advised by Eric and Herbert, the type CRYPTOA_U32 has been unused
for over a decade, so remove the code related to CRYPTOA_U32.

After removing CRYPTOA_U32, the type of the variable attrs can be
changed from union to struct.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

22ca9f4a 10-Jun-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: shash - avoid comparing pointers to exported functions under CFI

crypto_shash_alg_has_setkey() is implemented by testing whether the
.setkey() member of a struct shash_alg points to the default version,
called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static
inline, this requires shash_no_setkey() to be exported to modules.

Unfortunately, when building with CFI, function pointers are routed
via CFI stubs which are private to each module (or to the kernel proper)
and so this function pointer comparison may fail spuriously.

Let's fix this by turning crypto_shash_alg_has_setkey() into an out of
line function.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5cd259ca 04-Jun-2021 Hongbo Li <herberthbli@tencent.com>

crypto: sm2 - fix a memory leak in sm2

The SM2 module allocates ec->Q in sm2_set_pub_key(). When the algorithm
test in test_akcipher_one() sets the public key for every test vector,
ec->Q is not freed, which causes a memory leak.

This patch allocates ec->Q in sm2_ec_ctx_init() instead.

Fixes: ea7ecb66440b ("crypto: sm2 - introduce OSCCA SM2 asymmetric cipher algorithm")
Signed-off-by: Hongbo Li <herberthbli@tencent.com>
Reviewed-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
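
A rough sketch of the idea, assuming the MPI point helpers the module already uses (treat the exact calls as illustrative, not the full sm2_ec_ctx_init()):

    #include <linux/mpi.h>

    static int sm2_ec_ctx_init_sketch(struct mpi_ec_ctx *ec)
    {
            /* Allocate the public-key point once per transform, so repeated
             * sm2_set_pub_key() calls only fill it in instead of leaking a
             * fresh allocation each time. */
            ec->Q = mpi_point_new(0);
            if (!ec->Q)
                    return -ENOMEM;

            return 0;
    }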

9be148e4 28-May-2021 Xiao Ni <xni@redhat.com>

async_xor: check src_offs is not NULL before updating it

When PAGE_SIZE is greater than 4kB, multiple stripes may share the same
page. Thus, src_offs was added to async_xor_offs() as an array of
offsets. However, async_xor() passes a NULL src_offs to
async_xor_offs(); in that case src_offs must not be updated. Add a check
before the update.

Fixes: ceaf2966ab08 ("async_xor: increase src_offs when dropping destination page")
Cc: stable@vger.kernel.org # v5.10+
Reported-by: Oleksandr Shchirskyi <oleksandr.shchirskyi@linux.intel.com>
Tested-by: Oleksandr Shchirskyi <oleksandr.shchirskyi@intel.com>
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <song@kernel.org>
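
A minimal sketch of the added guard (context simplified, not the full async_xor_offs() body):

    /* When the destination is dropped from the source list, only advance
     * the offset array if the caller actually provided one; async_xor()
     * legitimately passes src_offs == NULL. */
    if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
            src_cnt--;
            src_list++;
            if (src_offs)
                    src_offs++;
    }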

7551a074 25-May-2021 Wu Bo <wubo40@huawei.com>

crypto: af_alg - use DIV_ROUND_UP helper macro for calculations

Replace open coded divisor calculations with the DIV_ROUND_UP kernel
macro for better readability.

Signed-off-by: Wu Bo <wubo40@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
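
For illustration, the kind of substitution this makes (variable names hypothetical):

    #include <linux/kernel.h>

    /* Open-coded round-up division ... */
    unsigned int npages = (off + len + PAGE_SIZE - 1) / PAGE_SIZE;

    /* ... reads more clearly with the helper macro: */
    unsigned int npages2 = DIV_ROUND_UP(off + len, PAGE_SIZE);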

8e568fc2 21-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: ecdh - add test suite for NIST P384

Add test vector parameters for NIST P384 and add a NIST P384 entry to
the vector of tests.

The test vector parameters come from:
https://datatracker.ietf.org/doc/html/rfc5903#section-3.1

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

81541325 21-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: ecdh - register NIST P384 tfm

Add ecdh_nist_p384_init_tfm and register/unregister the P384 tfm.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8fd28fa5 21-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: ecdh - fix 'ecdh_init'

NIST P192 is not unregistered if registering NIST P256 fails; the error
path actually needs to unregister the algorithms that were already
registered.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
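
A sketch of the shape of the fix (symbol names illustrative; the real registration objects live in crypto/ecdh.c):

    static bool ecdh_nist_p192_registered;

    static int __init ecdh_init(void)
    {
            int ret;

            /* NIST P192 may legitimately fail to register (e.g. in FIPS
             * mode), so only remember whether it succeeded. */
            ret = crypto_register_kpp(&ecdh_nist_p192);
            ecdh_nist_p192_registered = (ret == 0);

            ret = crypto_register_kpp(&ecdh_nist_p256);
            if (ret)
                    goto nist_p256_error;

            return 0;

    nist_p256_error:
            if (ecdh_nist_p192_registered)
                    crypto_unregister_kpp(&ecdh_nist_p192);
            return ret;
    }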

6889fc21 21-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: ecdh - fix ecdh-nist-p192's entry in testmgr

Add a comment that p192 will fail to register in FIPS mode.

Fix ecdh-nist-p192's entry in testmgr by removing the ifdefs
and not setting fips_allowed.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9b7b9468 20-May-2021 Stephan Müller <smueller@chronox.de>

crypto: DRBG - switch to HMAC SHA512 DRBG as default DRBG

The default DRBG is the one that has the highest priority. The priority
is defined based on the order of the list drbg_cores[] where the highest
priority is given to the last entry by drbg_fill_array.

With this patch the default DRBG is switched from HMAC SHA256 to HMAC
SHA512 to support compliance with SP800-90B and SP800-90C (current
draft).

The user of the crypto API is completely unaffected by the change.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Acked-by: simo Sorce <simo@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

aa22cd7f 19-May-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - enable tests for xxhash and blake2

Fill some of the recently freed up slots in tcrypt with xxhash64 and
blake2b/blake2s, so we can easily benchmark their kernel implementations
from user space.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

30836548 18-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: khazad,wp512 - remove leading spaces before tabs

There are a few leading spaces before tabs; remove them by running the
following command:

$ find . -name '*.c' | xargs sed -r -i 's/^[ ]+\t/\t/'

At the same time, fix two warnings reported by checkpatch.pl:
WARNING: suspect code indent for conditional statements (16, 16)
WARNING: braces {} are not necessary for single statement blocks

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c5ae16f5 10-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: ecdh - extend 'cra_driver_name' with curve name

Currently, 'cra_driver_name' cannot be used to specify an ecdh algorithm
with a specific curve, so extend it with the curve name.

Although a specific curve can also be specified via 'cra_name', the
generic ecdh driver cannot be selected that way once a vendor hardware
accelerator has been registered.

Fixes: 6763f5ea2d9a ("crypto: ecdh - move curve_id of ECDH from ...")
Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2d016672 10-May-2021 Hui Tang <tanghui20@huawei.com>

crypto: testmgr - fix initialization of 'secret_size'

The actual data length of the 'secret' is not equal to 'secret_size'.

Since the 'curve_id' has been removed from the 'secret', the length of
the 'curve_id' should be subtracted from 'secret_size'.

Fixes: 6763f5ea2d9a ("crypto: ecdh - move curve_id of ECDH from ...")
Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fc058606 28-Apr-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'for-5.13/drivers-2021-04-27' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:

- MD changes via Song:
- raid5 POWER fix
- raid1 failure fix
- UAF fix for md cluster
- mddev_find_or_alloc() clean up
- Fix NULL pointer deref with external bitmap
- Performance improvement for raid10 discard requests
- Fix missing information of /proc/mdstat

- rsxx const qualifier removal (Arnd)

- Expose allocated brd pages (Calvin)

- rnbd via Gioh Kim:
- Change maintainer
- Change domain address of maintainers' email
- Add polling IO mode and document update
- Fix memory leak and some bug detected by static code analysis
tools
- Code refactoring

- Series of floppy cleanups/fixes (Denis)

- s390 dasd fixes (Julian)

- kerneldoc fixes (Lee)

- null_blk double free (Lv)

- null_blk virtual boundary addition (Max)

- Remove xsysace driver (Michal)

- umem driver removal (Davidlohr)

- ataflop fixes (Dan)

- Revalidate disk removal (Christoph)

- Bounce buffer cleanups (Christoph)

- Mark lightnvm as deprecated (Christoph)

- mtip32xx init cleanups (Shixin)

- Various fixes (Tian, Gustavo, Coly, Yang, Zhang, Zhiqiang)

* tag 'for-5.13/drivers-2021-04-27' of git://git.kernel.dk/linux-block: (143 commits)
async_xor: increase src_offs when dropping destination page
drivers/block/null_blk/main: Fix a double free in null_init.
md/raid1: properly indicate failure when ending a failed write request
md-cluster: fix use-after-free issue when removing rdev
nvme: introduce generic per-namespace chardev
nvme: cleanup nvme_configure_apst
nvme: do not try to reconfigure APST when the controller is not live
nvme: add 'kato' sysfs attribute
nvme: sanitize KATO setting
nvmet: avoid queuing keep-alive timer if it is disabled
brd: expose number of allocated pages in debugfs
ataflop: fix off by one in ataflop_probe()
ataflop: potential out of bounds in do_format()
drbd: Fix fall-through warnings for Clang
block/rnbd: Use strscpy instead of strlcpy
block/rnbd-clt-sysfs: Remove copy buffer overlap in rnbd_clt_get_path_name
block/rnbd-clt: Remove max_segment_size
block/rnbd-clt: Generate kobject_uevent when the rnbd device state changes
block/rnbd-srv: Remove unused arguments of rnbd_srv_rdma_ev
Documentation/ABI/rnbd-clt: Add description for nr_poll_queues
...


ceaf2966 25-Apr-2021 Xiao Ni <xni@redhat.com>

async_xor: increase src_offs when dropping destination page

We now support sharing one page when PAGE_SIZE is not equal to the
stripe size. To support this, the xor value must be calculated with a
different offset for each r5dev, and an offset array is used to record
those offsets.

In RMW mode, the parity page is used as a source page:
ops_run_prexor5() sets ASYNC_TX_XOR_DROP_DST before calculating the xor
value, so src_list and src_offs need to be set up together. Currently
only src_list is provided, so the calculated xor value is wrong, which
can cause data corruption.

I can reproduce this problem 100% on a POWER8 machine. The steps are:

mdadm -CR /dev/md0 -l5 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --size=3G
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/test
mount: /mnt/test: mount(2) system call failed: Structure needs cleaning.

Fixes: 29bcff787a25 ("md/raid5: add new xor function to support different page offset")
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <song@kernel.org>

a4a78bc8 26-Apr-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:

- crypto_destroy_tfm now ignores errors as well as NULL pointers

Algorithms:

- Add explicit curve IDs in ECDH algorithm names

- Add NIST P384 curve parameters

- Add ECDSA

Drivers:

- Add support for Green Sardine in ccp

- Add ecdh/curve25519 to hisilicon/hpre

- Add support for AM64 in sa2ul"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (184 commits)
fsverity: relax build time dependency on CRYPTO_SHA256
fscrypt: relax Kconfig dependencies for crypto API algorithms
crypto: camellia - drop duplicate "depends on CRYPTO"
crypto: s5p-sss - consistently use local 'dev' variable in probe()
crypto: s5p-sss - remove unneeded local variable initialization
crypto: s5p-sss - simplify getting of_device_id match data
ccp: ccp - add support for Green Sardine
crypto: ccp - Make ccp_dev_suspend and ccp_dev_resume void functions
crypto: octeontx2 - add support for OcteonTX2 98xx CPT block.
crypto: chelsio/chcr - Remove useless MODULE_VERSION
crypto: ux500/cryp - Remove duplicate argument
crypto: chelsio - remove unused function
crypto: sa2ul - Add support for AM64
crypto: sa2ul - Support for per channel coherency
dt-bindings: crypto: ti,sa2ul: Add new compatible for AM64
crypto: hisilicon - enable new error types for QM
crypto: hisilicon - add new error type for SEC
crypto: hisilicon - support new error types for ZIP
crypto: hisilicon - dynamic configuration 'err_info'
crypto: doc - fix kernel-doc notation in chacha.c and af_alg.c
...


d17d9227 17-Apr-2021 Randy Dunlap <rdunlap@infradead.org>

crypto: camellia - drop duplicate "depends on CRYPTO"

All 5 CAMELLIA crypto driver Kconfig symbols have a duplicate
"depends on CRYPTO" line but they are inside an
"if CRYPTO"/"endif # if CRYPTO" block, so drop the duplicate "depends"
lines.

These 5 symbols still depend on CRYPTO.

Fixes: 584fffc8b196 ("[CRYPTO] kconfig: Ordering cleanup")
Fixes: 0b95ec56ae19 ("crypto: camellia - add assembler implementation for x86_64")
Fixes: d9b1d2e7e10d ("crypto: camellia - add AES-NI/AVX/x86_64 assembler implementation of camellia cipher")
Fixes: f3f935a76aa0 ("crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher")
Fixes: c5aac2df6577 ("sparc64: Add DES driver making use of the new des opcodes.")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Sebastian Siewior <sebastian@breakpoint.cc>
Cc: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b2a4411a 11-Apr-2021 Randy Dunlap <rdunlap@infradead.org>

crypto: doc - fix kernel-doc notation in chacha.c and af_alg.c

Fix function name in chacha.c kernel-doc comment to remove a warning.

Convert af_alg.c to kernel-doc notation to eliminate many kernel-doc
warnings.

../lib/crypto/chacha.c:77: warning: expecting prototype for chacha_block(). Prototype was for chacha_block_generic() instead
chacha.c:104: warning: Excess function parameter 'out' description in 'hchacha_block_generic'

af_alg.c:498: warning: Function parameter or member 'sk' not described in 'af_alg_alloc_tsgl'
../crypto/af_alg.c:539: warning: expecting prototype for aead_count_tsgl(). Prototype was for af_alg_count_tsgl() instead
../crypto/af_alg.c:596: warning: expecting prototype for aead_pull_tsgl(). Prototype was for af_alg_pull_tsgl() instead
af_alg.c:663: warning: Function parameter or member 'areq' not described in 'af_alg_free_areq_sgls'
af_alg.c:700: warning: Function parameter or member 'sk' not described in 'af_alg_wait_for_wmem'
af_alg.c:700: warning: Function parameter or member 'flags' not described in 'af_alg_wait_for_wmem'
af_alg.c:731: warning: Function parameter or member 'sk' not described in 'af_alg_wmem_wakeup'
af_alg.c:757: warning: Function parameter or member 'sk' not described in 'af_alg_wait_for_data'
af_alg.c:757: warning: Function parameter or member 'flags' not described in 'af_alg_wait_for_data'
af_alg.c:757: warning: Function parameter or member 'min' not described in 'af_alg_wait_for_data'
af_alg.c:796: warning: Function parameter or member 'sk' not described in 'af_alg_data_wakeup'
af_alg.c:832: warning: Function parameter or member 'sock' not described in 'af_alg_sendmsg'
af_alg.c:832: warning: Function parameter or member 'msg' not described in 'af_alg_sendmsg'
af_alg.c:832: warning: Function parameter or member 'size' not described in 'af_alg_sendmsg'
af_alg.c:832: warning: Function parameter or member 'ivsize' not described in 'af_alg_sendmsg'
af_alg.c:985: warning: Function parameter or member 'sock' not described in 'af_alg_sendpage'
af_alg.c:985: warning: Function parameter or member 'page' not described in 'af_alg_sendpage'
af_alg.c:985: warning: Function parameter or member 'offset' not described in 'af_alg_sendpage'
af_alg.c:985: warning: Function parameter or member 'size' not described in 'af_alg_sendpage'
af_alg.c:985: warning: Function parameter or member 'flags' not described in 'af_alg_sendpage'
af_alg.c:1040: warning: Function parameter or member 'areq' not described in 'af_alg_free_resources'
af_alg.c:1059: warning: Function parameter or member '_req' not described in 'af_alg_async_cb'
af_alg.c:1059: warning: Function parameter or member 'err' not described in 'af_alg_async_cb'
af_alg.c:1083: warning: Function parameter or member 'file' not described in 'af_alg_poll'
af_alg.c:1083: warning: Function parameter or member 'sock' not described in 'af_alg_poll'
af_alg.c:1083: warning: Function parameter or member 'wait' not described in 'af_alg_poll'
af_alg.c:1114: warning: Function parameter or member 'sk' not described in 'af_alg_alloc_areq'
af_alg.c:1114: warning: Function parameter or member 'areqlen' not described in 'af_alg_alloc_areq'
af_alg.c:1146: warning: Function parameter or member 'sk' not described in 'af_alg_get_rsgl'
af_alg.c:1146: warning: Function parameter or member 'msg' not described in 'af_alg_get_rsgl'
af_alg.c:1146: warning: Function parameter or member 'flags' not described in 'af_alg_get_rsgl'
af_alg.c:1146: warning: Function parameter or member 'areq' not described in 'af_alg_get_rsgl'
af_alg.c:1146: warning: Function parameter or member 'maxsize' not described in 'af_alg_get_rsgl'
af_alg.c:1146: warning: Function parameter or member 'outlen' not described in 'af_alg_get_rsgl'

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0f049f7d 10-Apr-2021 Christophe JAILLET <christophe.jaillet@wanadoo.fr>

crypto: crc32-generic - Use SPDX-License-Identifier

Use SPDX-License-Identifier: GPL-2.0-only, instead of hand writing it.

This also removes a reference to http://www.xyratex.com which seems to be
down.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fa07c1a3 05-Apr-2021 Meng Yu <yumeng18@huawei.com>

crypto: ecc - delete a useless function declaration

This function declaration has been added to 'ecc_curve.h', so delete
the duplicate declaration in 'crypto/ecc.h'.

Fixes: 4e6602916bc6 ("crypto: ecdsa - Add support for ECDSA ...")
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5c083eb3 26-Mar-2021 Milan Djurovic <mdjurovic@zohomail.com>

crypto: fcrypt - Remove 'do while(0)' loop for single statement macro

Remove the 'do while(0)' loop in the macro, as it is not needed for single
statement macros. Condense into one line.

Signed-off-by: Milan Djurovic <mdjurovic@zohomail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
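
A generic illustration of the style point (not the actual fcrypt macro):

    /* do { } while (0) is only needed when a macro expands to multiple
     * statements; for a single statement it just adds noise. */
    #define ROL32_VERBOSE(x, n) do { (x) = ((x) << (n)) | ((x) >> (32 - (n))); } while (0)

    /* Condensed single-statement form: */
    #define ROL32(x, n) ((x) = ((x) << (n)) | ((x) >> (32 - (n))))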

c29da970 26-Mar-2021 Milan Djurovic <mdjurovic@zohomail.com>

crypto: keywrap - Remove else after break statement

Remove the else because the if statement has a break statement. Fix the
checkpatch.pl warning.

Signed-off-by: Milan Djurovic <mdjurovic@zohomail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

30d0f6a9 21-Mar-2021 Eric Biggers <ebiggers@google.com>

crypto: rng - fix crypto_rng_reset() refcounting when !CRYPTO_STATS

crypto_stats_get() is a no-op when the kernel is compiled without
CONFIG_CRYPTO_STATS, so pairing it with crypto_alg_put() unconditionally
(as crypto_rng_reset() does) is wrong.

Fix this by moving the call to crypto_stats_get() to just before the
actual algorithm operation which might need it. This makes it always
paired with crypto_stats_rng_seed().

Fixes: eed74b3eba9e ("crypto: rng - Fix a refcounting bug in crypto_rng_reset()")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0193b32f 19-Mar-2021 Meng Yu <yumeng18@huawei.com>

crypto: ecc - Correct an error in the comments

Remove repeated word 'bit' in comments.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

36c25011 16-Mar-2021 Milan Djurovic <mdjurovic@zohomail.com>

crypto: jitterentropy - Put constants on the right side of the expression

This patch fixes the following checkpatch.pl warnings:

crypto/jitterentropy.c:600: WARNING: Comparisons should place the constant on the right side of the test
crypto/jitterentropy.c:681: WARNING: Comparisons should place the constant on the right side of the test
crypto/jitterentropy.c:772: WARNING: Comparisons should place the constant on the right side of the test
crypto/jitterentropy.c:829: WARNING: Comparisons should place the constant on the right side of the test

Signed-off-by: Milan Djurovic <mdjurovic@zohomail.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3877869d 26-Mar-2021 Herbert Xu <herbert@gondor.apana.org.au>

Merge branch 'ecc'

This pulls in the NIST P384/256/192 x509 changes.


2a8e6154 16-Mar-2021 Saulo Alessandre <saulo.alessandre@tse.jus.br>

x509: Add OID for NIST P384 and extend parser for it

Prepare the x509 parser to accept NIST P384 certificates and add the
OID for ansip384r1, which is the identifier for NIST P384.

Summary of changes:

* crypto/asymmetric_keys/x509_cert_parser.c
- prepare x509 parser to load NIST P384

* include/linux/oid_registry.h
- add OID_ansip384r1

Signed-off-by: Saulo Alessandre <saulo.alessandre@tse.jus.br>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

299f561a 16-Mar-2021 Stefan Berger <stefanb@linux.ibm.com>

x509: Add support for parsing x509 certs with ECDSA keys

Add support for parsing of x509 certificates that contain ECDSA keys,
such as NIST P256, that have been signed by a CA using any of the
current SHA hash algorithms.

Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d1a303e8 16-Mar-2021 Stefan Berger <stefanb@linux.ibm.com>

x509: Detect sm2 keys by their parameters OID

Detect whether a key is an sm2 type of key by its OID in the parameters
array rather than assuming that everything under OID_id_ecPublicKey
is sm2, which is not the case.

Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c12d448b 16-Mar-2021 Saulo Alessandre <saulo.alessandre@tse.jus.br>

crypto: ecdsa - Register NIST P384 and extend test suite

Register NIST P384 as an akcipher and extend the testmgr with
NIST P384-specific test vectors.

Summary of changes:

* crypto/ecdsa.c
- add ecdsa_nist_p384_init_tfm
- register and unregister P384 tfm

* crypto/testmgr.c
- add test vector for P384 on vector of tests

* crypto/testmgr.h
- add test vector params for P384(sha1, sha224, sha256, sha384
and sha512)

Signed-off-by: Saulo Alessandre <saulo.alessandre@tse.jus.br>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

149ca161 16-Mar-2021 Saulo Alessandre <saulo.alessandre@tse.jus.br>

crypto: ecc - Add math to support fast NIST P384

Add the math needed for NIST P384 and adapt certain functions'
parameters so that the ecc_curve is passed down to vli_mmod_fast. This
allows the curve to be identified by its name prefix so that the
appropriate function for fast mmod calculation can be used.

Summary of changes:

* crypto/ecc.c
- add vli_mmod_fast_384
- change some routines to pass ecc_curve forward until vli_mmod_fast

* crypto/ecc.h
- add ECC_CURVE_NIST_P384_DIGITS
- change ECC_MAX_DIGITS to P384 size

Signed-off-by: Saulo Alessandre <saulo.alessandre@tse.jus.br>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

703c748d 16-Mar-2021 Saulo Alessandre <saulo.alessandre@tse.jus.br>

crypto: ecc - Add NIST P384 curve parameters

Add the parameters for the NIST P384 curve and define a new curve ID
for it. Make the curve available in ecc_get_curve.

Summary of changes:

* crypto/ecc_curve_defs.h
- add nist_p384 params

* include/crypto/ecdh.h
- add ECC_CURVE_NIST_P384

* crypto/ecc.c
- change ecc_get_curve to accept nist_p384

Signed-off-by: Saulo Alessandre <saulo.alessandre@tse.jus.br>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4e660291 16-Mar-2021 Stefan Berger <stefanb@linux.ibm.com>

crypto: ecdsa - Add support for ECDSA signature verification

Add support for parsing the parameters of a NIST P256 or NIST P192 key.
Enable signature verification using these keys. The new module is
enabled with CONFIG_ECDSA:
Elliptic Curve Digital Signature Algorithm (NIST P192, P256 etc.)
is a NIST cryptographic standard algorithm. Only signature verification
is implemented.

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

09149997 07-Mar-2021 Herbert Xu <herbert@gondor.apana.org.au>

crypto: aegis128 - Move simd prototypes into aegis.h

This patch fixes missing prototype warnings in crypto/aegis128-neon.c.

Fixes: a4397635afea ("crypto: aegis128 - provide a SIMD...")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8fb9340e 03-Mar-2021 Meng Yu <yumeng18@huawei.com>

crypto: ecc - add curve25519 params and expose them

1. Add curve25519 parameters in 'crypto/ecc_curve_defs.h';
2. Add the curve25519 interface 'ecc_get_curve25519_param' in
'include/crypto/ecc_curve.h', so that its parameters are exposed to
everyone in the kernel tree.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

14bb7676 03-Mar-2021 Meng Yu <yumeng18@huawei.com>

crypto: ecc - expose ecc curves

Move 'ecc_get_curve' to 'include/crypto/ecc_curve.h', so everyone in
the kernel tree can easily get the ecc curve parameters.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6763f5ea 03-Mar-2021 Meng Yu <yumeng18@huawei.com>

crypto: ecdh - move curve_id of ECDH from the key to algorithm name

1. crypto and crypto/atmel-ecc:
Move the curve id of ECDH from the key into the algorithm name in
crypto and atmel-ecc, so the ECDH algorithm name changes from 'ecdh'
to 'ecdh-nist-pxxx' and 'curve_id' can no longer be used in
'struct ecdh';
2. crypto/testmgr and net/bluetooth:
Modify 'testmgr.c', 'testmgr.h' and 'net/bluetooth' to adapt to this
change.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

83681f2b 02-Mar-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: api - check for ERR pointers in crypto_destroy_tfm()

Given that crypto_alloc_tfm() may return ERR pointers, and to avoid
crashes on obscure error paths where such pointers are presented to
crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there
before dereferencing the second argument as a struct crypto_tfm
pointer.

[0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/

Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
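
A sketch of the added guard, assuming the teardown logic is otherwise unchanged:

    #include <linux/err.h>

    void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
    {
            /* Callers sometimes pass through the result of a failed
             * crypto_alloc_tfm(); treat ERR_PTR values like NULL. */
            if (IS_ERR_OR_NULL(mem))
                    return;

            /* ... normal refcount drop and freeing of the instance ... */
    }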

6c810cf2 02-Mar-2021 Maciej W. Rozycki <macro@orcam.me.uk>

crypto: mips/poly1305 - enable for all MIPS processors

The MIPS Poly1305 implementation is generic MIPS code written so as to
support down to the original MIPS I and MIPS III ISAs for the 32-bit and
64-bit variants respectively. Lift the current limitation that enables
the code only for MIPSr1 or newer processors, and make it available for
all MIPS processors.

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: a11d055e7a64 ("crypto: mips/poly1305 - incorporate OpenSSL/CRYPTOGAMS optimized implementation")
Cc: stable@vger.kernel.org # v5.5+
Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

e40ff6f3 22-Feb-2021 Kai Ye <yekai13@huawei.com>

crypto: testmgr - delete some redundant code

Delete the sg_data() function: its definition is the same as sg_virt(),
so remove it and use sg_virt() in its place.

Signed-off-by: Kai Ye <yekai13@huawei.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4ab6093b 10-Feb-2021 Herbert Xu <herbert@gondor.apana.org.au>

crypto: serpent - Fix sparse byte order warnings

This patch fixes the byte order markings in serpent.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # arm64 big-endian
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c03c21ba 23-Feb-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'keys-misc-20210126' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

Pull keyring updates from David Howells:
"Here's a set of minor keyrings fixes/cleanups that I've collected from
various people for the upcoming merge window.

A couple of them might, in theory, be visible to userspace:

- Make blacklist_vet_description() reject uppercase letters as they
don't match the all-lowercase hex string generated for a blacklist
search.

This may want reconsideration in the future, but, currently, you
can't add to the blacklist keyring from userspace and the only
source of blacklist keys generates lowercase descriptions.

- Fix blacklist_init() to use a new KEY_ALLOC_* flag to indicate that
it wants KEY_FLAG_KEEP to be set rather than passing KEY_FLAG_KEEP
into keyring_alloc() as KEY_FLAG_KEEP isn't a valid alloc flag.

This isn't currently a problem as the blacklist keyring isn't
currently writable by userspace.

The rest of the patches are cleanups and I don't think they should
have any visible effect"

* tag 'keys-misc-20210126' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
watch_queue: rectify kernel-doc for init_watch()
certs: Replace K{U,G}IDT_INIT() with GLOBAL_ROOT_{U,G}ID
certs: Fix blacklist flag type confusion
PKCS#7: Fix missing include
certs: Fix blacklisted hexadecimal hash string check
certs/blacklist: fix kernel doc interface issue
crypto: public_key: Remove redundant header file from public_key.h
keys: remove trailing semicolon in macro definition
crypto: pkcs7: Use match_string() helper to simplify the code
PKCS#7: drop function from kernel-doc pkcs7_validate_trust_one
encrypted-keys: Replace HTTP links with HTTPS ones
crypto: asymmetric_keys: fix some comments in pkcs7_parser.h
KEYS: remove redundant memset
security: keys: delete repeated words in comments
KEYS: asymmetric: Fix kerneldoc
security/keys: use kvfree_sensitive()
watch_queue: Drop references to /dev/watch_queue
keys: Remove outdated __user annotations
security: keys: Fix fall-through warnings for Clang


31caf8b2 21-Feb-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
"API:
- Restrict crypto_cipher to internal API users only.

Algorithms:
- Add x86 aesni acceleration for cts.
- Improve x86 aesni acceleration for xts.
- Remove x86 acceleration of some uncommon algorithms.
- Remove RIPE-MD, Tiger and Salsa20.
- Remove tnepres.
- Add ARM acceleration for BLAKE2s and BLAKE2b.

Drivers:
- Add Keem Bay OCS HCU driver.
- Add Marvell OcteonTX2 CPT PF driver.
- Remove PicoXcell driver.
- Remove mediatek driver"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (154 commits)
hwrng: timeriomem - Use device-managed registration API
crypto: hisilicon/qm - fix printing format issue
crypto: hisilicon/qm - do not reset hardware when CE happens
crypto: hisilicon/qm - update irqflag
crypto: hisilicon/qm - fix the value of 'QM_SQC_VFT_BASE_MASK_V2'
crypto: hisilicon/qm - fix request missing error
crypto: hisilicon/qm - removing driver after reset
crypto: octeontx2 - fix -Wpointer-bool-conversion warning
crypto: hisilicon/hpre - enable Elliptic curve cryptography
crypto: hisilicon - PASID fixed on Kunpeng 930
crypto: hisilicon/qm - fix use of 'dma_map_single'
crypto: hisilicon/hpre - tiny fix
crypto: hisilicon/hpre - adapt the number of clusters
crypto: cpt - remove casting dma_alloc_coherent
crypto: keembay-ocs-aes - Fix 'q' assignment during CCM B0 generation
crypto: xor - Fix typo of optimization
hwrng: optee - Use device-managed registration API
crypto: arm64/crc-t10dif - move NEON yield to C code
crypto: arm64/aes-ce-mac - simplify NEON yield
crypto: arm64/aes-neonbs - remove NEON yield calls
...


40d32b59 04-Jan-2021 Andrew Zaborowski <andrew.zaborowski@intel.com>

keys: Update comment for restrict_link_by_key_or_keyring_chain

Add the bit of information that makes
restrict_link_by_key_or_keyring_chain different from
restrict_link_by_key_or_keyring to the inline docs comment.

Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>

cfb28fde 03-Feb-2021 Bhaskar Chowdhury <unixbhaskar@gmail.com>

crypto: xor - Fix typo of optimization

s/optimzation/optimization/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a53ab94e 03-Feb-2021 Daniele Alessandrelli <daniele.alessandrelli@intel.com>

crypto: ecdh_helper - Ensure 'len >= secret.len' in decode_key()

The length ('len' parameter) passed to crypto_ecdh_decode_key() is never
checked against the length encoded in the passed buffer ('buf'
parameter). This could lead to an out-of-bounds access when the passed
length is less than the encoded length.

Add a check to prevent that.

Fixes: 3c4b23901a0c7 ("crypto: ecdh - Add ECDH software support")
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
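
A simplified sketch of the missing check (helper name hypothetical; see include/crypto/kpp.h for the real kpp_secret layout):

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <crypto/kpp.h>

    static int decode_key_sketch(const char *buf, unsigned int len)
    {
            struct kpp_secret secret;

            if (!buf || len < sizeof(secret))
                    return -EINVAL;

            memcpy(&secret, buf, sizeof(secret));

            /* secret.len comes from the buffer itself; refuse to parse
             * further if it claims more data than the caller passed in. */
            if (secret.len > len)
                    return -EINVAL;

            return 0;
    }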

af1050a4 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: twofish - use unaligned accessors instead of alignmask

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the
Twofish input and output blocks, which propagates to mode drivers, and
results in pointless copying on architectures that don't care about
alignment, use the unaligned accessors, which will do the right thing on
each respective architecture, avoiding the need for double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
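
A generic illustration of the accessor swap (not the actual Twofish round code):

    #include <linux/types.h>
    #include <asm/unaligned.h>

    static void copy_block_words(u8 *dst, const u8 *src)
    {
            /* Load/store 32-bit words without assuming the caller's buffers
             * are 4-byte aligned, so no cra_alignmask (and no double
             * buffering in the mode drivers) is required. */
            u32 a = get_unaligned_le32(src);
            u32 b = get_unaligned_le32(src + 4);

            put_unaligned_le32(a, dst);
            put_unaligned_le32(b, dst + 4);
    }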

e9cbaef5 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: fcrypt - drop unneeded alignmask

The fcrypt implementation uses memcpy() to access the input and output
buffers so there is no need to set an alignmask.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

80879dd9 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: cast6 - use unaligned accessors instead of alignmask

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the
CAST6 input and output blocks, which propagates to mode drivers, and
results in pointless copying on architectures that don't care about
alignment, use the unaligned accessors, which will do the right thing on
each respective architecture, avoiding the need for double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

24a2ee44 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: cast5 - use unaligned accessors instead of alignmask

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the
CAST5 input and output blocks, which propagates to mode drivers, and
results in pointless copying on architectures that don't care about
alignment, use the unaligned accessors, which will do the right thing on
each respective architecture, avoiding the need for double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

83385415 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: camellia - use unaligned accessors instead of alignmask

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the
Camellia input and output blocks, which propagates to mode drivers, and
results in pointless copying on architectures that don't care about
alignment, use the unaligned accessors, which will do the right thing on
each respective architecture, avoiding the need for double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

50a3a9fa 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: blowfish - use unaligned accessors instead of alignmask

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the Blowfish input and output blocks, which propagates to mode drivers,
and results in pointless copying on architectures that don't care about
alignment, use the unaligned accessors, which will do the right thing on
each respective architecture, avoiding the need for double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

81d091a2 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: serpent - use unaligned accessors instead of alignmask

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the
Serpent input and output blocks, which propagates to mode drivers, and
results in pointless copying on architectures that don't care about
alignment, use the unaligned accessors, which will do the right thing on
each respective architecture, avoiding the need for double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

784506a1 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: serpent - get rid of obsolete tnepres variant

It is not trivial to trace back why exactly the tnepres variant of
serpent was added ~17 years ago - Google searches come up mostly empty,
but it seems to be related to the 'kerneli' version, which was based
on an incorrect interpretation of the serpent spec.

In other words, nobody is likely to care anymore today, so let's get rid
of it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e1b2d980 01-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: michael_mic - fix broken misalignment handling

The Michael MIC driver uses the cra_alignmask to ensure that pointers
presented to its update and finup/final methods are 32-bit aligned.
However, due to the way the shash API works, this is no guarantee that
the 32-bit reads occurring in the update method are also aligned, as the
size of the buffer presented to update may be of uneven length. For
instance, an update() of 3 bytes followed by a misaligned update() of 4
or more bytes will result in a misaligned access using an accessor that
is not suitable for this.

On most architectures, this does not matter, and so setting the
cra_alignmask is pointless. On architectures where this does matter,
setting the cra_alignmask does not actually solve the problem.

So let's get rid of the cra_alignmask, and use unaligned accessors
instead, where appropriate.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

663f63ee 21-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: salsa20 - remove Salsa20 stream cipher algorithm

Salsa20 is not used anywhere in the kernel, is not suitable for disk
encryption, and widely considered to have been superseded by ChaCha20.
So let's remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

87cd723f 21-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: tgr192 - remove Tiger 128/160/192 hash algorithms

Tiger is never referenced anywhere in the kernel, and unlikely
to be depended upon by userspace via AF_ALG. So let's remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

93f64202 21-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: rmd320 - remove RIPE-MD 320 hash algorithm

RIPE-MD 320 is never referenced anywhere in the kernel, and unlikely
to be depended upon by userspace via AF_ALG. So let's remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c15d4167 21-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: rmd256 - remove RIPE-MD 256 hash algorithm

RIPE-MD 256 is never referenced anywhere in the kernel, and unlikely
to be depended upon by userspace via AF_ALG. So let's remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b21b9a5e 21-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: rmd128 - remove RIPE-MD 128 hash algorithm

RIPE-MD 128 is never referenced anywhere in the kernel, and unlikely
to be depended upon by userspace via AF_ALG. So let's remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3c0940c4 02-Sep-2020 YueHaibing <yuehaibing@huawei.com>

crypto: pkcs7: Use match_string() helper to simplify the code

match_string() returns the array index of a matching string.
Use it instead of the open-coded implementation.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Ben Boeckel <mathstuf@gmail.com>
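
For illustration, a hedged sketch of the kind of open-coded loop that
match_string() replaces; the array and function names here are made up,
not the actual pkcs7 code:

    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    static const char *const hash_algo_names[] = { "sha1", "sha256", "sha512" };

    /* Open-coded lookup that the helper replaces. */
    static int lookup_open_coded(const char *name)
    {
        int i;

        for (i = 0; i < ARRAY_SIZE(hash_algo_names); i++)
            if (strcmp(hash_algo_names[i], name) == 0)
                return i;
        return -EINVAL;
    }

    /* Equivalent using match_string(): returns the index or -EINVAL. */
    static int lookup_with_helper(const char *name)
    {
        return match_string(hash_algo_names, ARRAY_SIZE(hash_algo_names), name);
    }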

d13fc874 13-Nov-2020 Alex Shi <alexs@kernel.org>

PKCS#7: drop function from kernel-doc pkcs7_validate_trust_one

The function is static, so there is no need to add it to kernel-doc, and
this avoids the following warnings:
crypto/asymmetric_keys/pkcs7_trust.c:25: warning: Function parameter or
member 'pkcs7' not described in 'pkcs7_validate_trust_one'
crypto/asymmetric_keys/pkcs7_trust.c:25: warning: Function parameter or
member 'sinfo' not described in 'pkcs7_validate_trust_one'
crypto/asymmetric_keys/pkcs7_trust.c:25: warning: Function parameter or
member 'trust_keyring' not described in 'pkcs7_validate_trust_one'

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Ben Boeckel <mathstuf@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: keyrings@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

1539dd78 19-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: asymmetric_keys: fix some comments in pkcs7_parser.h

Drop the doubled word "the" in a comment.
Change "THis" to "This".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Ben Boeckel <mathstuf@gmail.com>
Cc: keyrings@vger.kernel.org

60f0f0b3 29-Oct-2020 Krzysztof Kozlowski <krzk@kernel.org>

KEYS: asymmetric: Fix kerneldoc

Fix W=1 compile warnings (invalid kerneldoc):

crypto/asymmetric_keys/asymmetric_type.c:160: warning: Function parameter or member 'kid1' not described in 'asymmetric_key_id_same'
crypto/asymmetric_keys/asymmetric_type.c:160: warning: Function parameter or member 'kid2' not described in 'asymmetric_key_id_same'
crypto/asymmetric_keys/asymmetric_type.c:160: warning: Excess function parameter 'kid_1' description in 'asymmetric_key_id_same'
crypto/asymmetric_keys/asymmetric_type.c:160: warning: Excess function parameter 'kid_2' description in 'asymmetric_key_id_same'

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Ben Boeckel <mathstuf@gmail.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@iki.fi>

7178a107 18-Jan-2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

X.509: Fix crash caused by NULL pointer

On the following call path, `sig->pkey_algo` is not assigned
in asymmetric_key_verify_signature(), which causes a runtime
crash in public_key_verify_signature().

keyctl_pkey_verify
asymmetric_key_verify_signature
verify_signature
public_key_verify_signature

This patch simply checks for this situation and fixes the crash
caused by the NULL pointer.

Fixes: 215525639631 ("X.509: support OSCCA SM2-with-SM3 certificate verification")
Reported-by: Tobias Markus <tobias@markus-regensburg.de>
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-and-tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
Tested-by: João Fonseca <jpedrofonseca@ua.pt>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
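
A hedged sketch of the shape of such a guard (the sig->pkey_algo field is
named in the message above; the surrounding function and the exact error
code are assumptions for illustration):

    #include <linux/errno.h>
    #include <crypto/public_key.h>

    /* Sketch: bail out early if the signature carries no public-key
     * algorithm name, instead of dereferencing it later. */
    static int verify_signature_checked(struct public_key_signature *sig)
    {
        if (!sig->pkey_algo)
            return -ENOPKG;     /* assumed error code for this sketch */

        /* ... continue into public_key_verify_signature() ... */
        return 0;
    }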

fd3958ea 18-Jan-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:
"A Kconfig dependency issue with omap-sham and a divide by zero in xor
on some platforms"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: omap-sham - Fix link error without crypto-engine
crypto: xor - Fix divide error in do_xor_speed()


64ca771c 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86 - remove glue helper module

All dependencies on the x86 glue helper module have been replaced by
local instantiations of the new ECB/CBC preprocessor helper macros, so
the glue helper module can be retired.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

165f3573 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/twofish - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ea55cfc3 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/cast6 - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9ad58b46 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/serpent - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

407d409a 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/camellia - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c0a64926 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/blowfish - drop CTR mode implementation

Blowfish in counter mode is never used in the kernel, so there
is no point in keeping an accelerated implementation around.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

768db5fe 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/des - drop CTR mode implementation

DES or Triple DES in counter mode is never used in the kernel, so there
is no point in keeping an accelerated implementation around.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f43dcaf2 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/twofish - drop CTR mode implementation

Twofish in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
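
Nothing changes for skcipher users: requesting "ctr(twofish)" still works,
the crypto core simply instantiates the generic CTR template around
whichever twofish cipher implementation is available. A minimal, hedged
usage sketch (error handling trimmed):

    #include <linux/err.h>
    #include <crypto/skcipher.h>

    /* Sketch: the CTR template composed with the bare cipher is resolved
     * by name, so no dedicated ctr-twofish-avx driver is needed. */
    static int ctr_twofish_example(void)
    {
        struct crypto_skcipher *tfm;

        tfm = crypto_alloc_skcipher("ctr(twofish)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        /* ... crypto_skcipher_setkey(), skcipher_request_alloc(), etc. ... */

        crypto_free_skcipher(tfm);
        return 0;
    }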

7a6623cc 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/cast6 - drop CTR mode implementation

CAST6 in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e2d60e2f 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/cast5 - drop CTR mode implementation

CAST5 in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2e9440ae 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/serpent - drop CTR mode implementation

Serpent in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a1f91ecf 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/camellia - drop CTR mode implementation

Camellia in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

da4df93a 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/twofish - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Twofish in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
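
In practical terms, "xts(twofish)" is now instantiated by the generic XTS
template, which internally drives the accelerated ECB implementation,
rather than by a dedicated xts-twofish-avx driver. A hedged sketch of how
a consumer could observe which backend was picked (the printed driver
name in the comment is an assumption, shown only as an example):

    #include <linux/kernel.h>
    #include <linux/err.h>
    #include <linux/crypto.h>
    #include <crypto/skcipher.h>

    /* Sketch: the XTS template resolves the name and wraps the fastest
     * available ECB implementation of the cipher underneath. */
    static int xts_twofish_example(void)
    {
        struct crypto_skcipher *tfm = crypto_alloc_skcipher("xts(twofish)", 0, 0);

        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        /* Prints something like "xts(ecb-twofish-avx)" on x86 (example). */
        pr_info("xts(twofish) backed by %s\n",
                crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));

        crypto_free_skcipher(tfm);
        return 0;
    }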

9ec0af8a 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/serpent - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Serpent in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2cc0fedb 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/cast6 - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement CAST6 in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

55a7e88f 05-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/camellia - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Camellia in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e07cd2f3 10-Jan-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'char-misc-5.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver fixes from Greg KH:
"Here are some small char and misc driver fixes for 5.11-rc3.

The majority here are fixes for the habanalabs drivers, but also in
here are:

- crypto driver fix

- pvpanic driver fix

- updated font file

- interconnect driver fixes

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-5.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (26 commits)
Fonts: font_ter16x32: Update font with new upstream Terminus release
misc: pvpanic: Check devm_ioport_map() for NULL
speakup: Add github repository URL and bug tracker
MAINTAINERS: Update Georgi's email address
crypto: asym_tpm: correct zero out potential secrets
habanalabs: Fix memleak in hl_device_reset
interconnect: imx8mq: Use icc_sync_state
interconnect: imx: Remove a useless test
interconnect: imx: Add a missing of_node_put after of_device_is_available
interconnect: qcom: fix rpmh link failures
habanalabs: fix order of status check
habanalabs: register to pci shutdown callback
habanalabs: add validation cs counter, fix misplaced counters
habanalabs/gaudi: retry loading TPC f/w on -EINTR
habanalabs: adjust pci controller init to new firmware
habanalabs: update comment in hl_boot_if.h
habanalabs/gaudi: enhance reset message
habanalabs: full FW hard reset support
habanalabs/gaudi: disable CGM at HW initialization
habanalabs: Revise comment to align with mirror list name
...


2481104f 31-Dec-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper

The AES-NI driver implements XTS via the glue helper, which consumes
a struct with sets of function pointers which are invoked on chunks
of input data of the appropriate size, as annotated in the struct.

Let's get rid of this indirection, so that we can perform direct calls
to the assembler helpers. Instead, let's adopt the arm64 strategy, i.e.,
provide a helper which can consume inputs of any size, provided that the
penultimate, full block is passed via the last call if ciphertext stealing
needs to be applied.

This also allows us to enable the XTS mode for i386.

Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3c02e04f 30-Dec-2020 Kirill Tkhai <ktkhai@virtuozzo.com>

crypto: xor - Fix divide error in do_xor_speed()

Latest (but not only latest) linux-next panics with a divide
error on my QEMU setup.

The patch at the bottom of this message fixes the problem.

xor: measuring software checksum speed
divide error: 0000 [#1] PREEMPT SMP KASAN
PREEMPT SMP KASAN
CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.10.0-next-20201223+ #2177
RIP: 0010:do_xor_speed+0xbb/0xf3
Code: 41 ff cc 75 b5 bf 01 00 00 00 e8 3d 23 8b fe 65 8b 05 f6 49 83 7d 85 c0 75 05 e8
84 70 81 fe b8 00 00 50 c3 31 d2 48 8d 7b 10 <f7> f5 41 89 c4 e8 58 07 a2 fe 44 89 63 10 48 8d 7b 08
e8 cb 07 a2
RSP: 0000:ffff888100137dc8 EFLAGS: 00010246
RAX: 00000000c3500000 RBX: ffffffff823f0160 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000808 RDI: ffffffff823f0170
RBP: 0000000000000000 R08: ffffffff8109c50f R09: ffffffff824bb6f7
R10: fffffbfff04976de R11: 0000000000000001 R12: 0000000000000000
R13: ffff888101997000 R14: ffff888101994000 R15: ffffffff823f0178
FS: 0000000000000000(0000) GS:ffff8881f7780000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000220e000 CR4: 00000000000006a0
Call Trace:
calibrate_xor_blocks+0x13c/0x1c4
? do_xor_speed+0xf3/0xf3
do_one_initcall+0xc1/0x1b7
? start_kernel+0x373/0x373
? unpoison_range+0x3a/0x60
kernel_init_freeable+0x1dd/0x238
? rest_init+0xc6/0xc6
kernel_init+0x8/0x10a
ret_from_fork+0x1f/0x30
---[ end trace 5bd3c1d0b77772da ]---

Fixes: c055e3eae0f1 ("crypto: xor - use ktime for template benchmarking")
Cc: <stable@vger.kernel.org>
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
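
The general pattern behind such a fix can be sketched as clamping the
measured duration before using it as a divisor; the constants and helper
below are illustrative, not the exact patch:

    #include <linux/ktime.h>
    #include <linux/math64.h>

    /* Sketch: on coarse clock sources the benchmark window can read back
     * as zero nanoseconds, so never divide by the raw measurement. */
    static u64 mbytes_per_sec(ktime_t start, ktime_t end, u64 bytes_processed)
    {
        s64 ns = ktime_to_ns(ktime_sub(end, start));

        if (ns <= 0)
            ns = 1;

        /* bytes * 1000 / ns is roughly MB/s for nanosecond input */
        return div64_u64(bytes_processed * 1000, ns);
    }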

0cdc438e 23-Dec-2020 Eric Biggers <ebiggers@google.com>

crypto: blake2b - update file comment

The file comment for blake2b_generic.c makes it sound like it's the
reference implementation of BLAKE2b with only minor changes. But it's
actually been changed a lot. Update the comment to make this clearer.

Reviewed-by: David Sterba <dsterba@suse.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

28dcca4c 23-Dec-2020 Eric Biggers <ebiggers@google.com>

crypto: blake2b - sync with blake2s implementation

Sync the BLAKE2b code with the BLAKE2s code as much as possible:

- Move a lot of code into new headers <crypto/blake2b.h> and
<crypto/internal/blake2b.h>, and adjust it to be like the
corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and
<crypto/internal/blake2s.h>.

- Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.

- Use a macro BLAKE2B_ALG() to define the shash_alg structs.

- Export blake2b_compress_generic() for use as a fallback.

This makes it much easier to add optimized implementations of BLAKE2b,
as optimized implementations can use the helper functions
crypto_blake2b_{setkey,init,update,final}() and
blake2b_compress_generic(). The ARM implementation will use these.

But this change is also helpful because it eliminates unnecessary
differences between the BLAKE2b and BLAKE2s code, so that the same
improvements can easily be made to both. (The two algorithms are
basically identical, except for the word size and constants.) It also
makes it straightforward to add a library API for BLAKE2b in the future
if/when it's needed.

This change does make the BLAKE2b code slightly more complicated than it
needs to be, as it doesn't actually provide a library API yet. For
example, __blake2b_update() doesn't really need to exist yet; it could
just be inlined into crypto_blake2b_update(). But I believe this is
outweighed by the benefits of keeping the code in sync.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8c4a93a1 23-Dec-2020 Eric Biggers <ebiggers@google.com>

crypto: blake2s - share the "shash" API boilerplate code

Add helper functions for shash implementations of BLAKE2s to
include/crypto/internal/blake2s.h, taking advantage of
__blake2s_update() and __blake2s_final() that were added by the previous
patch to share more code between the library and shash implementations.

crypto_blake2s_setkey() and crypto_blake2s_init() are usable as
shash_alg::setkey and shash_alg::init directly, while
crypto_blake2s_update() and crypto_blake2s_final() take an extra
'blake2s_compress_t' function pointer parameter. This allows the
implementation of the compression function to be overridden, which is
the only part that optimized implementations really care about.

The new functions are inline functions (similar to those in sha1_base.h,
sha256_base.h, and sm3_base.h) because this avoids needing to add a new
module blake2s_helpers.ko, they aren't *too* long, and this avoids
indirect calls which are expensive these days. Note that they can't go
in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S
from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency.

Finally, use these new helper functions in the x86 implementation of
BLAKE2s. (This part should be a separate patch, but unfortunately the
x86 implementation used the exact same function names like
"crypto_blake2s_update()", so it had to be updated at the same time.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

df412e7e 23-Dec-2020 Eric Biggers <ebiggers@google.com>

crypto: blake2s - remove unneeded includes

It doesn't make sense for the generic implementation of BLAKE2s to
include <crypto/internal/simd.h> and <linux/jump_label.h>, as these are
things that would only be useful in an architecture-specific
implementation. Remove these unnecessary includes.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0d396058 23-Dec-2020 Eric Biggers <ebiggers@google.com>

crypto: blake2s - define shash_alg structs using macros

The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size. So, avoid
code duplication by using a macro to define these structs.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0eb76ba2 11-Dec-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: remove cipher routines from public crypto API

The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.
Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

303fd3e1 08-Dec-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - avoid signed overflow in byte count

The signed long type used for printing the number of bytes processed in
tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient
to cover the performance of common accelerated ciphers such as AES-NI
when benchmarked with sec=1. So switch to u64 instead.

While at it, fix up a missing printk->pr_cont conversion in the AEAD
benchmark.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0aa171e9 02-Jan-2021 Ard Biesheuvel <ardb@kernel.org>

crypto: ecdh - avoid buffer overflow in ecdh_set_secret()

Pavel reports that commit 17858b140bf4 ("crypto: ecdh - avoid unaligned
accesses in ecdh_set_secret()") fixes one problem but introduces another:
the unconditional memcpy() introduced by that commit may overflow the
target buffer if the source data is invalid, which could be the result of
intentional tampering.

So check params.key_size explicitly against the size of the target buffer
before validating the key further.

Fixes: 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()")
Reported-by: Pavel Machek <pavel@denx.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
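
A hedged sketch of the described check (field and buffer names are
illustrative; the point is to bound the copy by the destination size
before the key is validated further):

    #include <linux/types.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    /* Sketch: reject oversized key material before copying it into the
     * fixed-size, naturally aligned context buffer. */
    static int set_secret_checked(u64 *ctx_key, size_t ctx_key_bytes,
                                  const void *key, unsigned int key_size)
    {
        if (key_size > ctx_key_bytes)
            return -EINVAL;     /* malformed or tampered input */

        memcpy(ctx_key, key, key_size);     /* now provably in bounds */
        return 0;
    }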

f93274ef 04-Dec-2020 Greg Kroah-Hartman <gregkh@linuxfoundation.org>

crypto: asym_tpm: correct zero out potential secrets

The function derive_pub_key() should be calling memzero_explicit()
instead of memset() in case the compiler decides to optimize away the
call to memset() because it "knows" no one is going to touch the memory
anymore.

Cc: stable <stable@vger.kernel.org>
Reported-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com>
Tested-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/r/X8ns4AfwjKudpyfe@kroah.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
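
For context, a minimal sketch of the difference (memzero_explicit() is the
existing kernel helper; the surrounding function is made up):

    #include <linux/types.h>
    #include <linux/string.h>

    /* Sketch: a plain memset() of a dying local buffer may be elided as a
     * dead store, while memzero_explicit() contains a compiler barrier so
     * the wipe survives optimization. */
    static void use_secret_and_wipe(void)
    {
        u8 scratch[64];

        /* ... derive key material into scratch and use it ... */

        memzero_explicit(scratch, sizeof(scratch));  /* not optimized away */
    }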

0464e0ef 30-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - avoid spurious references to crypto_aegis128_update_simd

Geert reports that builds where CONFIG_CRYPTO_AEGIS128_SIMD is not set
may still emit references to crypto_aegis128_update_simd(), which
cannot be satisfied and therefore break the build. These references
only exist in functions that can be optimized away, but apparently,
the compiler is not always able to prove this.

So add some explicit checks for CONFIG_CRYPTO_AEGIS128_SIMD to help the
compiler figure this out.

Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1069e976 27-Nov-2020 Tom Rix <trix@redhat.com>

crypto: seed - remove trailing semicolon in macro definition

The macro's call sites already end with a semicolon, so the definition
should not.

Signed-off-by: Tom Rix <trix@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
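
A tiny illustration of the class of problem (the macro name here is made
up, not the actual SEED macro):

    /* With a trailing semicolon in the definition ... */
    #define SET_WORD_BAD(dst, val)   ((dst) = (val));

    /* ... a use such as
     *
     *     if (cond)
     *         SET_WORD_BAD(x, y);
     *     else
     *         ...
     *
     * expands to a statement followed by an empty statement, so the
     * 'else' no longer has a matching 'if'. Dropping the semicolon from
     * the definition avoids this. */
    #define SET_WORD_GOOD(dst, val)  ((dst) = (val))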

17858b14 24-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()

ecdh_set_secret() casts a void* pointer to a const u64* in order to
feed it into ecc_is_key_valid(). This is not generally permitted by
the C standard, and leads to actual misalignment faults on ARMv6
cores. In some cases, these are fixed up in software, but this still
leads to performance hits that are entirely avoidable.

So let's copy the key into the ctx buffer first, which we will do
anyway in the common case, and which guarantees correct alignment.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ad6d66bc 19-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks

WireGuard and IPsec both typically operate on input blocks that are
~1420 bytes in size, given the default Ethernet MTU of 1500 bytes and
the overhead of the VPN metadata.

Many aead and skcipher implementations are optimized for power-of-2
block sizes, and whether they perform well when operating on 1420
byte blocks cannot be easily extrapolated from the performance on
power-of-2 block sizes. So let's add 1420 bytes explicitly, and round
it up to the next blocksize multiple of the algo in question if it
does not support 1420 byte blocks.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
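
The rounding mentioned above is a small arithmetic detail; a hedged
sketch (the helper name is made up, roundup() is the existing kernel
macro):

    #include <linux/kernel.h>

    /* Sketch: if the algorithm cannot handle a 1420-byte request, bump
     * the benchmark size up to the next multiple of its block size. */
    static unsigned int bench_size_for(unsigned int blocksize)
    {
        unsigned int len = 1420;

        if (blocksize > 1 && len % blocksize)
            len = roundup(len, blocksize);  /* e.g. 1424 for 16-byte blocks */

        return len;
    }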

00ea27f1 19-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - permit tcrypt.ko to be builtin

When working on crypto algorithms, being able to run tcrypt quickly
without booting an entire Linux installation can be very useful. For
instance, QEMU/kvm can be used to boot a kernel from the command line,
and having tcrypt.ko builtin would allow tcrypt to be executed to run
benchmarks, or to run tests for algorithms that need to be instantiated
from templates, without the need to make it past the point where the
rootfs is mounted.

So let's relax the requirement that tcrypt can only be built as a module
when CONFIG_EXPERT is enabled.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

08a7e33c 19-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - don't initialize at subsys_initcall time

Commit c4741b2305979 ("crypto: run initcalls for generic implementations
earlier") converted tcrypt.ko's module_init() to subsys_initcall(), but
this was unintentional: tcrypt.ko currently cannot be built into the core
kernel, and so the subsys_initcall() gets converted into module_init()
under the hood. Given that tcrypt.ko does not implement a generic version
of a crypto algorithm that has to be available early during boot, there
is no point in running the tcrypt init code earlier than implied by
module_init().

However, for crypto development purposes, we will lift the restriction
that tcrypt.ko must be built as a module, and when builtin, it makes sense
for tcrypt.ko (which does its work inside the module init function) to run
as late as possible. So let's switch to late_initcall() instead.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ac50aec4 17-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - expose SIMD code path as separate driver

Wiring the SIMD code into the generic driver has the unfortunate side
effect that the tcrypt testing code cannot distinguish them, and will
therefore not use the latter to fuzz test the former, as it does for
other algorithms.

So let's refactor the code a bit so we can register two implementations:
aegis128-generic and aegis128-simd.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

97b70180 17-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128/neon - move final tag check to SIMD domain

Instead of calculating the tag and returning it to the caller on
decryption, use a SIMD compare and min across vector to perform
the comparison. This is slightly more efficient, and removes the
need on the caller's part to wipe the tag from memory if the
decryption failed.

While at it, switch to unsigned int when passing cryptlen and
assoclen - we don't support input sizes where it matters anyway.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ad00d41b 17-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128/neon - optimize tail block handling

Avoid copying the tail block via a stack buffer if the total size
exceeds a single AEGIS block. In this case, we can use overlapping
loads and stores and NEON permutation instructions instead, which
leads to a modest performance improvement on some cores (< 5%),
and is slightly cleaner. Note that we still need to use a stack
buffer if the entire input is smaller than 16 bytes, given that
we cannot use 16 byte NEON loads and stores safely in this case.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

02685906 17-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - wipe plaintext and tag if decryption fails

The AEGIS spec mentions explicitly that the security guarantees hold
only if the resulting plaintext and tag of a failed decryption are
withheld. So ensure that we abide by this.

While at it, drop the unused struct aead_request *req parameter from
crypto_aegis128_process_crypt().

Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a24d22b2 12-Nov-2020 Eric Biggers <ebiggers@google.com>

crypto: sha - split sha.h into sha1.h and sha2.h

Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2,
and <crypto/sha3.h> contains declarations for SHA-3.

This organization is inconsistent, but more importantly SHA-1 is no
longer considered to be cryptographically secure. So to the extent
possible, SHA-1 shouldn't be grouped together with any of the other SHA
versions, and usage of it should be phased out.

Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and
<crypto/sha2.h>, and make everyone explicitly specify whether they want
the declarations for SHA-1, SHA-2, or both.

This avoids making the SHA-1 declarations visible to files that don't
want anything to do with SHA-1. It also prepares for potentially moving
sha1.h into a new insecure/ or dangerous/ directory.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6569e309 02-Nov-2020 Jason A. Donenfeld <Jason@zx2c4.com>

crypto: Kconfig - CRYPTO_MANAGER_EXTRA_TESTS requires the manager

The extra tests in the manager actually require the manager to be
selected too. Otherwise the linker gives errors like:

ld: arch/x86/crypto/chacha_glue.o: in function `chacha_simd_stream_xor':
chacha_glue.c:(.text+0x422): undefined reference to `crypto_simd_disabled_for_test'

Fixes: 2343d1529aff ("crypto: Kconfig - allow tests to be disabled when manager is disabled")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

92eb6c30 26-Oct-2020 Eric Biggers <ebiggers@google.com>

crypto: af_alg - avoid undefined behavior accessing salg_name

Commit 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm
names") made the kernel start accepting arbitrarily long algorithm names
in sockaddr_alg. However, the actual length of the salg_name field
stayed at the original 64 bytes.

This is broken because the kernel can access indices >= 64 in salg_name,
which is undefined behavior -- even though the memory that is accessed
is still located within the sockaddr structure. It would only be
defined behavior if the array were properly marked as arbitrary-length
(either by making it a flexible array, which is the recommended way
these days, or by making it an array of length 0 or 1).

We can't simply change salg_name into a flexible array, since that would
break source compatibility with userspace programs that embed
sockaddr_alg into another struct, or (more commonly) declare a
sockaddr_alg like 'struct sockaddr_alg sa = { .salg_name = "foo" };'.

One solution would be to change salg_name into a flexible array only
when '#ifdef __KERNEL__'. However, that would keep userspace without an
easy way to actually use the longer algorithm names.

Instead, add a new structure 'sockaddr_alg_new' that has the flexible
array field, and expose it to both userspace and the kernel.
Make the kernel use it correctly in alg_bind().

This addresses the syzbot report
"UBSAN: array-index-out-of-bounds in alg_bind"
(https://syzkaller.appspot.com/bug?extid=92ead4eb8e26a26d465e).

Reported-by: syzbot+92ead4eb8e26a26d465e@syzkaller.appspotmail.com
Fixes: 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm names")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
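
A hedged sketch of the layout change (the field order mirrors the existing
sockaddr_alg; treat the declarations as illustrative rather than a copy of
the uapi header):

    #include <stdint.h>

    /* Existing layout: fixed 64-byte name, so indices >= 64 are undefined
     * behavior even when the enclosing allocation is larger. */
    struct sockaddr_alg_sketch {
        uint16_t salg_family;
        uint8_t  salg_type[14];
        uint32_t salg_feat;
        uint32_t salg_mask;
        uint8_t  salg_name[64];
    };

    /* New variant: flexible array member, so the kernel may legally read
     * however many name bytes the caller actually supplied. */
    struct sockaddr_alg_new_sketch {
        uint16_t salg_family;
        uint8_t  salg_type[14];
        uint32_t salg_feat;
        uint32_t salg_mask;
        uint8_t  salg_name[];
    };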

09a5ef96 26-Oct-2020 Eric Biggers <ebiggers@google.com>

crypto: testmgr - WARN on test failure

Currently, by default crypto self-test failures only result in a
pr_warn() message and an "unknown" status in /proc/crypto. Both of
these are easy to miss. There is also an option to panic the kernel
when a test fails, but that can't be the default behavior.

A crypto self-test failure always indicates a kernel bug, however, and
there's already a standard way to report (recoverable) kernel bugs --
the WARN() family of macros. WARNs are noisier and harder to miss, and
existing test systems already know to look for them in dmesg or via
/proc/sys/kernel/tainted.

Therefore, call WARN() when an algorithm fails its self-tests.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6e5972fa 26-Oct-2020 Eric Biggers <ebiggers@google.com>

crypto: testmgr - always print the actual skcipher driver name

When alg_test() is called from tcrypt.ko rather than from the algorithm
registration code, "driver" is actually the algorithm name, not the
driver name. So it shouldn't be used in places where a driver name is
wanted, e.g. when reporting a test failure or when checking whether the
driver is the generic driver or not.

Fix this for the skcipher algorithm tests by getting the driver name
from the crypto_skcipher that actually got allocated.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2257f471 26-Oct-2020 Eric Biggers <ebiggers@google.com>

crypto: testmgr - always print the actual AEAD driver name

When alg_test() is called from tcrypt.ko rather than from the algorithm
registration code, "driver" is actually the algorithm name, not the
driver name. So it shouldn't be used in places where a driver name is
wanted, e.g. when reporting a test failure or when checking whether the
driver is the generic driver or not.

Fix this for the AEAD algorithm tests by getting the driver name from
the crypto_aead that actually got allocated.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

79cafe9a 26-Oct-2020 Eric Biggers <ebiggers@google.com>

crypto: testmgr - always print the actual hash driver name

When alg_test() is called from tcrypt.ko rather than from the algorithm
registration code, "driver" is actually the algorithm name, not the
driver name. So it shouldn't be used in places where a driver name is
wanted, e.g. when reporting a test failure or when checking whether the
driver is the generic driver or not.

Fix this for the hash algorithm tests by getting the driver name from
the crypto_ahash or crypto_shash that actually got allocated.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1bc608b4 15-Oct-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: sm2 - remove unnecessary reset operations

This is an algorithm optimization. The reset operation when
setting the public key is repeated and redundant, so remove it.
At the same time, `sm2_ecc_os2ec()` is optimized to make the
function simpler and more in line with the Linux coding style.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7cd4ecd9 13-Oct-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'drivers-5.10-2020-10-12' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:
"Here are the driver updates for 5.10.

A few SCSI updates in here too, in coordination with Martin as they
depend on core block changes for the shared tag bitmap.

This contains:

- NVMe pull requests via Christoph:
- fix keep alive timer modification (Amit Engel)
- order the PCI ID list more sensibly (Andy Shevchenko)
- cleanup the open by controller helper (Chaitanya Kulkarni)
- use an xarray for the CSE log lookup (Chaitanya Kulkarni)
- support ZNS in nvmet passthrough mode (Chaitanya Kulkarni)
- fix nvme_ns_report_zones (Christoph Hellwig)
- add a sanity check to nvmet-fc (James Smart)
- fix interrupt allocation when too many polled queues are
specified (Jeffle Xu)
- small nvmet-tcp optimization (Mark Wunderlich)
- fix a controller refcount leak on init failure (Chaitanya
Kulkarni)
- misc cleanups (Chaitanya Kulkarni)
- major refactoring of the scanning code (Christoph Hellwig)

- MD updates via Song:
- Bug fixes in bitmap code, from Zhao Heming
- Fix a work queue check, from Guoqing Jiang
- Fix raid5 oops with reshape, from Song Liu
- Clean up unused code, from Jason Yan
- Discard improvements, from Xiao Ni
- raid5/6 page offset support, from Yufen Yu

- Shared tag bitmap for SCSI/hisi_sas/null_blk (John, Kashyap,
Hannes)

- null_blk open/active zone limit support (Niklas)

- Set of bcache updates (Coly, Dongsheng, Qinglang)"

* tag 'drivers-5.10-2020-10-12' of git://git.kernel.dk/linux-block: (78 commits)
md/raid5: fix oops during stripe resizing
md/bitmap: fix memory leak of temporary bitmap
md: fix the checking of wrong work queue
md/bitmap: md_bitmap_get_counter returns wrong blocks
md/bitmap: md_bitmap_read_sb uses wrong bitmap blocks
md/raid0: remove unused function is_io_in_chunk_boundary()
nvme-core: remove extra condition for vwc
nvme-core: remove extra variable
nvme: remove nvme_identify_ns_list
nvme: refactor nvme_validate_ns
nvme: move nvme_validate_ns
nvme: query namespace identifiers before adding the namespace
nvme: revalidate zone bitmaps in nvme_update_ns_info
nvme: remove nvme_update_formats
nvme: update the known admin effects
nvme: set the queue limits in nvme_update_ns_info
nvme: remove the 0 lba_shift check in nvme_update_ns_info
nvme: clean up the check for too large logic block sizes
nvme: freeze the queue over ->lba_shift updates
nvme: factor out a nvme_configure_metadata helper
...


39a5101f 13-Oct-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Allow DRBG testing through user-space af_alg
- Add tcrypt speed testing support for keyed hashes
- Add type-safe init/exit hooks for ahash

Algorithms:
- Mark arc4 as obsolete and pending for future removal
- Mark anubis, khazad, seed and tea as obsolete
- Improve boot-time xor benchmark
- Add OSCCA SM2 asymmetric cipher algorithm and use it for integrity

Drivers:
- Fixes and enhancement for XTS in caam
- Add support for XIP8001B hwrng in xiphera-trng
- Add RNG and hash support in sun8i-ce/sun8i-ss
- Allow imx-rngc to be used by kernel entropy pool
- Use crypto engine in omap-sham
- Add support for Ingenic X1830 with ingenic"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (205 commits)
X.509: Fix modular build of public_key_sm2
crypto: xor - Remove unused variable count in do_xor_speed
X.509: fix error return value on the failed path
crypto: bcm - Verify GCM/CCM key length in setkey
crypto: qat - drop input parameter from adf_enable_aer()
crypto: qat - fix function parameters descriptions
crypto: atmel-tdes - use semicolons rather than commas to separate statements
crypto: drivers - use semicolons rather than commas to separate statements
hwrng: mxc-rnga - use semicolons rather than commas to separate statements
hwrng: iproc-rng200 - use semicolons rather than commas to separate statements
hwrng: stm32 - use semicolons rather than commas to separate statements
crypto: xor - use ktime for template benchmarking
crypto: xor - defer load time benchmark to a later time
crypto: hisilicon/zip - fix the uninitalized 'curr_qm_qp_num'
crypto: hisilicon/zip - fix the return value when device is busy
crypto: hisilicon/zip - fix zero length input in GZIP decompress
crypto: hisilicon/zip - fix the uncleared debug registers
lib/mpi: Fix unused variable warnings
crypto: x86/poly1305 - Remove assignments with no effect
hwrng: npcm - modify readl to readb
...


3093e7c1 07-Oct-2020 Herbert Xu <herbert@gondor.apana.org.au>

X.509: Fix modular build of public_key_sm2

The sm2 code was split out of public_key.c in a way that breaks
modular builds. This patch moves the code back into the same file,
as the original motivation was to minimise ifdefs, which has nothing
to do with splitting the code out.

Fixes: 215525639631 ("X.509: support OSCCA SM2-with-SM3...")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

10b0f78a 06-Oct-2020 Nathan Chancellor <nathan@kernel.org>

crypto: xor - Remove unused variable count in do_xor_speed

Clang warns:

crypto/xor.c:101:4: warning: variable 'count' is uninitialized when used
here [-Wuninitialized]
count++;
^~~~~
crypto/xor.c:86:17: note: initialize the variable 'count' to silence
this warning
int i, j, count;
^
= 0
1 warning generated.

After the refactoring to use ktime that happened in this function, count
is only assigned, never read. Just remove the variable to get rid of the
warning.

Fixes: c055e3eae0f1 ("crypto: xor - use ktime for template benchmarking")
Link: https://github.com/ClangBuiltLinux/linux/issues/1171
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4f28945d 05-Oct-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

X.509: fix error return value on the failed path

When memory allocation fails, an appropriate return value
should be set.

Fixes: 215525639631 ("X.509: support OSCCA SM2-with-SM3 certificate verification")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c055e3ea 25-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: xor - use ktime for template benchmarking

Currently, we use the jiffies counter as a time source: we stare at
it until a HZ period elapses, and then perform as many XOR operations
as we can until another HZ period elapses, so that we can calculate
the throughput. This takes
longer than necessary, and depends on HZ, which is undesirable, since
HZ is system dependent.

Let's use the ktime interface instead, and use it to time a fixed
number of XOR operations, which can be done much faster, and makes
the time spent depend on the performance level of the system itself,
which is much more reasonable. To ensure that we have the resolution
we need even on systems with 32 kHz time sources, while not spending too
much time in the benchmark on a slow CPU, let's switch to 3 attempts of
800 repetitions each: that way, we will only misidentify algorithms that
perform within 10% of each other as the fastest if they are faster than
10 GB/s to begin with, which is not expected to occur on systems with
such coarse clocks.

On ThunderX2, I get the following results:

Before:

[72625.956765] xor: measuring software checksum speed
[72625.993104] 8regs : 10169.000 MB/sec
[72626.033099] 32regs : 12050.000 MB/sec
[72626.073095] arm64_neon: 11100.000 MB/sec
[72626.073097] xor: using function: 32regs (12050.000 MB/sec)

After:

[72599.650216] xor: measuring software checksum speed
[72599.651188] 8regs : 10491 MB/sec
[72599.652006] 32regs : 12345 MB/sec
[72599.652871] arm64_neon : 11402 MB/sec
[72599.652873] xor: using function: 32regs (12345 MB/sec)

Link: https://lore.kernel.org/linux-crypto/20200923182230.22715-3-ardb@kernel.org/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
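
The measurement pattern described above, a fixed amount of work timed with
ktime instead of counting iterations inside a jiffies window, can be
sketched as follows; the repetition count, the xor callback type and the
helper name are illustrative:

    #include <linux/kernel.h>
    #include <linux/ktime.h>
    #include <linux/math64.h>

    #define BENCH_REPS 800   /* per the commit text; illustrative here */

    /* Sketch: time a fixed number of repetitions and derive MB/s from
     * the best of three elapsed-time readings. */
    static u64 xor_speed_sketch(void (*xor_fn)(unsigned long, void *, const void *),
                                void *dst, const void *src, unsigned long bytes)
    {
        s64 best_ns = S64_MAX;
        ktime_t start;
        int attempt, i;

        for (attempt = 0; attempt < 3; attempt++) {
            start = ktime_get();
            for (i = 0; i < BENCH_REPS; i++)
                xor_fn(bytes, dst, src);
            best_ns = min_t(s64, best_ns,
                            ktime_to_ns(ktime_sub(ktime_get(), start)));
        }

        if (best_ns < 1)    /* coarse clocks; see the divide-error fix above */
            best_ns = 1;

        /* total bytes * 1000 / ns is roughly MB/s for nanosecond input */
        return div64_u64((u64)bytes * BENCH_REPS * 1000, best_ns);
    }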

524ccdbd 25-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: xor - defer load time benchmark to a later time

Currently, the XOR module performs its boot time benchmark at core
initcall time when it is built-in, to ensure that the RAID code can
make use of it when it is built-in as well.

Let's defer this to a later stage during the boot, to avoid impacting
the overall boot time of the system. Instead, just pick an arbitrary
implementation from the list, and use that as the preliminary default.

Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

21552563 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

X.509: support OSCCA SM2-with-SM3 certificate verification

The digital certificate format is based on the SM2 crypto algorithm
as specified in GM/T 0015-2012, published by the State Encryption
Management Bureau, China.

The method of generating Other User Information is defined as
ZA=H256(ENTLA || IDA || a || b || xG || yG || xA || yA), as also
specified in https://tools.ietf.org/html/draft-shen-sm2-ecdsa-02.

The x509 code now supports SM2-with-SM3 certificate verification.
Certificate verification requires ZA in addition to the tbs data,
and ZA in turn depends on the elliptic curve parameters and the
public key data, so the tbs data in the signature must be accessed
in order to calculate ZA. Finally, the digest of the signature is
calculated and the verification completed. The calculation process
of ZA is specified in GM/T 0009-2012 and GM/T 0003.2-2012.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

254f84f5 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

X.509: support OSCCA certificate parse

The digital certificate format is based on the SM2 crypto algorithm
as specified in GM/T 0015-2012, published by the State Encryption
Management Bureau, China.

This patch adds the OID object identifiers defined by OSCCA. The
x509 code now supports SM2-with-SM3 certificate parsing. Such
certificates use the standard elliptic curve public key, and the sm2
algorithm signs the hash generated by sm3.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8b805b97 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: sm2 - add SM2 test vectors to testmgr

Add testmgr test vectors for SM2 algorithm. These vectors come
from `openssl pkeyutl -sign` and libgcrypt.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2b403867 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: testmgr - Fix potential memory leak in test_akcipher_one()

When the 'key' allocation fails, 'req' is not released, which
causes a memory leak on this path. This patch adds a 'free_req'
label to solve this problem, and two new error values are added to
reflect the real reason for the error.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
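
A hedged sketch of the error-unwinding shape being described (names and
allocations are placeholders, not the actual testmgr code):

    #include <linux/slab.h>
    #include <linux/errno.h>

    /* Sketch: a later allocation failure must also release the request
     * allocated earlier, so a dedicated label unwinds it on that path. */
    static int run_one_test_sketch(void)
    {
        void *req, *key;
        int err = 0;

        req = kzalloc(64, GFP_KERNEL);      /* stands in for the request */
        if (!req)
            return -ENOMEM;

        key = kzalloc(128, GFP_KERNEL);     /* stands in for the test key */
        if (!key) {
            err = -ENOMEM;                  /* report the real reason */
            goto free_req;                  /* previously this path leaked req */
        }

        /* ... run the encrypt/decrypt/sign/verify test ... */

        kfree(key);
    free_req:
        kfree(req);
        return err;
    }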

a1f62c21 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: testmgr - support test with different ciphertext per encryption

Some asymmetric algorithms, such as SM2, produce a different
ciphertext on each encryption; let testmgr support testing such
algorithms.

In struct akcipher_testvec, leave c and c_size empty to skip the
comparison of the ciphertext, and instead compare the decrypted
plaintext with m to achieve the test purpose.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ea7ecb66 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: sm2 - introduce OSCCA SM2 asymmetric cipher algorithm

This new module implements the SM2 public key algorithm, published
by the State Encryption Management Bureau, China.
List of specifications for SM2 elliptic curve public key cryptography:

* GM/T 0003.1-2012
* GM/T 0003.2-2012
* GM/T 0003.3-2012
* GM/T 0003.4-2012
* GM/T 0003.5-2012

IETF: https://tools.ietf.org/html/draft-shen-sm2-ecdsa-02
oscca: http://www.oscca.gov.cn/sca/xxgk/2010-12/17/content_1002386.shtml
scctc: http://www.gmbz.org.cn/main/bzlb.html

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f4928287 20-Sep-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: sm3 - export crypto_sm3_final function

Both crypto_sm3_update and crypto_sm3_finup are already exported;
export crypto_sm3_final as well, to avoid having to use
crypto_sm3_finup(desc, NULL, 0, dgst) to calculate the hash in some
cases.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

77ebdabe 18-Sep-2020 Elena Petrova <lenaptr@google.com>

crypto: af_alg - add extra parameters for DRBG interface

Extend the user-space RNG interface:
1. Add entropy input via ALG_SET_DRBG_ENTROPY setsockopt option;
2. Add additional data input via sendmsg syscall.

This allows DRBG to be tested with test vectors, for example for the
purpose of CAVP testing, which otherwise isn't possible.

To prevent erroneous use of entropy input, it is hidden under
CRYPTO_USER_API_RNG_CAVP config option and requires CAP_SYS_ADMIN to
succeed.

Signed-off-by: Elena Petrova <lenaptr@google.com>
Acked-by: Stephan Müller <smueller@chronox.de>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
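
From userspace, the extended interface can be exercised roughly as in the
sketch below; the DRBG name, entropy length and missing error handling are
illustrative, ALG_SET_DRBG_ENTROPY comes from the uapi header extended by
this change, and the call only succeeds with CRYPTO_USER_API_RNG_CAVP
enabled and CAP_SYS_ADMIN:

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    #ifndef SOL_ALG
    #define SOL_ALG 279
    #endif

    int main(void)
    {
        struct sockaddr_alg sa = {
            .salg_family = AF_ALG,
            .salg_type   = "rng",
            .salg_name   = "drbg_nopr_hmac_sha256",  /* illustrative DRBG */
        };
        unsigned char entropy[48] = { 0 };  /* test-vector entropy input */
        unsigned char out[16];
        int tfm, op;

        tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(tfm, (struct sockaddr *)&sa, sizeof(sa));

        /* 1. Feed deterministic entropy via the new setsockopt option. */
        setsockopt(tfm, SOL_ALG, ALG_SET_DRBG_ENTROPY, entropy, sizeof(entropy));

        op = accept(tfm, NULL, NULL);

        /* 2. Additional data would be supplied via sendmsg() here. */
        read(op, out, sizeof(out));

        close(op);
        close(tfm);
        return 0;
    }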

fde2f57c 17-Sep-2020 Corentin Labbe <clabbe@baylibre.com>

crypto: proc - Remove some useless space-only lines

Some lines contain only spaces; remove them.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4f86ff55 20-Aug-2020 Yufen Yu <yuyufen@huawei.com>

md/raid6: let async recovery function support different page offset

For now, the asynchronous raid6 recovery functions require a common
offset for the pages. But we expect them to support different page
offsets after introducing stripe shared pages. Do that by simply
adding a page offset wherever each page address is referenced. Then,
replace the old interface with the new one in raid6 and raid6test.

Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>

d69454bc 20-Aug-2020 Yufen Yu <yuyufen@huawei.com>

md/raid6: let syndrome computor support different page offset

For now, the syndrome compute functions require a common offset in the
pages array. However, we expect them to support different offsets once
shared pages are used in the following patches. Simply convert them by
adding a page offset wherever each page address is referenced.

Since the only callers of async_gen_syndrome() and async_syndrome_val()
are in raid6, we don't want to preserve the old interface but modify the
interface directly. After that, replace the old interfaces with the new
ones in raid6 and raid6test.

Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>

29bcff78 20-Aug-2020 Yufen Yu <yuyufen@huawei.com>

md/raid5: add new xor function to support different page offset

raid5 calls async_xor() and async_xor_val() to compute xor.
For now, both of them require a common src/dst page offset. But
we want them to support different src/dst page offsets for the
upcoming shared-page work.

Here, add two new functions, async_xor_offs() and async_xor_val_offs(),
corresponding to async_xor() and async_xor_val() respectively.

Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>

1674aea5 11-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: Kconfig - mark unused ciphers as obsolete

We have a few interesting pieces in our cipher museum, which are never
used internally, and were only ever provided as generic C implementations.

Unfortunately, we cannot simply remove this code, as we cannot be sure
that it is not being used via the AF_ALG socket API, however unlikely.

So let's mark the Anubis, Khazad, SEED and TEA algorithms as obsolete,
which means they can only be enabled in the build if the socket API is
enabled in the first place.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5f254dd4 01-Sep-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: cbc - Remove cbc.h

Now that crypto/cbc.h is only used by the generic cbc template,
we can merge it back into the CBC code.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9ace6771 31-Aug-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: arc4 - mark ecb(arc4) skcipher as obsolete

Cryptographic algorithms may have a lifespan that is significantly
shorter than Linux's, and so we need to start phasing out algorithms
that are known to be broken, and are no longer fit for general use.

RC4 (or arc4) is a good example here: there are a few areas where its
use is still somewhat acceptable, e.g., for interoperability with legacy
wifi hardware that can only use WEP or TKIP data encryption, but that
should not imply that, for instance, use of RC4 based EAP-TLS by the WPA
supplicant for negotiating TKIP keys is equally acceptable, or that RC4
should remain available as a general purpose cryptographic transform for
all in-kernel and user space clients.

Now that all in-kernel users that need to retain support have moved to
the arc4 library interface, and the known users of ecb(arc4) via the
socket API (iwd [0] and libell [1][2]) have been updated to switch to a
local implementation, we can take the next step, and mark the ecb(arc4)
skcipher as obsolete, and only provide it if the socket API is enabled in
the first place, as well as provide the option to disable all algorithms
that have been marked as obsolete.

[0] https://git.kernel.org/pub/scm/network/wireless/iwd.git/commit/?id=1db8a85a60c64523
[1] https://git.kernel.org/pub/scm/libs/ell/ell.git/commit/?id=53482ce421b727c2
[2] https://git.kernel.org/pub/scm/libs/ell/ell.git/commit/?id=7f6a137809d42f6b

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e43327c7 30-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:

- fix regression in af_alg that affects iwd

- restore polling delay in qat

- fix double free in ingenic on error path

- fix potential build failure in sa2ul due to missing Kconfig dependency

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: af_alg - Work around empty control messages without MSG_MORE
crypto: sa2ul - add Kconfig selects to fix build error
crypto: ingenic - Drop kfree for memory allocated with devm_kzalloc
crypto: qat - add delay before polling mailbox


e73d340d 18-Aug-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: ahash - Add init_tfm/exit_tfm

This patch adds the type-safe init_tfm/exit_tfm functions to the
ahash interface. This is meant to replace the unsafe cra_init and
cra_exit interface.
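
A minimal sketch of a driver-side user (all example_* names are made up;
the hook prototypes should be checked against <crypto/hash.h>):

	#include <linux/err.h>
	#include <crypto/internal/hash.h>

	struct example_ctx {
		struct crypto_shash *fallback;
	};

	static int example_init_tfm(struct crypto_ahash *tfm)
	{
		struct example_ctx *ctx = crypto_ahash_ctx(tfm);

		/* Type-safe setup; no casting from crypto_tfm as with cra_init. */
		ctx->fallback = crypto_alloc_shash("sha256", 0, 0);
		return PTR_ERR_OR_ZERO(ctx->fallback);
	}

	static void example_exit_tfm(struct crypto_ahash *tfm)
	{
		struct example_ctx *ctx = crypto_ahash_ctx(tfm);

		crypto_free_shash(ctx->fallback);
	}

	static struct ahash_alg example_alg = {
		/* .init/.update/.final/.digest and .halg omitted in this sketch */
		.init_tfm = example_init_tfm,
		.exit_tfm = example_exit_tfm,
	};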

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c195d66a 27-Aug-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: af_alg - Work around empty control messages without MSG_MORE

The iwd daemon uses libell which sets up the skcipher operation with
two separate control messages. As the first control message is sent
without MSG_MORE, it is interpreted as an empty request.

While libell should be fixed to use MSG_MORE where appropriate, this
patch works around the bug in the kernel so that existing binaries
continue to work.

We will print a warning however.

A separate issue is that the new kernel code no longer allows the
control message to be sent twice within the same request. This
restriction is obviously incompatible with what iwd was doing (first
setting an IV and then sending the real control message). This
patch changes the kernel so that this is explicitly allowed.

Reported-by: Caleb Jorden <caljorden@hotmail.com>
Fixes: f3c802a1f300 ("crypto: algif_aead - Only wake up when...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

df561f66 23-Aug-2020 Gustavo A. R. Silva <gustavoars@kernel.org>

treewide: Use fallthrough pseudo-keyword

Replace the existing /* fall through */ comments and their variants with
the new pseudo-keyword macro fallthrough[1]. Also, remove fall-through
markings where they are unnecessary.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>

8afa25aa 10-Aug-2020 Ira Weiny <ira.weiny@intel.com>

crypto: hash - Remove unused async iterators

Revert "crypto: hash - Add real ahash walk interface"
This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.

The callers of the functions added by that commit were removed in
commit ab8085c130ed, so they are now unused.

Remove these unused interfaces.

Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ba974adb 04-Aug-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: tcrypt - Add support for hash speed testing with keys

Currently if you speed test a hash that requires a key you'll get an
error because tcrypt does not set a key by default. This patch
allows a key to be set using the new module parameter klen.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cbdad1f2 31-Jul-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algif_aead - Do not set MAY_BACKLOG on the async path

The async path cannot use MAY_BACKLOG because it is not meant to
block, which is what MAY_BACKLOG does. On the other hand, both
the sync and async paths can make use of MAY_SLEEP.

Fixes: 83094e5e9e49 ("crypto: af_alg - add async support to...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2a05b029 31-Jul-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algif_skcipher - EBUSY on aio should be an error

I removed the MAY_BACKLOG flag on the aio path a while ago but
the error check still incorrectly interpreted EBUSY as success.
This may cause the submitter to wait for a request that will never
complete.

Fixes: dad419970637 ("crypto: algif_skcipher - Do not set...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

129a4dba 30-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: tcrypt - delete duplicated words in messages

Drop the doubled word "failed" in pr_err() messages.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

40a3af45 30-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: engine - delete duplicated word

Drop the doubled word "a".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

71952d78 30-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: crct10dif_generic - fix duplicated words

Change the doubled word "at" to "at a".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

743b9150 30-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: crc32c_generic - delete and fix duplicated words

Drop the doubled word "the".
Change "at at" to "at a".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4eb57bcd 30-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: algif_aead - delete duplicated word

Drop the doubled word "is".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0c3dc787 19-Aug-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algapi - Remove skbuff.h inclusion

The header file algapi.h includes skbuff.h unnecessarily since
all we need is a forward declaration for struct sk_buff. This
patch removes that inclusion.

Unfortunately skbuff.h pulls in a lot of things and drivers over
the years have come to rely on it so this patch adds a lot of
missing inclusions that result from this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1dbb920e 30-Jul-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algapi - Move crypto_yield into internal.h

This patch moves crypto_yield into internal.h as it's only used
by internal code such as skcipher. It also adds a missing inclusion
of sched.h which is required for cond_resched.

The header files in internal.h have been cleaned up to remove some
ancient junk and add some more specific inclusions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d9361cb2 14-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fix from Herbert Xu:
"This fixes a regression in af_alg"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: algif_aead - fix uninitialized ctx->init


21dfbcd1 12-Aug-2020 Ondrej Mosnacek <omosnace@redhat.com>

crypto: algif_aead - fix uninitialized ctx->init

In skcipher_accept_parent_nokey() the whole af_alg_ctx structure is
cleared by memset() after allocation, so add such memset() also to
aead_accept_parent_nokey() so that the new "init" field is also
initialized to zero. Without that the initial ctx->init checks might
randomly return true and cause errors.

While there, also remove the redundant zero assignments in both
functions.

Found via libkcapi testsuite.

Cc: Stephan Mueller <smueller@chronox.de>
Fixes: f3c802a1f300 ("crypto: algif_aead - Only wake up when ctx->more is zero")
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

453431a5 07-Aug-2020 Waiman Long <longman@redhat.com>

mm, treewide: rename kzfree() to kfree_sensitive()

As said by Linus:

A symmetric naming is only helpful if it implies symmetries in use.
Otherwise it's actively misleading.

In "kzalloc()", the z is meaningful and an important part of what the
caller wants.

In "kzfree()", the z is actively detrimental, because maybe in the
future we really _might_ want to use that "memfill(0xdeadbeef)" or
something. The "zero" part of the interface isn't even _relevant_.

The main reason that kzfree() exists is to clear sensitive information
that should not be leaked to other future users of the same memory
objects.

Rename kzfree() to kfree_sensitive() to follow the example of the recently
added kvfree_sensitive() and make the intention of the API more explicit.
In addition, memzero_explicit() is used to clear the memory to make sure
that it won't get optimized away by the compiler.
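
For illustration, a caller after the rename might look like this
(the structure is hypothetical):

	#include <linux/slab.h>

	struct example_key {
		u8 material[32];
	};

	static void example_drop_key(struct example_key *key)
	{
		/* Clears the object with memzero_explicit() before freeing it,
		 * so the key material cannot leak to later users of the memory. */
		kfree_sensitive(key);
	}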

The renaming is done by using the command sequence:

git grep -w --name-only kzfree |\
xargs sed -i 's/kzfree/kfree_sensitive/'

followed by some editing of the kfree_sensitive() kerneldoc and adding
a kzfree backward compatibility macro in slab.h.

[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

6d2b84a4 06-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'sched-fifo-2020-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull sched/fifo updates from Ingo Molnar:
"This adds the sched_set_fifo*() encapsulation APIs to remove static
priority level knowledge from non-scheduler code.

The three APIs for non-scheduler code to set SCHED_FIFO are:

- sched_set_fifo()
- sched_set_fifo_low()
- sched_set_normal()

These are two FIFO priority levels: default (high), and a 'low'
priority level, plus sched_set_normal() to set the policy back to
non-SCHED_FIFO.

Since the changes affect a lot of non-scheduler code, we kept this in
a separate tree"

* tag 'sched-fifo-2020-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
sched,tracing: Convert to sched_set_fifo()
sched: Remove sched_set_*() return value
sched: Remove sched_setscheduler*() EXPORTs
sched,psi: Convert to sched_set_fifo_low()
sched,rcutorture: Convert to sched_set_fifo_low()
sched,rcuperf: Convert to sched_set_fifo_low()
sched,locktorture: Convert to sched_set_fifo()
sched,irq: Convert to sched_set_fifo()
sched,watchdog: Convert to sched_set_fifo()
sched,serial: Convert to sched_set_fifo()
sched,powerclamp: Convert to sched_set_fifo()
sched,ion: Convert to sched_set_normal()
sched,powercap: Convert to sched_set_fifo*()
sched,spi: Convert to sched_set_fifo*()
sched,mmc: Convert to sched_set_fifo*()
sched,ivtv: Convert to sched_set_fifo*()
sched,drm/scheduler: Convert to sched_set_fifo*()
sched,msm: Convert to sched_set_fifo*()
sched,psci: Convert to sched_set_fifo*()
sched,drbd: Convert to sched_set_fifo*()
...


47ec5303 05-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from David Miller:

1) Support 6Ghz band in ath11k driver, from Rajkumar Manoharan.

2) Support UDP segmentation in core TSO code, from Eric Dumazet.

3) Allow flashing different flash images in cxgb4 driver, from Vishal
Kulkarni.

4) Add drop frames counter and flow status to tc flower offloading,
from Po Liu.

5) Support n-tuple filters in cxgb4, from Vishal Kulkarni.

6) Various new indirect call avoidance, from Eric Dumazet and Brian
Vazquez.

7) Fix BPF verifier failures on 32-bit pointer arithmetic, from
Yonghong Song.

8) Support querying and setting hardware address of a port function via
devlink, use this in mlx5, from Parav Pandit.

9) Support hw ipsec offload on bonding slaves, from Jarod Wilson.

10) Switch qca8k driver over to phylink, from Jonathan McDowell.

11) In bpftool, show list of processes holding BPF FD references to
maps, programs, links, and btf objects. From Andrii Nakryiko.

12) Several conversions over to generic power management, from Vaibhav
Gupta.

13) Add support for SO_KEEPALIVE et al. to bpf_setsockopt(), from Dmitry
Yakunin.

14) Various https url conversions, from Alexander A. Klimov.

15) Timestamping and PHC support for mscc PHY driver, from Antoine
Tenart.

16) Support bpf iterating over tcp and udp sockets, from Yonghong Song.

17) Support 5GBASE-T i40e NICs, from Aleksandr Loktionov.

18) Add kTLS RX HW offload support to mlx5e, from Tariq Toukan.

19) Fix the ->ndo_start_xmit() return type to be netdev_tx_t in several
drivers. From Luc Van Oostenryck.

20) XDP support for xen-netfront, from Denis Kirjanov.

21) Support receive buffer autotuning in MPTCP, from Florian Westphal.

22) Support EF100 chip in sfc driver, from Edward Cree.

23) Add XDP support to mvpp2 driver, from Matteo Croce.

24) Support MPTCP in sock_diag, from Paolo Abeni.

25) Commonize UDP tunnel offloading code by creating udp_tunnel_nic
infrastructure, from Jakub Kicinski.

26) Several pci_ --> dma_ API conversions, from Christophe JAILLET.

27) Add FLOW_ACTION_POLICE support to mlxsw, from Ido Schimmel.

28) Add SK_LOOKUP bpf program type, from Jakub Sitnicki.

29) Refactor a lot of networking socket option handling code in order to
avoid set_fs() calls, from Christoph Hellwig.

30) Add rfc4884 support to icmp code, from Willem de Bruijn.

31) Support TBF offload in dpaa2-eth driver, from Ioana Ciornei.

32) Support XDP_REDIRECT in qede driver, from Alexander Lobakin.

33) Support PCI relaxed ordering in mlx5 driver, from Aya Levin.

34) Support TCP syncookies in MPTCP, from Florian Westphal.

35) Fix several tricky cases of PMTU handling wrt. bridging, from Stefano
Brivio.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2056 commits)
net: thunderx: initialize VF's mailbox mutex before first usage
usb: hso: remove bogus check for EINPROGRESS
usb: hso: no complaint about kmalloc failure
hso: fix bailout in error case of probe
ip_tunnel_core: Fix build for archs without _HAVE_ARCH_IPV6_CSUM
selftests/net: relax cpu affinity requirement in msg_zerocopy test
mptcp: be careful on subflow creation
selftests: rtnetlink: make kci_test_encap() return sub-test result
selftests: rtnetlink: correct the final return value for the test
net: dsa: sja1105: use detected device id instead of DT one on mismatch
tipc: set ub->ifindex for local ipv6 address
ipv6: add ipv6_dev_find()
net: openvswitch: silence suspicious RCU usage warning
Revert "vxlan: fix tos value before xmit"
ptp: only allow phase values lower than 1 period
farsync: switch from 'pci_' to 'dma_' API
wan: wanxl: switch from 'pci_' to 'dma_' API
hv_netvsc: do not use VF device if link is down
dpaa2-eth: Fix passing zero to 'PTR_ERR' warning
net: macb: Properly handle phylink on at91sam9x
...


2324d50d 04-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'docs-5.9' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
"It's been a busy cycle for documentation - hopefully the busiest for a
while to come. Changes include:

- Some new Chinese translations

- Progress on the battle against double words words and non-HTTPS
URLs

- Some block-mq documentation

- More RST conversions from Mauro. At this point, that task is
essentially complete, so we shouldn't see this kind of churn again
for a while. Unless we decide to switch to asciidoc or
something...:)

- Lots of typo fixes, warning fixes, and more"

* tag 'docs-5.9' of git://git.lwn.net/linux: (195 commits)
scripts/kernel-doc: optionally treat warnings as errors
docs: ia64: correct typo
mailmap: add entry for <alobakin@marvell.com>
doc/zh_CN: add cpu-load Chinese version
Documentation/admin-guide: tainted-kernels: fix spelling mistake
MAINTAINERS: adjust kprobes.rst entry to new location
devices.txt: document rfkill allocation
PCI: correct flag name
docs: filesystems: vfs: correct flag name
docs: filesystems: vfs: correct sync_mode flag names
docs: path-lookup: markup fixes for emphasis
docs: path-lookup: more markup fixes
docs: path-lookup: fix HTML entity mojibake
CREDITS: Replace HTTP links with HTTPS ones
docs: process: Add an example for creating a fixes tag
doc/zh_CN: add Chinese translation prefer section
doc/zh_CN: add clearing-warn-once Chinese version
doc/zh_CN: add admin-guide index
doc:it_IT: process: coding-style.rst: Correct __maybe_unused compiler label
futex: MAINTAINERS: Re-add selftests directory
...


ab5c60b7 03-Aug-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Add support for allocating transforms on a specific NUMA Node
- Introduce the flag CRYPTO_ALG_ALLOCATES_MEMORY for storage users

Algorithms:
- Drop PMULL based ghash on arm64
- Fixes for building with clang on x86
- Add sha256 helper that does the digest in one go
- Add SP800-56A rev 3 validation checks to dh

Drivers:
- Permit users to specify NUMA node in hisilicon/zip
- Add support for i.MX6 in imx-rngc
- Add sa2ul crypto driver
- Add BA431 hwrng driver
- Add Ingenic JZ4780 and X1000 hwrng driver
- Spread IRQ affinity in inside-secure and marvell/cesa"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (157 commits)
crypto: sa2ul - Fix inconsistent IS_ERR and PTR_ERR
hwrng: core - remove redundant initialization of variable ret
crypto: x86/curve25519 - Remove unused carry variables
crypto: ingenic - Add hardware RNG for Ingenic JZ4780 and X1000
dt-bindings: RNG: Add Ingenic RNG bindings.
crypto: caam/qi2 - add module alias
crypto: caam - add more RNG hw error codes
crypto: caam/jr - remove incorrect reference to caam_jr_register()
crypto: caam - silence .setkey in case of bad key length
crypto: caam/qi2 - create ahash shared descriptors only once
crypto: caam/qi2 - fix error reporting for caam_hash_alloc
crypto: caam - remove deadcode on 32-bit platforms
crypto: ccp - use generic power management
crypto: xts - Replace memcpy() invocation with simple assignment
crypto: marvell/cesa - irq balance
crypto: inside-secure - irq balance
crypto: ecc - SP800-56A rev 3 local public key validation
crypto: dh - SP800-56A rev 3 local public key validation
crypto: dh - check validity of Z before export
lib/mpi: Add mpi_sub_ui()
...


958ea4e0 21-Jul-2020 Ard Biesheuvel <ardb@kernel.org>

crypto: xts - Replace memcpy() invocation with simple assignment

Colin reports that the memcpy() call in xts_cts_final() triggers an
"Overlapping buffer in memory copy" warning in Coverity, which is a
false positive, given that tail is guaranteed to be smaller than or
equal to the distance between source and destination.

However, given that any additional bytes that we copy will be ignored
anyway, we can simply copy XTS_BLOCK_SIZE unconditionally, which means
we can use struct assignment of the array members instead, which is
likely to be more efficient as well.

Addresses-Coverity: ("Overlapping buffer in memory copy")
Fixes: 8083b1bf8163 ("crypto: xts - add support for ciphertext stealing")
Reported-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6914dd53 20-Jul-2020 Stephan Müller <smueller@chronox.de>

crypto: ecc - SP800-56A rev 3 local public key validation

After the generation of a local public key, SP800-56A rev 3 section
5.6.2.1.3 mandates a validation of that key with a full validation
compliant to section 5.6.2.3.3.

Only if the full validation passes is the key allowed to be used.

The patch adds the full key validation compliant to 5.6.2.3.3 and
performs the required check on the generated public key.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2ed5ba61 20-Jul-2020 Stephan Müller <smueller@chronox.de>

crypto: dh - SP800-56A rev 3 local public key validation

After the generation of a local public key, SP800-56A rev 3 section
5.6.2.1.3 mandates a validation of that key with a full validation
compliant to section 5.6.2.3.1.

Only if the full validation passes is the key allowed to be used.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

90fa9ae5 20-Jul-2020 Stephan Müller <smueller@chronox.de>

crypto: dh - check validity of Z before export

SP800-56A rev3 section 5.7.1.1 step 2 mandates that the validity of the
calculated shared secret is verified before the data is returned to the
caller. This patch adds the validation check.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Acked-by: Neil Horman <nhorman@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e7d2b41e 20-Jul-2020 Stephan Müller <smueller@chronox.de>

crypto: ecdh - check validity of Z before export

SP800-56A rev3 section 5.7.1.2 step 2 mandates that the validity of the
calculated shared secret is verified before the data is returned to the
caller. Thus, the order of the export function and the validity check is
reversed, so that the check runs first. In addition, the sensitive
variables priv and rand_z are zeroized.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Neil Horman <nhorman@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a57066b1 25-Jul-2020 David S. Miller <davem@davemloft.net>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

The UDP reuseport conflict was a little bit tricky.

The net-next code, via bpf-next, extracted the reuseport handling
into a helper so that the BPF sk lookup code could invoke it.

At the same time, the logic for reuseport handling of unconnected
sockets changed via commit efc6b6f6c3113e8b203b9debfb72d81e0f3dcace
which changed the logic to carry on the reuseport result into the
rest of the lookup loop if we do not return immediately.

This requires moving the reuseport_has_conns() logic into the callers.

While we are here, get rid of inline directives as they do not belong
in foo.c files.

The other changes were cases of more straightforward overlapping
modifications.

Signed-off-by: David S. Miller <davem@davemloft.net>


a7b75c5a 23-Jul-2020 Christoph Hellwig <hch@lst.de>

net: pass a sockptr_t into ->setsockopt

Rework the remaining setsockopt code to pass a sockptr_t instead of a
plain user pointer. This removes the last remaining set_fs(KERNEL_DS)
outside of architecture specific code.
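
A minimal sketch of what a converted handler looks like (the protocol
and option are made up):

	#include <linux/sockptr.h>
	#include <net/sock.h>

	static int example_setsockopt(struct sock *sk, int level, int optname,
				      sockptr_t optval, unsigned int optlen)
	{
		int val;

		if (optlen < sizeof(val))
			return -EINVAL;
		/* copy_from_sockptr() handles both kernel and user pointers,
		 * so no set_fs(KERNEL_DS) is needed for kernel-side callers. */
		if (copy_from_sockptr(&val, optval, sizeof(val)))
			return -EFAULT;
		/* ... apply 'val' to the socket ... */
		return 0;
	}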

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stefan Schmidt <stefan@datenfreihafen.org> [ieee802154]
Acked-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

e493b31a 19-Jul-2020 Randy Dunlap <rdunlap@infradead.org>

crypto: testmgr - delete duplicated words

Delete the doubled word "from" in multiple places.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9332a9e7 19-Jul-2020 Alexander A. Klimov <grandmaster@al2klimov.de>

crypto: Replace HTTP links with HTTPS ones

Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `\bxmlns\b`:
        For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
          If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
            If both the HTTP and HTTPS versions
            return 200 OK and serve the same content:
              Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3f257191 14-Jul-2020 Daniel Jordan <daniel.m.jordan@oracle.com>

padata: fold padata_alloc_possible() into padata_alloc()

There's no reason to have two interfaces when there's only one caller.
Removing _possible saves text and simplifies future changes.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

350ef051 14-Jul-2020 Daniel Jordan <daniel.m.jordan@oracle.com>

padata: remove stop function

padata_stop() has two callers and is unnecessary in both cases. When
pcrypt calls it before padata_free(), it's being unloaded so there are
no outstanding padata jobs[0]. When __padata_free() calls it, it's
either along the same path or else pcrypt initialization failed, which
of course means there are also no outstanding jobs.

Removing it simplifies padata and saves text.

[0] https://lore.kernel.org/linux-crypto/20191119225017.mjrak2fwa5vccazl@gondor.apana.org.au/

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

bd25b488 14-Jul-2020 Daniel Jordan <daniel.m.jordan@oracle.com>

padata: remove start function

padata_start() is only used right after pcrypt allocates an instance
with all possible CPUs, when PADATA_INVALID can't happen, so there's no
need for a separate "start" step. It can be done during allocation to
save text, make using padata easier, and avoid unneeded calls in the
future.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a44d9e72 17-Jul-2020 Christoph Hellwig <hch@lst.de>

net: make ->{get,set}sockopt in proto_ops optional

Just check for a NULL method instead of wiring up
sock_no_{get,set}sockopt.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

e456ef6a 10-Jul-2020 Eric Biggers <ebiggers@google.com>

crypto: lrw - prefix function and struct names with "lrw"

Overly-generic names can cause problems like naming collisions,
confusing crash reports, and reduced grep-ability. E.g. see
commit d099ea6e6fde ("crypto - Avoid free() namespace collision").

Clean this up for the lrw template by prefixing the names with "lrw_".

(I didn't use "crypto_lrw_" instead because that seems overkill.)

Also constify the tfm context in a couple places.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a874f591 10-Jul-2020 Eric Biggers <ebiggers@google.com>

crypto: xts - prefix function and struct names with "xts"

Overly-generic names can cause problems like naming collisions,
confusing crash reports, and reduced grep-ability. E.g. see
commit d099ea6e6fde ("crypto - Avoid free() namespace collision").

Clean this up for the xts template by prefixing the names with "xts_".

(I didn't use "crypto_xts_" instead because that seems overkill.)

Also constify the tfm context in a couple places, and make
xts_free_instance() use the instance context structure so that it
doesn't just assume the crypto_skcipher_spawn is at the beginning.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2eb27c11 10-Jul-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - add NEED_FALLBACK to INHERITED_FLAGS

CRYPTO_ALG_NEED_FALLBACK is handled inconsistently. When it's requested
to be clear, some templates propagate that request to child algorithms,
while others don't.

It's apparently desired for NEED_FALLBACK to be propagated, to avoid
deadlocks where a module tries to load itself while it's being
initialized, and to avoid unnecessarily complex fallback chains where we
have e.g. cbc-aes-$driver falling back to cbc(aes-$driver) where
aes-$driver itself falls back to aes-generic, instead of cbc-aes-$driver
simply falling back to cbc(aes-generic). There have been a number of
fixes to this effect:

commit 89027579bc6c ("crypto: xts - Propagate NEED_FALLBACK bit")
commit d2c2a85cfe82 ("crypto: ctr - Propagate NEED_FALLBACK bit")
commit e6c2e65c70a6 ("crypto: cbc - Propagate NEED_FALLBACK bit")

But it seems that other templates can have the same problems too.

To avoid this whack-a-mole, just add NEED_FALLBACK to INHERITED_FLAGS so
that it's always inherited.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7bcb2c99 10-Jul-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - use common mechanism for inheriting flags

The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
template is instantiated, the template will have CRYPTO_ALG_ASYNC set if
any of the algorithms it uses has CRYPTO_ALG_ASYNC set.

We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
"inherited" in the same way. This is difficult because the handling of
CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:

- Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
have these inheritance semantics.

- Add crypto_algt_inherited_mask(), for use by template ->create()
methods. It returns any of these flags that the user asked to be
unset and thus must be passed in the 'mask' to crypto_grab_*().

- Also modify crypto_check_attr_type() to handle computing the 'mask'
so that most templates can just use this.

- Make crypto_grab_*() propagate these flags to the template instance
being created so that templates don't have to do this themselves.

Make crypto/simd.c propagate these flags too, since it "wraps" another
algorithm, similar to a template.

Based on a patch by Mikulas Patocka <mpatocka@redhat.com>
(https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).
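
A condensed sketch of what a template ->create() then looks like
(example_create is made up; instance allocation and registration are
omitted):

	#include <crypto/algapi.h>

	static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
	{
		u32 mask;
		int err;

		/* Computes the mask of inherited flags the user asked to be
		 * clear, replacing the old open-coded CRYPTO_ALG_ASYNC handling. */
		err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
		if (err)
			return err;

		/* The underlying algorithm would then be grabbed with
		 * crypto_grab_skcipher(..., mask), which propagates the
		 * inherited flags to the new instance. */
		return -ENOSYS;	/* placeholder for the real instance setup */
	}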

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4688111e 10-Jul-2020 Eric Biggers <ebiggers@google.com>

crypto: seqiv - remove seqiv_create()

seqiv_create() is pointless because it just checks that the template is
being instantiated as an AEAD, then calls seqiv_aead_create(). But
seqiv_aead_create() does the exact same check, via aead_geniv_alloc().

Just remove seqiv_create() and use seqiv_aead_create() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e72b48c5 10-Jul-2020 Eric Biggers <ebiggers@google.com>

crypto: geniv - remove unneeded arguments from aead_geniv_alloc()

The type and mask arguments to aead_geniv_alloc() are always 0, so
remove them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6cbba1f9 15-Jul-2020 Wei Yongjun <weiyongjun1@huawei.com>

keys: asymmetric: fix error return code in software_key_query()

Fix to return negative error code -ENOMEM from kmalloc() error handling
case instead of 0, as done elsewhere in this function.

Fixes: f1774cb8956a ("X.509: parse public key parameters from x509 for akcipher")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

7bc13b5b 05-Jul-2020 Barry Song <song.bao.hua@hisilicon.com>

crypto: api - permit users to specify numa node of acomp hardware

On a NUMA Linux server there may be multiple (de)compressors, each local
or remote to a given NUMA node. Some drivers will automatically use the
(de)compressor near the CPU calling acomp_alloc(). However, that is not
necessarily correct, because the users who send the acomp_req may be on a
different NUMA node than the CPU which allocated the acomp.

Just as the kernel has both kmalloc() and kmalloc_node(), crypto can
offer the same kind of support.
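
A minimal usage sketch of the node-aware allocator added here:

	#include <crypto/acompress.h>

	static struct crypto_acomp *example_alloc_near(int nid)
	{
		/* Prefer a (de)compressor local to NUMA node 'nid',
		 * analogous to kmalloc_node() vs. kmalloc(). */
		return crypto_alloc_acomp_node("deflate", 0, 0, nid);
	}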

Cc: Seth Jennings <sjenning@redhat.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

662bb52f 01-Jul-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: af_alg - Fix regression on empty requests

Some user-space programs rely on crypto requests that have no
control metadata. This broke when a check was added to require
the presence of control metadata with the ctx->init flag.

This patch fixes the regression by setting ctx->init as long as
one sendmsg(2) has been made, with or without a control message.

Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: f3c802a1f300 ("crypto: algif_aead - Only wake up when...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0efaaa86 15-Jun-2020 Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

docs: crypto: convert asymmetric-keys.txt to ReST

This file is almost compatible with ReST. Just minor changes
were needed:

- Adjust document and titles markups;
- Adjust numbered list markups;
- Add a comments markup for the Contents section;
- Add markups for literal blocks.

Acked-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Link: https://lore.kernel.org/r/c2275ea94e0507a01b020ab66dfa824d8b1c2545.1592203650.git.mchehab+huawei@kernel.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

f3c802a1 29-May-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algif_aead - Only wake up when ctx->more is zero

AEAD does not support partial requests so we must not wake up
while ctx->more is set. In order to distinguish between the
case of no data sent yet and a zero-length request, a new init
flag has been added to ctx.

SKCIPHER has also been modified to ensure that at least a block
of data is available if there is more data to come.

Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

34c86f4c 08-Jun-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: af_alg - fix use-after-free in af_alg_accept() due to bh_lock_sock()

The locking in af_alg_release_parent is broken as the BH socket
lock can only be taken if there is a code-path to handle the case
where the lock is owned by process-context. Instead of adding
such handling, we can fix this by changing the ref counts to
atomic_t.

This patch also modifies the main refcnt to include both normal
and nokey sockets. This way we don't have to fudge the nokey
ref count when a socket changes from nokey to normal.

Credits go to Mauricio Faria de Oliveira who diagnosed this bug
and sent a patch for it:

https://lore.kernel.org/linux-crypto/20200605161657.535043-1-mfo@canonical.com/

Reported-by: Brian Moyles <bmoyles@netflix.com>
Reported-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Fixes: 37f96694cf73 ("crypto: af_alg - Use bh_lock_sock in...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dbc6d0d5 20-Apr-2020 Peter Zijlstra <peterz@infradead.org>

sched,crypto: Convert to sched_set_fifo*()

Because SCHED_FIFO is a broken scheduler model (see previous patches),
take away the priority field; the kernel can't possibly make an
informed decision.

Use sched_set_fifo() to request SCHED_FIFO and delegate
actual priority selection to userspace. Effectively no change in
behaviour.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>

819966c0 07-Jun-2020 Stephan Müller <smueller@chronox.de>

crypto: drbg - always try to free Jitter RNG instance

The Jitter RNG is unconditionally allocated as a seed source following
patch 97f2650e5040. Thus, the instance must always be deallocated.

Reported-by: syzbot+2e635807decef724a1fa@syzkaller.appspotmail.com
Fixes: 97f2650e5040 ("crypto: drbg - always seeded with SP800-90B ...")
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

77251e41 04-Jun-2020 Eric Biggers <ebiggers@google.com>

crypto: algboss - don't wait during notifier callback

When a crypto template needs to be instantiated, CRYPTO_MSG_ALG_REQUEST
is sent to crypto_chain. cryptomgr_schedule_probe() handles this by
starting a thread to instantiate the template, then waiting for this
thread to complete via crypto_larval::completion.

This can deadlock because instantiating the template may require loading
modules, and this (apparently depending on userspace) may need to wait
for the crc-t10dif module (lib/crc-t10dif.c) to be loaded. But
crc-t10dif's module_init function uses crypto_register_notifier() and
therefore takes crypto_chain.rwsem for write. That can't proceed until
the notifier callback has finished, as it holds this semaphore for read.

Fix this by removing the wait on crypto_larval::completion from within
cryptomgr_schedule_probe(). It's actually unnecessary because
crypto_alg_mod_lookup() calls crypto_larval_wait() itself after sending
CRYPTO_MSG_ALG_REQUEST.

This only actually became a problem in v4.20 due to commit b76377543b73
("crc-t10dif: Pick better transform if one becomes available"), but the
unnecessary wait was much older.

BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207159
Reported-by: Mike Gerow <gerow@google.com>
Fixes: 398710379f51 ("crypto: algapi - Move larval completion into algboss")
Cc: <stable@vger.kernel.org> # v3.6+
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reported-by: Kai Lüke <kai@kinvolk.io>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7cf81954 28-May-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algif_skcipher - Cap recv SG list at ctx->used

Somewhere along the line the cap on the SG list length for receive
was lost. This patch restores it and removes the subsequent test
which is now redundant.

Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4152d146 10-Jun-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'rwonce/rework' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux

Pull READ/WRITE_ONCE rework from Will Deacon:
"This the READ_ONCE rework I've been working on for a while, which
bumps the minimum GCC version and improves code-gen on arm64 when
stack protector is enabled"

[ Side note: I'm _really_ tempted to raise the minimum gcc version to
4.9, so that we can just say that we require _Generic() support.

That would allow us to more cleanly handle a lot of the cases where we
depend on very complex macros with 'sizeof' or __builtin_choose_expr()
with __builtin_types_compatible_p() etc.

This branch has a workaround for sparse not handling _Generic(),
either, but that was already fixed in the sparse development branch,
so it's really just gcc-4.9 that we'd require. - Linus ]

* 'rwonce/rework' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux:
compiler_types.h: Use unoptimized __unqual_scalar_typeof for sparse
compiler_types.h: Optimize __unqual_scalar_typeof compilation time
compiler.h: Enforce that READ_ONCE_NOCHECK() access size is sizeof(long)
compiler-types.h: Include naked type in __pick_integer_type() match
READ_ONCE: Fix comment describing 2x32-bit atomicity
gcov: Remove old GCC 3.4 support
arm64: barrier: Use '__unqual_scalar_typeof' for acquire/release macros
locking/barriers: Use '__unqual_scalar_typeof' for load-acquire macros
READ_ONCE: Drop pointer qualifiers when reading from scalar types
READ_ONCE: Enforce atomicity for {READ,WRITE}_ONCE() memory accesses
READ_ONCE: Simplify implementations of {READ,WRITE}_ONCE()
arm64: csum: Disable KASAN for do_csum()
fault_inject: Don't rely on "return value" from WRITE_ONCE()
net: tls: Avoid assigning 'const' pointer to non-const pointer
netfilter: Avoid assigning 'const' pointer to non-const pointer
compiler/gcc: Raise minimum GCC version for kernel builds to 4.8


d1c72f6e 19-May-2020 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: engine - do not requeue in case of fatal error

Now, in crypto-engine, if the hardware queue is full (-ENOSPC), the
request is requeued regardless of the MAY_BACKLOG flag.
If the hardware returns any other error code (like -EIO, -EINVAL,
-ENOMEM, etc.), only MAY_BACKLOG requests are enqueued back into
crypto-engine's queue, since the others can be dropped.
The latter cases can be fatal errors that cannot be recovered from.
For example, the CAAM driver returns -EIO when the job descriptor is
broken, so there is no way to fix the job descriptor.
Therefore, such errors may be fatal, so we shouldn't requeue the
request; it would just be passed back and forth between crypto-engine
and the hardware.

Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism")
Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reported-by: Horia Geantă <horia.geanta@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0c0408e8 05-May-2020 Arnd Bergmann <arnd@arndb.de>

crypto: blake2b - Fix clang optimization for ARMv7-M

When building for ARMv7-M, clang-9 or higher tries to unroll some loops,
which ends up confusing the register allocator to the point of generating
rather bad code and using more than the warning limit for stack frames:

warning: stack frame size of 1200 bytes in function 'blake2b_compress' [-Wframe-larger-than=]

Forcing it to not unroll the final loop avoids this problem.

Fixes: 91d689337fe8 ("crypto: blake2b - add blake2b generic implementation")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

228c4f26 02-May-2020 Eric Biggers <ebiggers@google.com>

crypto: lib/sha1 - fold linux/cryptohash.h into crypto/sha.h

<linux/cryptohash.h> sounds very generic and important, like it's the
header to include if you're doing cryptographic hashing in the kernel.
But actually it only includes the library implementation of the SHA-1
compression function (not even the full SHA-1). This should basically
never be used anymore; SHA-1 is no longer considered secure, and there
are much better ways to do cryptographic hashing in the kernel.

Remove this header and fold it into <crypto/sha.h> which already
contains constants and functions for SHA-1 (along with SHA-2).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6b0b0fa2 02-May-2020 Eric Biggers <ebiggers@google.com>

crypto: lib/sha1 - rename "sha" to "sha1"

The library implementation of the SHA-1 compression function is
confusingly called just "sha_transform()". Alongside it are some "SHA_"
constants and "sha_init()". Presumably these are left over from a time
when SHA just meant SHA-1. But now there are also SHA-2 and SHA-3, and
moreover SHA-1 is now considered insecure and thus shouldn't be used.

Therefore, rename these functions and constants to make it very clear
that they are for SHA-1. Also add a comment to make it clear that these
shouldn't be used.

For the extra-misleadingly named "SHA_MESSAGE_BYTES", rename it to
SHA1_BLOCK_SIZE and define it to just '64' rather than '(512/8)' so that
it matches the same definition in <crypto/sha.h>. This prepares for
merging <linux/cryptohash.h> into <crypto/sha.h>.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1306664f 01-May-2020 Eric Biggers <ebiggers@google.com>

crypto: essiv - use crypto_shash_tfm_digest()

Instead of manually allocating a 'struct shash_desc' on the stack and
calling crypto_shash_digest(), switch to using the new helper function
crypto_shash_tfm_digest() which does this for us.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

822a98b8 01-May-2020 Eric Biggers <ebiggers@google.com>

crypto: hash - introduce crypto_shash_tfm_digest()

Currently the simplest use of the shash API is to use
crypto_shash_digest() to digest a whole buffer. However, this still
requires allocating a hash descriptor (struct shash_desc). Many users
don't really want to preallocate one and instead just use a one-off
descriptor on the stack like the following:

	{
		SHASH_DESC_ON_STACK(desc, tfm);
		int err;

		desc->tfm = tfm;

		err = crypto_shash_digest(desc, data, len, out);

		shash_desc_zero(desc);
	}

Wrap this in a new helper function crypto_shash_tfm_digest() that can be
used instead of the above.
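
With the helper, the snippet above reduces to just:

	{
		int err;

		err = crypto_shash_tfm_digest(tfm, data, len, out);
	}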

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

13855fd8 01-May-2020 Eric Biggers <ebiggers@google.com>

crypto: lib/sha256 - return void

The SHA-256 / SHA-224 library functions can't fail, so remove the
useless return value.

As long as the declarations are being changed anyway, also fix some
parameter names in the declarations to match the definitions.
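
A short usage sketch after the change (note there is no status to check;
example_sha256 is made up):

	#include <crypto/sha.h>

	static void example_sha256(const u8 *data, unsigned int len,
				   u8 out[SHA256_DIGEST_SIZE])
	{
		struct sha256_state state;

		sha256_init(&state);
		sha256_update(&state, data, len);
		sha256_final(&state, out);
	}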

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d099ea6e 30-Apr-2020 Arnd Bergmann <arnd@arndb.de>

crypto - Avoid free() namespace collision

gcc-10 complains about using the name of a standard library
function in the kernel, as we are not building with -ffreestanding:

crypto/xts.c:325:13: error: conflicting types for built-in function 'free'; expected 'void(void *)' [-Werror=builtin-declaration-mismatch]
325 | static void free(struct skcipher_instance *inst)
| ^~~~
crypto/lrw.c:290:13: error: conflicting types for built-in function 'free'; expected 'void(void *)' [-Werror=builtin-declaration-mismatch]
290 | static void free(struct skcipher_instance *inst)
| ^~~~
crypto/lrw.c:27:1: note: 'free' is declared in header '<stdlib.h>'

The xts and lrw cipher implementations run into this because they do
not use the conventional namespaced function names.

It might be better to rename all local functions in those files to
help with things like 'ctags' and 'grep', but just renaming these two
avoids the build issue. I picked the more verbose crypto_xts_free()
and crypto_lrw_free() names for consistency with several other drivers
that do use namespaced function names.

Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e0664ebc 30-Apr-2020 Wei Yongjun <weiyongjun1@huawei.com>

crypto: drbg - fix error return code in drbg_alloc_state()

Fix to return negative error code -ENOMEM from the kzalloc error handling
case instead of 0, as done elsewhere in this function.

Reported-by: Xiumei Mu <xmu@redhat.com>
Fixes: db07cd26ac6a ("crypto: drbg - add FIPS 140-2 CTRNG for noise source")
Cc: <stable@vger.kernel.org>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8d908226 28-Apr-2020 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: engine - support for batch requests

Added support for batch requests, per crypto engine.
A new callback, do_batch_requests, is added; it executes a batch of
requests and takes the crypto_engine structure as argument (for cases
when more than one crypto-engine is used).
The crypto_engine_alloc_init_and_set function initializes the
crypto-engine and also sets the do_batch_requests callback.
On crypto_pump_requests, if the do_batch_requests callback is
implemented in a driver, it will be executed. Linking the requests
together is done in the driver, if possible.
do_batch_requests is available only if the hardware has support
for multiple requests.
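
A rough sketch of a driver opting in (the parameter order follows the
description above and should be checked against <crypto/engine.h>;
example_* names are made up):

	#include <crypto/engine.h>

	static int example_do_batch(struct crypto_engine *engine)
	{
		/* Submit all requests that the driver linked together. */
		return 0;
	}

	static struct crypto_engine *example_engine_setup(struct device *dev)
	{
		/* retry support on, no dedicated RT kthread, 128-entry sw queue */
		return crypto_engine_alloc_init_and_set(dev, true,
							example_do_batch,
							false, 128);
	}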

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6a89f492 28-Apr-2020 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: engine - support for parallel requests based on retry mechanism

Added support for executing multiple requests, in parallel,
for crypto engine based on a retry mechanism.
If the hardware was unable to execute a backlog request, it is enqueued
back at the front of the crypto-engine queue, to keep the order
of requests.

A new variable, retry_support, is added (this is to keep crypto-engine
backward compatible); it keeps track of whether the hardware supports
the retry mechanism and can therefore run multiple requests.

If do_one_request() returns:
>= 0: hardware executed the request successfully;
< 0: this is the old error path. If hardware has support for retry
mechanism, the request is put back in front of crypto-engine queue.
For backwards compatibility, if the retry support is not available,
the crypto-engine will work as before.
If hardware queue is full (-ENOSPC), requeue request regardless
of MAY_BACKLOG flag.
If hardware throws any other error code (like -EIO, -EINVAL,
-ENOMEM, etc.) only MAY_BACKLOG requests are enqueued back into
crypto-engine's queue, since the others can be dropped.

The new crypto_engine_alloc_init_and_set function initializes the
crypto-engine, sets the maximum size of the crypto-engine software
queue (it is no longer hardcoded), and sets the retry_support
variable to false by default.
On crypto_pump_requests(), if do_one_request() returns >= 0, a new
request is sent to the hardware, until there is no space left in the
hardware and do_one_request() returns < 0.

By default, retry_support is false and crypto-engine will
work as before - will send requests to hardware,
one-by-one, on crypto_pump_requests(), and complete it, on
crypto_finalize_request(), and so on.

To support multiple requests, in each driver, retry_support
must be set to true, and if do_one_request() returns an error
the request must not be freed, since it will be enqueued back
into crypto-engine's queue.

Once all drivers that currently use crypto-engine have been updated for
the retry mechanism, the retry_support variable can be removed.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ec6e2bf3 28-Apr-2020 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: algapi - create function to add request in front of queue

Add crypto_enqueue_request_head function that enqueues a
request in front of queue.
This will be used in crypto-engine, on the error path. In case a request
was not executed by the hardware, it is enqueued back at the front of the
queue (to keep the order of requests).
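
Roughly what the crypto-engine error path does with it (simplified; the
real code runs under the engine's queue lock):

	#include <crypto/algapi.h>
	#include <crypto/engine.h>

	static void example_requeue(struct crypto_engine *engine,
				    struct crypto_async_request *areq)
	{
		/* Hardware could not take the request: put it back at the head
		 * of the software queue so request ordering is preserved. */
		crypto_enqueue_request_head(&engine->queue, areq);
	}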

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d6fc1a45 24-Apr-2020 Corentin Labbe <clabbe@baylibre.com>

crypto: drbg - should select CTR

If CRYPTO_DRBG_CTR is built-in and CTR is a module, allocating such an
algorithm will fail.
DRBG: could not allocate CTR cipher TFM handle: ctr(aes)
alg: drbg: Failed to reset rng
alg: drbg: Test 0 failed for drbg_pr_ctr_aes128
DRBG: could not allocate CTR cipher TFM handle: ctr(aes)
alg: drbg: Failed to reset rng
alg: drbg: Test 0 failed for drbg_nopr_ctr_aes128
DRBG: could not allocate CTR cipher TFM handle: ctr(aes)
alg: drbg: Failed to reset rng
alg: drbg: Test 0 failed for drbg_nopr_ctr_aes192
DRBG: could not allocate CTR cipher TFM handle: ctr(aes)
alg: drbg: Failed to reset rng
alg: drbg: Test 0 failed for drbg_nopr_ctr_aes256

So let's select CTR instead of just depend on it.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f23efcbc 24-Apr-2020 Corentin Labbe <clabbe@baylibre.com>

crypto: ctr - no longer needs CRYPTO_SEQIV

As Herbert commented on v2: "The SEQIV select from CTR is historical
and no longer necessary."

So let's get rid of it.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

97f2650e 17-Apr-2020 Stephan Müller <smueller@chronox.de>

crypto: drbg - always seeded with SP800-90B compliant noise source

As the Jitter RNG provides an SP800-90B compliant noise source, use this
noise source always for the (re)seeding of the DRBG.

To make sure the DRBG is always properly seeded, the reseed threshold
is reduced to 1<<20 generate operations.

The Jitter RNG may report health test failures. Such health test
failures are treated as transient as follows. The DRBG will not reseed
from the Jitter RNG (but from get_random_bytes) in case of a health
test failure. Though, it produces the requested random number.

The Jitter RNG has a failure counter where at most 1024 consecutive
resets due to a health test failure are considered as a transient error.
If more consecutive resets are required, the Jitter RNG will return
a permanent error which is returned to the caller by the DRBG. With this
approach, the worst case reseed threshold is significantly lower than
mandated by SP800-90A in order to seed with an SP800-90B noise source:
the DRBG has a reseed threshold of 2^20 * 1024 = 2^30 generate requests.

Yet, in case of a transient Jitter RNG health test failure, the DRBG is
seeded with the data obtained from get_random_bytes.

However, if the Jitter RNG fails during the initial seeding operation
even due to a health test error, the DRBG will send an error to the
caller because at that time, the DRBG has received no seed that is
SP800-90B compliant.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

764428fe 17-Apr-2020 Stephan Müller <smueller@chronox.de>

crypto: jitter - SP800-90B compliance

SP800-90B specifies various requirements for the noise source(s) that
may seed any DRNG including SP800-90A DRBGs. In November 2020,
SP800-90B will be mandated for all noise sources that provide entropy
to DRBGs as part of a FIPS 140-[2|3] validation or other evaluation
types. Without SP800-90B compliance, a noise source is defined to always
deliver zero bits of entropy.

This patch ports the SP800-90B compliance from the user space Jitter RNG
version 2.2.0.

The following changes are applied:

- addition of (an enhanced version of) the repetition count test (RCT)
from SP800-90B section 4.4.1 - the enhancement is that the result of
the stuck test is used as input to the RCT.

- addition of the adaptive proportion test (APT) from SP800-90B section
4.4.2

- update of the power-on self test to perform a test measurement of 1024
noise samples compliant to SP800-90B section 4.3

- removal of the continuous random number generator test, which is
replaced by the APT and RCT

Health test failures due to the SP800-90B operation are only enforced in
FIPS mode. If a runtime health test failure is detected, the Jitter RNG
is reset. If more than 1024 resets in a row are performed, a permanent
error is returned to the caller.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
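
As an aside, the repetition count test from SP800-90B section 4.4.1 mentioned
above can be sketched in a few lines. This is a simplified stand-alone
illustration, not the jitterentropy code itself (which additionally feeds the
stuck-test result into the RCT and counts failures toward the 1024-reset
limit):

#include <stdbool.h>

struct rct_state {
        unsigned long long last;   /* previously observed sample */
        unsigned int count;        /* how often it repeated so far */
        unsigned int cutoff;       /* derived from the claimed entropy per sample */
};

/* Returns true when the health test fails, i.e. the same sample value
 * was seen 'cutoff' or more times in a row. */
static bool rct_insert(struct rct_state *rct, unsigned long long sample)
{
        if (sample == rct->last) {
                rct->count++;
                return rct->count >= rct->cutoff;
        }
        rct->last = sample;
        rct->count = 1;
        return false;
}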

63e05f32 15-Apr-2020 Colin Ian King <colin.king@canonical.com>

crypto: algif_rng - remove redundant assignment to variable err

The variable err is being initialized with a value that is never read
and it is being updated later with a new value. The initialization is
redundant and can be removed.

Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6603523b 10-Apr-2020 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Fix use-after-free and race in crypto_spawn_alg

There are two problems in crypto_spawn_alg. First of all it may
return spawn->alg even if spawn->dead is set. This results in a
double-free as detected by syzbot.

Secondly the setting of the DYING flag is racy because we hold
the read-lock instead of the write-lock. We should instead call
crypto_shoot_alg in a safe manner by gaining a refcount, dropping
the lock, and then releasing the refcount.

This patch fixes both problems.

Reported-by: syzbot+fc0674cde00b66844470@syzkaller.appspotmail.com
Fixes: 4f87ee118d16 ("crypto: api - Do not zap spawn->alg")
Fixes: 73669cc55646 ("crypto: api - Fix race condition in...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
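
The "safe manner" described here follows a common kernel pattern: take a
reference while the lock is held, drop the lock, do the slow work, then
release the reference. Roughly (a hedged sketch of the idea, not the literal
crypto/api.c code; variable names are illustrative):

struct crypto_alg *alg = ERR_PTR(-EAGAIN);
struct crypto_alg *target;
bool shoot = false;

down_read(&crypto_alg_sem);
if (!spawn->dead) {
        alg = spawn->alg;
        if (!crypto_mod_get(alg)) {
                target = crypto_alg_get(alg);  /* keep it alive past the lock */
                shoot = true;
                alg = ERR_PTR(-EAGAIN);
        }
}
up_read(&crypto_alg_sem);

if (shoot) {
        crypto_shoot_alg(target);
        crypto_alg_put(target);
}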

beeb460c 07-Apr-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - Avoid spurious modprobe on LOADED

Currently after any algorithm is registered and tested, there's an
unnecessary request_module("cryptomgr") even if it's already loaded.
Also, CRYPTO_MSG_ALG_LOADED is sent twice, and thus if the algorithm is
"crct10dif", lib/crc-t10dif.c replaces the tfm twice rather than once.

This occurs because CRYPTO_MSG_ALG_LOADED is sent using
crypto_probing_notify(), which tries to load "cryptomgr" if the
notification is not handled (NOTIFY_DONE). This doesn't make sense
because "cryptomgr" doesn't handle this notification.

Fix this by using crypto_notify() instead of crypto_probing_notify().

Fixes: dd8b083f9a5e ("crypto: api - Introduce notifier for new crypto algorithms")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5429ef62 22-Jan-2020 Will Deacon <will@kernel.org>

compiler/gcc: Raise minimum GCC version for kernel builds to 4.8

It is very rare to see versions of GCC prior to 4.8 being used to build
the mainline kernel. These old compilers are also known to have codegen
issues which can lead to silent miscompilation:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

Raise the minimum GCC version for kernel builds to 4.8 and remove some
tautological Kconfig dependencies as a consequence.

Cc: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will@kernel.org>

72f35423 01-Apr-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Fix out-of-sync IVs in self-test for IPsec AEAD algorithms

Algorithms:
- Use formally verified implementation of x86/curve25519

Drivers:
- Enhance hwrng support in caam

- Use crypto_engine for skcipher/aead/rsa/hash in caam

- Add Xilinx AES driver

- Add uacce driver

- Register zip engine to uacce in hisilicon

- Add support for OCTEON TX CPT engine in marvell"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (162 commits)
crypto: af_alg - bool type cosmetics
crypto: arm[64]/poly1305 - add artifact to .gitignore files
crypto: caam - limit single JD RNG output to maximum of 16 bytes
crypto: caam - enable prediction resistance in HRWNG
bus: fsl-mc: add api to retrieve mc version
crypto: caam - invalidate entropy register during RNG initialization
crypto: caam - check if RNG job failed
crypto: caam - simplify RNG implementation
crypto: caam - drop global context pointer and init_done
crypto: caam - use struct hwrng's .init for initialization
crypto: caam - allocate RNG instantiation descriptor with GFP_DMA
crypto: ccree - remove duplicated include from cc_aead.c
crypto: chelsio - remove set but not used variable 'adap'
crypto: marvell - enable OcteonTX cpt options for build
crypto: marvell - add the Virtual Function driver for CPT
crypto: marvell - add support for OCTEON TX CPT engine
crypto: marvell - create common Kconfig and Makefile for Marvell
crypto: arm/neon - memzero_explicit aes-cbc key
crypto: bcm - Use scnprintf() for avoiding potential buffer overflow
crypto: atmel-i2c - Fix wakeup fail
...


fcb90d51 20-Mar-2020 Lothar Rubusch <l.rubusch@gmail.com>

crypto: af_alg - bool type cosmetics

When working with bool values, the true and false definitions should be
used instead of 1 and 0.

Hopefully I fixed my mailer and apologize for that.

Signed-off-by: Lothar Rubusch <l.rubusch@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8ff357a9 04-Mar-2020 Eric Biggers <ebiggers@google.com>

crypto: testmgr - do comparison tests before inauthentic input tests

Do test_aead_vs_generic_impl() before test_aead_inauthentic_inputs() so
that any differences with the generic driver are detected before getting
to the inauthentic input tests, which intentionally use only the driver
being tested (so that they run even if a generic driver is unavailable).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6f3a06d9 04-Mar-2020 Eric Biggers <ebiggers@google.com>

crypto: testmgr - use consistent IV copies for AEADs that need it

rfc4543 was missing from the list of algorithms that may treat the end
of the AAD buffer specially.

Also, with rfc4106, rfc4309, rfc4543, and rfc7539esp, the end of the AAD
buffer is actually supposed to contain a second copy of the IV, and
we've concluded that if the IV copies don't match, the behavior is
implementation-defined, so the fuzz tests can't easily test that case.

Therefore, make the fuzz tests only use inputs where the two IV copies match.

Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
Cc: Stephan Mueller <smueller@chronox.de>
Originally-from: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

732e5409 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: xts - simplify error handling in ->create()

Simplify the error handling in the XTS template's ->create() function by
taking advantage of crypto_drop_skcipher() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
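
This commit is one of a series applying the same error-handling shape to
template ->create() functions. The resulting pattern looks roughly like the
sketch below (illustrative only; example_instance_ctx and
example_free_instance stand for the template's own context type and
instance-freeing helper, and 'tmpl'/'tb' are the usual template parameters):

static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
{
        struct skcipher_instance *inst;
        struct example_instance_ctx *ctx;
        u32 mask = 0;           /* derived from the attributes in real code */
        int err;

        inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
        if (!inst)
                return -ENOMEM;
        ctx = skcipher_instance_ctx(inst);

        err = crypto_grab_skcipher(&ctx->spawn, skcipher_crypto_instance(inst),
                                   crypto_attr_alg_name(tb[1]), 0, mask);
        if (err)
                goto err_free_inst;

        /* ... further setup; every failure simply does: goto err_free_inst ... */

        err = skcipher_register_instance(tmpl, inst);
        if (err) {
err_free_inst:
                /* crypto_drop_skcipher() is now a no-op on a spawn that was
                 * never grabbed, so one cleanup path covers all failures. */
                example_free_instance(inst);
        }
        return err;
}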

0708bb43 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: rsa-pkcs1pad - simplify error handling in pkcs1pad_create()

Simplify the error handling in pkcs1pad_create() by taking advantage of
crypto_grab_akcipher() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_akcipher() now accepting (as a no-op) a spawn
that hasn't been grabbed yet.

While we're at it, also simplify the way the hash_name optional argument
is handled. We only need to check whether it's present in one place,
and we can just assign directly to ctx->digest_info.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

07b24c7c 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: pcrypt - simplify error handling in pcrypt_create_aead()

Simplify the error handling in pcrypt_create_aead() by taking advantage
of crypto_grab_aead() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_aead() now accepting (as a no-op) a spawn that
hasn't been grabbed yet.

This required also making padata_free_shell() accept a NULL argument.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d5706310 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: lrw - simplify error handling in create()

Simplify the error handling in the LRW template's ->create() function by
taking advantage of crypto_drop_skcipher() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

376ffe1a 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: geniv - simplify error handling in aead_geniv_alloc()

Simplify the error handling in aead_geniv_alloc() by taking advantage of
crypto_grab_aead() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_aead() now accepting (as a no-op) a spawn that
hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c4caa56d 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: gcm - simplify error handling in crypto_rfc4543_create()

Simplify the error handling in crypto_rfc4543_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.

Conveniently, this eliminates the 'ccm_name' variable which was
incorrectly named (it should have been 'gcm_name').

Also fix a weird case where a line was terminated by a comma rather than
a semicolon, causing the statement to be continued on the next line.
Fortunately the code still behaved as intended, though.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

959ac1cd 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: gcm - simplify error handling in crypto_rfc4106_create()

Simplify the error handling in crypto_rfc4106_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.

Conveniently, this eliminates the 'ccm_name' variable which was
incorrectly named (it should have been 'gcm_name').

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3ff2bab8 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: cts - simplify error handling in crypto_cts_create()

Simplify the error handling in crypto_cts_create() by taking advantage
of crypto_grab_skcipher() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_skcipher() now accepting (as a no-op) a spawn
that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a108dfcf 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: ctr - simplify error handling in crypto_rfc3686_create()

Simplify the error handling in crypto_rfc3686_create() by taking
advantage of crypto_grab_skcipher() now handling an ERR_PTR() name and
by taking advantage of crypto_drop_skcipher() now accepting (as a no-op)
a spawn that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b8c0d74a 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: cryptd - simplify error handling in cryptd_create_*()

Simplify the error handling in the various cryptd_create_*() functions
by taking advantage of crypto_grab_*() now handling an ERR_PTR() name
and by taking advantage of crypto_drop_*() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

64d66793 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: ccm - simplify error handling in crypto_rfc4309_create()

Simplify the error handling in crypto_rfc4309_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d1dc4df1 25-Feb-2020 Eric Biggers <ebiggers@google.com>

crypto: authencesn - fix weird comma-terminated line

Fix a weird case where a line was terminated by a comma rather than a
semicolon, causing the statement to be continued on the next line.
Fortunately the code still behaved as intended, though.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2fdddaf0 21-Feb-2020 YueHaibing <yuehaibing@huawei.com>

crypto: md5 - remove unused macros

crypto/md5.c:26:0: warning: macro "MD5_DIGEST_WORDS" is not used [-Wunused-macros]
crypto/md5.c:27:0: warning: macro "MD5_MESSAGE_BYTES" is not used [-Wunused-macros]

They are never used since commit 3c7eb3cc8360 ("md5: remove from
lib and only live in crypto").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ebe7acad 20-Feb-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity

Pull IMA fixes from Mimi Zohar:
"Two bug fixes and an associated change for each.

The one that adds SM3 to the IMA list of supported hash algorithms is
a simple change, but could be considered a new feature"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
ima: add sm3 algorithm to hash algorithm configuration list
crypto: rename sm3-256 to sm3 in hash_algo_name
efi: Only print errors about failing to get certs if EFI vars are found
x86/ima: use correct identifier for SetupMode variable


6a30e1b1 10-Feb-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: rename sm3-256 to sm3 in hash_algo_name

The name sm3-256 is defined in hash_algo_name in hash_info, but the
algorithm name implemented in sm3_generic.c is sm3. This causes lookups
of sm3-256 to fail with an ENOENT error in some users of the hash
algorithm. For example, IMA, keys, and other subsystems that reference
hash_algo_name all use the sm3 hash algorithm.

Fixes: 5ca4c20cfd37 ("keys, trusted: select hash algorithm for TPM2 chips")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Pascal van Leeuwen <pvanleeuwen@rambus.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>

3e71e121 15-Feb-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 's390-5.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Vasily Gorbik:

- Enable paes-s390 cipher selftests in testmgr (acked-by Herbert Xu).

- Fix protected key length update in PKEY_SEC2PROTK ioctl and increase
card/queue requests counter to 64-bit in crypto code.

- Fix clang warning in get_tod_clock.

- Fix ultravisor info length extensions handling.

- Fix style of SPDX License Identifier in vfio-ccw.

- Avoid unnecessary GFP_ATOMIC and simplify ACK tracking in qdio.

* tag 's390-5.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
crypto/testmgr: enable selftests for paes-s390 ciphers
s390/time: Fix clk type in get_tod_clock
s390/uv: Fix handling of length extensions
s390/qdio: don't allocate *aob array with GFP_ATOMIC
s390/qdio: simplify ACK tracking
s390/zcrypt: fix card and queue total counter wrap
s390/pkey: fix missing length of protected key on return
vfio-ccw: Use the correct style for SPDX License Identifier


64ae1342 13-Feb-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fix from Herbert Xu:
"This fixes a Kconfig anomaly when lib/crypto is enabled without Crypto
API"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: Kconfig - allow tests to be disabled when manager is disabled


c7ff8573 22-Jan-2020 Harald Freudenberger <freude@linux.ibm.com>

crypto/testmgr: enable selftests for paes-s390 ciphers

This patch enables the selftests for the s390 specific protected key
AES (PAES) cipher implementations:
* cbc-paes-s390
* ctr-paes-s390
* ecb-paes-s390
* xts-paes-s390
PAES is an AES cipher but with encrypted ('protected') key
material. However, the paes ciphers are able to derive a protected
key from clear key material with the help of the pkey kernel module.

So this patch now enables the generic AES tests for the paes
ciphers. Under the hood the setkey() functions rearrange the clear key
values as clear key token and so the pkey kernel module is able to
provide protected key blobs from the given clear key values. The
derived protected key blobs are then used within the paes cipers and
should produce the very same results as the generic AES implementation
with the clear key values.

The s390-paes cipher testlist entries are surrounded
by #if IS_ENABLED(CONFIG_CRYPTO_PAES_S390) because they don't
make any sense on non s390 platforms or without the PAES
cipher implementation.

Link: http://lkml.kernel.org/r/20200213083946.zicarnnt3wizl5ty@gondor.apana.org.au
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

8e3b7fd7 04-Feb-2020 Horia Geantă <horia.geanta@nxp.com>

crypto: tcrypt - fix printed skcipher [a]sync mode

When running tcrypt skcipher speed tests, logs contain things like:
testing speed of async ecb(des3_ede) (ecb(des3_ede-generic)) encryption
or:
testing speed of async ecb(aes) (ecb(aes-ce)) encryption

The algorithm implementations are sync, not async.
Fix this inaccuracy.

Fixes: 7166e589da5b6 ("crypto: tcrypt - Use skcipher")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7f1cfe41 05-Feb-2020 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

crypto: proc - simplify the c_show function

The path taken when the CRYPTO_ALG_LARVAL flag is set has already jumped
to the end of the function, so the flag does not need to be checked again.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

eed74b3e 20-Jan-2020 Dan Carpenter <dan.carpenter@oracle.com>

crypto: rng - Fix a refcounting bug in crypto_rng_reset()

We need to decrement this refcounter on these error paths.

Fixes: f7d76e05d058 ("crypto: user - fix use_after_free of struct xxx_request")
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2343d152 16-Jan-2020 Jason A. Donenfeld <Jason@zx2c4.com>

crypto: Kconfig - allow tests to be disabled when manager is disabled

The library code uses CRYPTO_MANAGER_DISABLE_TESTS to conditionalize its
tests, but the library code can also exist without CRYPTO_MANAGER. That
means on minimal configs, the test code winds up being built with no way
to disable it.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

45586c70 03-Feb-2020 Masahiro Yamada <masahiroy@kernel.org>

treewide: remove redundant IS_ERR() before error code check

'PTR_ERR(p) == -E*' is a stronger condition than IS_ERR(p).
Hence, IS_ERR(p) is unneeded.

The semantic patch that generates this commit is as follows:

// <smpl>
@@
expression ptr;
constant error_code;
@@
-IS_ERR(ptr) && (PTR_ERR(ptr) == - error_code)
+PTR_ERR(ptr) == - error_code
// </smpl>

Link: http://lkml.kernel.org/r/20200106045833.1725-1-masahiroy@kernel.org
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Stephen Boyd <sboyd@kernel.org> [drivers/clk/clk.c]
Acked-by: Bartosz Golaszewski <bgolaszewski@baylibre.com> [GPIO]
Acked-by: Wolfram Sang <wsa@the-dreams.de> [drivers/i2c]
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [acpi/scan.c]
Acked-by: Rob Herring <robh@kernel.org>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

a78208e2 28-Jan-2020 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Removed CRYPTO_TFM_RES flags
- Extended spawn grabbing to all algorithm types
- Moved hash descsize verification into API code

Algorithms:
- Fixed recursive pcrypt dead-lock
- Added new 32 and 64-bit generic versions of poly1305
- Added cryptogams implementation of x86/poly1305

Drivers:
- Added support for i.MX8M Mini in caam
- Added support for i.MX8M Nano in caam
- Added support for i.MX8M Plus in caam
- Added support for A33 variant of SS in sun4i-ss
- Added TEE support for Raven Ridge in ccp
- Added in-kernel API to submit TEE commands in ccp
- Added AMD-TEE driver
- Added support for BCM2711 in iproc-rng200
- Added support for AES256-GCM based ciphers for chtls
- Added aead support on SEC2 in hisilicon"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (244 commits)
crypto: arm/chacha - fix build failured when kernel mode NEON is disabled
crypto: caam - add support for i.MX8M Plus
crypto: x86/poly1305 - emit does base conversion itself
crypto: hisilicon - fix spelling mistake "disgest" -> "digest"
crypto: chacha20poly1305 - add back missing test vectors and test chunking
crypto: x86/poly1305 - fix .gitignore typo
tee: fix memory allocation failure checks on drv_data and amdtee
crypto: ccree - erase unneeded inline funcs
crypto: ccree - make cc_pm_put_suspend() void
crypto: ccree - split overloaded usage of irq field
crypto: ccree - fix PM race condition
crypto: ccree - fix FDE descriptor sequence
crypto: ccree - cc_do_send_request() is void func
crypto: ccree - fix pm wrongful error reporting
crypto: ccree - turn errors to debug msgs
crypto: ccree - fix AEAD decrypt auth fail
crypto: ccree - fix typo in comment
crypto: ccree - fix typos in error msgs
crypto: atmel-{aes,sha,tdes} - Retire crypto_platform_data
crypto: x86/sha - Eliminate casts on asm implementations
...


ab3d436b 12-Jan-2020 Geert Uytterhoeven <geert@linux-m68k.org>

crypto: essiv - fix AEAD capitalization and preposition use in help text

"AEAD" is capitalized everywhere else.
Use "an" when followed by a written or spoken vowel.

Fixes: be1eb7f78aa8fbe3 ("crypto: essiv - create wrapper template for ESSIV generation")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1c08a104 05-Jan-2020 Jason A. Donenfeld <Jason@zx2c4.com>

crypto: poly1305 - add new 32 and 64-bit generic versions

These two C implementations from Zinc -- a 32x32 one and a 64x64 one,
depending on the platform -- come from Andrew Moon's public domain
poly1305-donna portable code, modified for usage in the kernel. The
precomputation in the 32-bit version and the use of 64x64 multiplies in
the 64-bit version make these perform better than the code it replaces.
Moon's code is also very widespread and has received many eyeballs of
scrutiny.

There's a bit of interference with the x86 implementation, which
relies on internal details of the old scalar implementation. In the next
commit, the x86 implementation will be replaced with a faster one that
doesn't rely on this, so none of this matters much. But for now, to keep
this passing the tests, we inline the bits of the old implementation
that the x86 implementation relied on. Also, since we now support a
slightly larger key space, via the union, some offsets had to be fixed
up.

Nonce calculation was folded in with the emit function, to take
advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no
nonce handling in emit, so this path was conditionalized. We also
introduced a new struct, poly1305_core_key, to represent the precise
amount of space that particular implementation uses.

Testing with kbench9000, depending on the CPU, the update function for
the 32x32 version has been improved by 4%-7%, and for the 64x64 by
19%-30%. The 32x32 gains are small, but I think there's great value in
having a parallel implementation to the 64x64 one so that the two can be
compared side-by-side as nice stand-alone units.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d4fdc2df 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - enforce that all instances have a ->free() method

All instances need to have a ->free() method, but people could forget to
set it and then not notice if the instance is never unregistered. To
help detect this bug earlier, don't allow an instance without a ->free()
method to be registered, and complain loudly if someone tries to do it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
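
A minimal sketch of the kind of guard this adds (shown here for AEAD
instances; the other *_register_instance() helpers gain the same check, and
the exact upstream code may differ):

int aead_register_instance(struct crypto_template *tmpl,
                           struct aead_instance *inst)
{
        if (WARN_ON(!inst->free))
                return -EINVAL;

        /* ... the rest of the normal registration path ... */
        return crypto_register_instance(tmpl, aead_crypto_instance(inst));
}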

a24a1fd7 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - remove crypto_template::{alloc,free}()

Now that all templates provide a ->create() method which creates an
instance, installs a strongly-typed ->free() method directly to it, and
registers it, the older ->alloc() and ->free() methods in
'struct crypto_template' are no longer used. Remove them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a39c66cc 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: shash - convert shash_free_instance() to new style

Convert shash_free_instance() and its users to the new way of freeing
instances, where a ->free() method is installed to the instance struct
itself. This replaces the weakly-typed method crypto_template::free().

This will allow removing support for the old way of freeing instances.

Also give shash_free_instance() a more descriptive name to reflect that
it's only for instances with a single spawn, not for any instance.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

758ec5ac 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: cryptd - convert to new way of freeing instances

Convert the "cryptd" template to the new way of freeing instances, where
a ->free() method is installed to the instance struct itself. This
replaces the weakly-typed method crypto_template::free().

This will allow removing support for the old way of freeing instances.

Note that the 'default' case in cryptd_free() was already unreachable.
So, we aren't missing anything by keeping only the ahash and aead parts.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0f8f6d86 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: geniv - convert to new way of freeing instances

Convert the "seqiv" template to the new way of freeing instances where a
->free() method is installed to the instance struct itself. Also remove
the unused implementation of the old way of freeing instances from the
"echainiv" template, since it's already using the new way too.

In doing this, also simplify the code by making the helper function
aead_geniv_alloc() install the ->free() method, instead of making seqiv
and echainiv do this themselves. This is analogous to how
skcipher_alloc_instance_simple() works.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

48fb3e57 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: hash - add support for new way of freeing instances

Add support to shash and ahash for the new way of freeing instances
(already used for skcipher, aead, and akcipher) where a ->free() method
is installed to the instance struct itself. These methods are more
strongly-typed than crypto_template::free(), which they replace.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
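
The "new way" referenced throughout this series is that the template's
->create() installs a strongly-typed free callback directly on the instance
before registering it. A hedged sketch of the shash flavour (the function
name example_free_instance is illustrative):

static void example_free_instance(struct shash_instance *inst)
{
        crypto_drop_cipher(shash_instance_ctx(inst));
        kfree(inst);
}

/* in the template's ->create(), before registration: */
inst->free = example_free_instance;

err = shash_register_instance(tmpl, inst);
if (err)
        example_free_instance(inst);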

aed11cf5 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - fold crypto_init_spawn() into crypto_grab_spawn()

Now that crypto_init_spawn() is only called by crypto_grab_spawn(),
simplify things by moving its functionality into crypto_grab_spawn().

In the process of doing this, also be more consistent about when the
spawn and instance are updated, and remove the crypto_spawn::dropref
flag since now it's always set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6d1b41fc 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: ahash - unexport crypto_ahash_type

Now that all the templates that need ahash spawns have been converted to
use crypto_grab_ahash() rather than look up the algorithm directly,
crypto_ahash_type is no longer used outside of ahash.c. Make it static.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

629f1afc 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - remove obsoleted instance creation helpers

Remove lots of helper functions that were previously used for
instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner
algorithm directly from a template parameter. These were replaced
with getting the algorithm's name, then calling crypto_grab_*().

- crypto_init_spawn2() and similar functions initialized a spawn, given
an algorithm. Similarly, these were replaced with crypto_grab_*().

- crypto_alloc_instance() and similar functions allocated an instance
with a single spawn, given the inner algorithm. These aren't useful
anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d5ed3b65 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: cipher - make crypto_spawn_cipher() take a crypto_cipher_spawn

Now that all users of single-block cipher spawns have been converted to
use 'struct crypto_cipher_spawn' rather than the less specifically typed
'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1e212a6a 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: xcbc - use crypto_grab_cipher() and simplify error paths

Make the xcbc template use the new function crypto_grab_cipher() to
initialize its cipher spawn.

This is needed to make all spawns be initialized in a consistent way.

This required making xcbc_create() allocate the instance directly rather
than use shash_alloc_instance().

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3b4e73d8 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: vmac - use crypto_grab_cipher() and simplify error paths

Make the vmac64 template use the new function crypto_grab_cipher() to
initialize its cipher spawn.

This is needed to make all spawns be initialized in a consistent way.

This required making vmac_create() allocate the instance directly rather
than use shash_alloc_instance().

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1d0459cd 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: cmac - use crypto_grab_cipher() and simplify error paths

Make the cmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.

This is needed to make all spawns be initialized in a consistent way.

This required making cmac_create() allocate the instance directly rather
than use shash_alloc_instance().

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

16672970 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: cbcmac - use crypto_grab_cipher() and simplify error paths

Make the cbcmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.

This is needed to make all spawns be initialized in a consistent way.

This required making cbcmac_create() allocate the instance directly
rather than use shash_alloc_instance().

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

aacd5b4c 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: skcipher - use crypto_grab_cipher() and simplify error paths

Make skcipher_alloc_instance_simple() use the new function
crypto_grab_cipher() to initialize its cipher spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c282586f 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: chacha20poly1305 - use crypto_grab_ahash() and simplify error paths

Make the rfc7539 and rfc7539esp templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

05b3bbb5 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: ccm - use crypto_grab_ahash() and simplify error paths

Make the ccm and ccm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ab6ffd36 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: gcm - use crypto_grab_ahash() and simplify error paths

Make the gcm and gcm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

37073882 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: authencesn - use crypto_grab_ahash() and simplify error paths

Make the authencesn template use the new function crypto_grab_ahash() to
initialize its ahash spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

37a861ad 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: authenc - use crypto_grab_ahash() and simplify error paths

Make the authenc template use the new function crypto_grab_ahash() to
initialize its ahash spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

39e7a283 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: hmac - use crypto_grab_shash() and simplify error paths

Make the hmac template use the new function crypto_grab_shash() to
initialize its shash spawn.

This is needed to make all spawns be initialized in a consistent way.

This required making hmac_create() allocate the instance directly rather
than use shash_alloc_instance().

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

218c5035 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: cryptd - use crypto_grab_shash() and simplify error paths

Make the cryptd template (in the hash case) use the new function
crypto_grab_shash() to initialize its shash spawn.

This is needed to make all spawns be initialized in a consistent way.

This required making cryptd_create_hash() allocate the instance directly
rather than use cryptd_alloc_instance().

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ba448407 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: adiantum - use crypto_grab_{cipher,shash} and simplify error paths

Make the adiantum template use the new functions crypto_grab_cipher()
and crypto_grab_shash() to initialize its cipher and shash spawns.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

84a9c938 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: ahash - introduce crypto_grab_ahash()

Currently, ahash spawns are initialized by using ahash_attr_alg() or
crypto_find_alg() to look up the ahash algorithm, then calling
crypto_init_ahash_spawn().

This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_ahash() so that we can convert all
templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fdfad1ff 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: shash - introduce crypto_grab_shash()

Currently, shash spawns are initialized by using shash_attr_alg() or
crypto_alg_mod_lookup() to look up the shash algorithm, then calling
crypto_init_shash_spawn().

This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_shash() so that we can convert all
templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

de95c957 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - pass instance to crypto_grab_spawn()

Currently, crypto_spawn::inst is first used temporarily to pass the
instance to crypto_grab_spawn(). Then crypto_init_spawn() overwrites it
with crypto_spawn::next, which shares the same union. Finally,
crypto_spawn::inst is set again when the instance is registered.

Make this less convoluted by just passing the instance as an argument to
crypto_grab_spawn() instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

73bed26f 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: akcipher - pass instance to crypto_grab_akcipher()

Initializing a crypto_akcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_akcipher().

But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_akcipher() take the instance as an argument.

To keep the function call from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into pkcs1pad_create().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cd900f0c 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: aead - pass instance to crypto_grab_aead()

Initializing a crypto_aead_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_aead().

But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_aead() take the instance as an argument.

To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b9f76ddd 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: skcipher - pass instance to crypto_grab_skcipher()

Initializing a crypto_skcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().

But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_skcipher() take the instance as an argument.

To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
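
The difference can be illustrated as follows (a hedged sketch; the old
two-step form used the crypto_set_*_spawn() helpers to record the instance):

/* before: two separate steps, easy to get wrong */
crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
err = crypto_grab_skcipher(spawn, name, 0, mask);

/* after: the instance is passed directly to the grab call */
err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                           name, 0, mask);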

ca94e937 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - make crypto_grab_spawn() handle an ERR_PTR() name

To allow further simplifying template ->create() functions, make
crypto_grab_spawn() handle an ERR_PTR() name by passing back the error.

For most templates, this will allow the result of crypto_attr_alg_name()
to be passed directly to crypto_grab_*(), rather than first having to
assign it to a variable [where it can then potentially be misused, as it
was in the rfc7539 template prior to commit 5e27f38f1f3f ("crypto:
chacha20poly1305 - set cra_name correctly")] and check it for error.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ff670627 02-Jan-2020 Eric Biggers <ebiggers@google.com>

crypto: algapi - make crypto_drop_spawn() a no-op on uninitialized spawns

Make crypto_drop_spawn() do nothing when the spawn hasn't been
initialized with an algorithm yet. This will allow simplifying error
handling in all the template ->create() functions, since on error they
will be able to just call their usual "free instance" function, rather
than having to handle dropping just the spawns that have been
initialized so far.

This does assume the spawn starts out zero-filled, but that's always the
case since instances are allocated with kzalloc(). And some other code
already assumes this anyway.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

af5034e8 30-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: remove propagation of CRYPTO_TFM_RES_* flags

The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors. But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway. So they've now been removed.

Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.

So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c4c4db0d 30-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: remove CRYPTO_TFM_RES_WEAK_KEY

The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make
the ->setkey() functions provide more information about errors.

However, no one actually checks for this flag, which makes it pointless.
There are also no tests that verify that all algorithms actually set (or
don't set) it correctly.

This is also the last remaining CRYPTO_TFM_RES_* flag, which means that
it's the only thing still needing all the boilerplate code which
propagates these flags around from child => parent tfms.

And if someone ever needs to distinguish this error in the future (which
is somewhat unlikely, as it's been unneeded for a long time), it would
be much better to just define a new return value like -EKEYREJECTED.
That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

674f368a 30-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN

The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
make the ->setkey() functions provide more information about errors.

However, no one actually checks for this flag, which makes it pointless.

Also, many algorithms fail to set this flag when given a bad length key.
Reviewing just the generic implementations, this is the case for
aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
many more in arch/*/crypto/ and drivers/crypto/.

Some algorithms can even set this flag when the key is the correct
length. For example, authenc and authencesn set it when the key payload
is malformed in any way (not just a bad length), the atmel-sha and ccree
drivers can set it if a memory allocation fails, and the chelsio driver
sets it for bad auth tag lengths, not just bad key lengths.

So even if someone actually wanted to start checking this flag (which
seems unlikely, since it's been unused for a long time), there would be
a lot of work needed to get it working correctly. But it would probably
be much better to go back to the drawing board and just define different
return values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

70ffa8fd 30-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove skcipher_walk_aead()

skcipher_walk_aead() is unused and is identical to
skcipher_walk_aead_encrypt(), so remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b3c16bfc 19-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: skcipher - Add skcipher_ialg_simple helper

This patch introduces the skcipher_ialg_simple helper which fetches
the crypto_alg structure from a simple skcipher instance's spawn.

This allows us to remove the third argument from the function
skcipher_alloc_instance_simple.

In doing so the reference count to the algorithm is now maintained
by the Crypto API and the caller no longer needs to drop the alg
refcount.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
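
A sketch of what such a helper boils down to (assuming the simple instance
stores its spawn as the instance context; the exact upstream definition may
differ):

static inline struct crypto_alg *skcipher_ialg_simple(
        struct skcipher_instance *inst)
{
        struct crypto_spawn *spawn = skcipher_instance_ctx(inst);

        return spawn->alg;
}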

5f567fff 18-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Retain alg refcount in crypto_grab_spawn

This patch changes crypto_grab_spawn to retain the reference count
on the algorithm. This is because the caller needs to access the
algorithm parameters and without the reference count the algorithm
can be freed at any time.

The reference count will be subsequently dropped by the crypto API
once the instance has been registered. The helper crypto_drop_spawn
will also conditionally drop the reference count depending on whether
it has been registered.

Note that the code is actually added to crypto_init_spawn. However,
unless the caller activates this by setting spawn->dropref beforehand
then nothing happens. The only caller that sets dropref is currently
crypto_grab_spawn.

Once all legacy users of crypto_init_spawn disappear, then we can
kill the dropref flag.

Internally each instance will maintain a list of its spawns prior
to registration. The memory used by this list is shared with
other fields that are only used after registration. In order for
this to work a new flag spawn->registered is added to indicate
whether spawn->inst can be used.

Fixes: d6ef2f198d4c ("crypto: api - Add crypto_grab_spawn primitive")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c6d633a9 15-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: algapi - make unregistration functions return void

Some of the algorithm unregistration functions return -ENOENT when asked
to unregister a non-registered algorithm, while others always return 0
or always return void. But no users check the return value, except for
two of the bulk unregistration functions which print a message on error
but still always return 0 to their caller, and crypto_del_alg() which
calls crypto_unregister_instance() which always returns 0.

Since unregistering a non-registered algorithm is always a kernel bug
but there isn't anything callers should do to handle this situation at
runtime, let's simplify things by making all the unregistration
functions return void, and moving the error message into
crypto_unregister_alg() and upgrading it to a WARN().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2bbb3375 10-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - fix unexpectedly getting generic implementation

When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an
algorithm that needs to be instantiated using a template will always get
the generic implementation, even when an accelerated one is available.

This happens because the extra self-tests for the accelerated
implementation allocate the generic implementation for comparison
purposes, and then crypto_alg_tested() for the generic implementation
"fulfills" the original request (i.e. sets crypto_larval::adult).

This patch fixes this by only fulfilling the original request if
we are currently the best outstanding larval as judged by the
priority. If we're not the best then we will ask all waiters on
that larval request to retry the lookup.

Note that this patch introduces a behaviour change when the module
providing the new algorithm is unregistered during the process.
Previously we would have failed with ENOENT; after the patch we
will instead redo the lookup.

Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...")
Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...")
Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...")
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4a94c433 18-Dec-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'tpmdd-next-20191219' of git://git.infradead.org/users/jjs/linux-tpmdd

Pull tpm fixes from Jarkko Sakkinen:
"Bunch of fixes for rc3"

* tag 'tpmdd-next-20191219' of git://git.infradead.org/users/jjs/linux-tpmdd:
tpm/tpm_ftpm_tee: add shutdown call back
tpm: selftest: cleanup after unseal with wrong auth/policy test
tpm: selftest: add test covering async mode
tpm: fix invalid locking in NONBLOCKING mode
security: keys: trusted: fix lost handle flush
tpm_tis: reserve chip for duration of tpm_tis_core_init
KEYS: asymmetric: return ENOMEM if akcipher_request_alloc() fails
KEYS: remove CONFIG_KEYS_COMPAT


bea37414 09-Oct-2019 Eric Biggers <ebiggers@google.com>

KEYS: asymmetric: return ENOMEM if akcipher_request_alloc() fails

No error code was being set on this error path.

Cc: stable@vger.kernel.org
Fixes: ad4b1eb5fb33 ("KEYS: asym_tpm: Implement encryption operation [ver #2]")
Fixes: c08fed737126 ("KEYS: Implement encrypt, decrypt and sign for software asymmetric key [ver #2]")
Reviewed-by: James Morris <jamorris@linux.microsoft.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
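
The fix is the usual pattern of setting the error code before jumping to the
cleanup label, roughly (a sketch; the label and variable names are
illustrative):

req = akcipher_request_alloc(tfm, GFP_KERNEL);
if (!req) {
        ret = -ENOMEM;          /* previously left unset on this path */
        goto error_free_tfm;
}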

d9e1670b 07-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: hmac - Use init_tfm/exit_tfm interface

This patch switches hmac over to the new init_tfm/exit_tfm interface
as opposed to cra_init/cra_exit. This way the shash API can make
sure that descsize does not exceed the maximum.

This patch also adds the API helper shash_alg_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fbce6be5 07-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: shash - Add init_tfm/exit_tfm and verify descsize

The shash interface supports a dynamic descsize field because of
the presence of fallbacks (it's just padlock-sha actually, perhaps
we can remove it one day). As it is the API does not verify the
setting of descsize at all. It is up to the individual algorithms
to ensure that descsize does not exceed the specified maximum value
of HASH_MAX_DESCSIZE (going above would cause stack corruption).

In order to allow the API to impose this limit directly, this patch
adds init_tfm/exit_tfm hooks to the shash_alg structure. We can
then verify the descsize setting in the API directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
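
A minimal sketch of the check this makes possible (the exact call site
inside the shash API is an assumption, not the patch itself):

#include <crypto/hash.h>

/* Sketch only: run after the algorithm's init_tfm() hook may have
 * adjusted descsize, so an oversized descriptor is rejected before
 * it can ever be placed on the stack. */
static int check_shash_descsize(struct crypto_shash *tfm)
{
        if (crypto_shash_descsize(tfm) > HASH_MAX_DESCSIZE)
                return -EINVAL;
        return 0;
}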

02244ba4 07-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Add more comments to crypto_remove_spawns

This patch explains the logic behind crypto_remove_spawns and its
underlying crypto_more_spawns.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4f87ee11 07-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Do not zap spawn->alg

Currently when a spawn is removed we will zap its alg field.
This is racy because the spawn could belong to an unregistered
instance which may dereference the spawn->alg field.

This patch fixes this by keeping spawn->alg constant and instead
adding a new spawn->dead field to indicate that a spawn is going
away.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

73669cc5 07-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Fix race condition in crypto_spawn_alg

The function crypto_spawn_alg is racy because it drops the lock
before shooting the dying algorithm. The algorithm could disappear
altogether before we shoot it.

This patch fixes it by moving the shooting into the locked section.

Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7db3b61b 05-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Check spawn->alg under lock in crypto_drop_spawn

We need to check whether spawn->alg is NULL under lock as otherwise
the algorithm could be removed from under us after we have checked
it and found it to be non-NULL. This could cause us to remove the
spawn from a non-existent list.

Fixes: 7ede5a5ba55a ("crypto: api - Fix crypto_drop_spawn crash...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

37f96694 04-Dec-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: af_alg - Use bh_lock_sock in sk_destruct

As af_alg_release_parent may be called from BH context (most notably
due to an async request that only completes after socket closure,
or as reported here because of an RCU-delayed sk_destruct call), we
must use bh_lock_sock instead of lock_sock.

Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Fixes: c840ac6af3f8 ("crypto: af_alg - Disallow bind/setkey/...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

91a71d61 03-Dec-2019 Daniel Jordan <daniel.m.jordan@oracle.com>

padata: remove cpumask change notifier

Since commit 63d3578892dc ("crypto: pcrypt - remove padata cpumask
notifier") this feature is unused, so get rid of it.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e8cfed5e 02-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: cipher - remove crt_u.cipher (struct cipher_tfm)

Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey()
is pointless because it always points to setkey() in crypto/cipher.c.

->cit_decrypt_one() and ->cit_encrypt_one() are slightly less pointless,
since if the algorithm doesn't have an alignmask, they are set directly
to ->cia_encrypt() and ->cia_decrypt(). However, this "optimization"
isn't worthwhile because:

- The "cipher" algorithm type is the only algorithm still using crt_u,
so it's bloating every struct crypto_tfm for every algorithm type.

- If the algorithm has an alignmask, this "optimization" actually makes
things slower, as it causes 2 indirect calls per block rather than 1.

- It adds extra code complexity.

- Some templates already call ->cia_encrypt()/->cia_decrypt() directly
instead of going through ->cit_encrypt_one()/->cit_decrypt_one().

- The "cipher" algorithm type never gives optimal performance anyway.
For that, a higher-level type such as skcipher needs to be used.

Therefore, just remove the extra indirection, and make
crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and
crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.

Also remove the unused function crypto_cipher_cast().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
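
For reference, a small sketch of the single-block cipher API whose
entry points now call straight into crypto/cipher.c (the header
location shown reflects this era and is an assumption):

#include <linux/crypto.h>
#include <linux/err.h>

static int one_block_demo(const u8 *key, const u8 *in, u8 *out)
{
        struct crypto_cipher *tfm = crypto_alloc_cipher("aes", 0, 0);
        int err;

        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_cipher_setkey(tfm, key, 16);
        if (!err)
                crypto_cipher_encrypt_one(tfm, out, in); /* one 16-byte block */

        crypto_free_cipher(tfm);
        return err;
}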

c441a909 02-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: compress - remove crt_u.compress (struct compress_tfm)

crt_u.compress (struct compress_tfm) is pointless because its two
fields, ->cot_compress() and ->cot_decompress(), always point to
crypto_compress() and crypto_decompress().

Remove this pointless indirection, and just make crypto_comp_compress()
and crypto_comp_decompress() be direct calls to what used to be
crypto_compress() and crypto_decompress().

Also remove the unused function crypto_comp_cast().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

49763fc6 01-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - generate inauthentic AEAD test vectors

The whole point of using an AEAD over length-preserving encryption is
that the data is authenticated. However currently the fuzz tests don't
test any inauthentic inputs to verify that the data is actually being
authenticated. And only two algorithms ("rfc4543(gcm(aes))" and
"ccm(aes)") even have any inauthentic test vectors at all.

Therefore, update the AEAD fuzz tests to sometimes generate inauthentic
test vectors, either by generating a (ciphertext, AAD) pair without
using the key, or by mutating an authentic pair that was generated.

To avoid flakiness, only assume this works reliably if the auth tag is
at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp
algorithms intentionally ignoring the last 8 AAD bytes, and for some
algorithms doing extra checks that result in EINVAL rather than EBADMSG.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2ea91505 01-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - create struct aead_extra_tests_ctx

In preparation for adding inauthentic input fuzz tests, which don't
require that a generic implementation of the algorithm be available,
refactor test_aead_vs_generic_impl() so that instead there's a
higher-level function test_aead_extra() which initializes a struct
aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a
pointer to that struct.

As a bonus, this reduces stack usage.

Also switch from crypto_aead_alg(tfm)->maxauthsize to
crypto_aead_maxauthsize(), now that the latter is available in
<crypto/aead.h>.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fd8c37c7 01-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - test setting misaligned keys

The alignment bug in ghash_setkey() fixed by commit 5c6bc4dfa515
("crypto: ghash - fix unaligned memory access in ghash_setkey()")
wasn't reliably detected by the crypto self-tests on ARM because the
tests only set the keys directly from the test vectors.

To improve test coverage, update the tests to sometimes pass misaligned
keys to setkey(). This applies to shash, ahash, skcipher, and aead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
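
A hypothetical helper illustrating the idea behind the test change
(setkey_misaligned() is not from the patch, just a sketch of handing
setkey() a pointer that is deliberately not naturally aligned):

#include <linux/slab.h>
#include <linux/string.h>
#include <crypto/hash.h>

static int setkey_misaligned(struct crypto_shash *tfm,
                             const u8 *key, unsigned int keylen)
{
        u8 *buf = kmalloc(keylen + 1, GFP_KERNEL);
        int err;

        if (!buf)
                return -ENOMEM;
        memcpy(buf + 1, key, keylen);           /* offset by one byte */
        err = crypto_shash_setkey(tfm, buf + 1, keylen);
        kfree(buf);
        return err;
}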

fd60f727 01-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - check skcipher min_keysize

When checking two implementations of the same skcipher algorithm for
consistency, require that the minimum key size be the same, not just the
maximum key size. There's no good reason to allow different minimum key
sizes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

eb455dbd 01-Dec-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - don't try to decrypt uninitialized buffers

Currently if the comparison fuzz tests encounter an encryption error
when generating an skcipher or AEAD test vector, they will still test
the decryption side (passing it the uninitialized ciphertext buffer)
and expect it to fail with the same error.

This is sort of broken because it's not well-defined usage of the API to
pass an uninitialized buffer, and furthermore in the AEAD case it's
acceptable for the decryption error to be EBADMSG (meaning "inauthentic
input") even if the encryption error was something else like EINVAL.

Fix this for skcipher by explicitly initializing the ciphertext buffer
on error, and for AEAD by skipping the decryption test on error.

Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation")
Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c2881789 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms

The essiv and hmac templates refuse to use any hash algorithm that has a
->setkey() function, which includes not just algorithms that always need
a key, but also algorithms that optionally take a key.

Previously the only optionally-keyed hash algorithms in the crypto API
were non-cryptographic algorithms like crc32, so this didn't really
matter. But that's changed with BLAKE2 support being added. BLAKE2
should work with essiv and hmac, just like any other cryptographic hash.

Fix this by allowing the use of both algorithms without a ->setkey()
function and algorithms that have the OPTIONAL_KEY flag set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

89873b44 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove crypto_skcipher_extsize()

Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher_extsize() now simply calls crypto_alg_extsize(). So
remove it and just use crypto_alg_extsize().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7e1c1099 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove crypto_skcipher::decrypt

Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::decrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->decrypt.

Remove it and update crypto_skcipher_decrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

848755e3 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove crypto_skcipher::encrypt

Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::encrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->encrypt.

Remove it and update crypto_skcipher_encrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

15252d94 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove crypto_skcipher::setkey

Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::setkey now always points to skcipher_setkey().

Simplify by removing this function pointer and instead just making
skcipher_setkey() be crypto_skcipher_setkey() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9ac0d136 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove crypto_skcipher::keysize

Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::keysize is now redundant since it always equals
crypto_skcipher_alg(tfm)->max_keysize.

Remove it and update crypto_skcipher_default_keysize() accordingly.

Also rename crypto_skcipher_default_keysize() to
crypto_skcipher_max_keysize() to clarify that it specifically returns
the maximum key size, not some unspecified "default".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
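
After the rename, a caller queries the bounds like this (illustrative
sketch; the min_keysize access via the alg struct is an assumption
about typical usage, not part of the patch):

        unsigned int maxkeylen = crypto_skcipher_max_keysize(tfm);
        unsigned int minkeylen = crypto_skcipher_alg(tfm)->min_keysize;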

140734d3 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove crypto_skcipher::ivsize

Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::ivsize is now redundant since it always equals
crypto_skcipher_alg(tfm)->ivsize.

Remove it and update crypto_skcipher_ivsize() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0a940d4e 29-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: api - remove another reference to blkcipher

Update a comment to refer to crypto_alloc_skcipher() rather than
crypto_alloc_blkcipher() (the latter having been removed).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e8d99826 29-Nov-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: pcrypt - Do not clear MAY_SLEEP flag in original request

We should not be modifying the original request's MAY_SLEEP flag
upon completion. It makes no sense to do so anyway.

Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9c1e8836 26-Nov-2019 Kees Cook <keescook@chromium.org>

crypto: x86 - Regularize glue function prototypes

The crypto glue performed function prototype casting via macros to make
indirect calls to assembly routines. Instead of performing casts at the
call sites (which trips Control Flow Integrity prototype checking), switch
each prototype to a common standard set of arguments which allows the
removal of the existing macros. In order to keep pointer math unchanged,
internal casting between u128 pointers and u8 pointers is added.

Co-developed-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

bbefa1dd 26-Nov-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: pcrypt - Avoid deadlock by using per-instance padata queues

If the pcrypt template is used multiple times in an algorithm, then a
deadlock occurs because all pcrypt instances share the same
padata_instance, which completes requests in the order submitted. That
is, the inner pcrypt request waits for the outer pcrypt request while
the outer request is already waiting for the inner.

This patch fixes this by allocating a set of queues for each pcrypt
instance instead of using two global queues. In order to maintain
the existing user-space interface, the pinst structure remains global
so any sysfs modifications will apply to every pcrypt instance.

Note that when an update occurs we have to allocate memory for
every pcrypt instance. Should one of the allocations fail we
will abort the update without rolling back changes already made.

The new per-instance data structure is called padata_shell and is
essentially a wrapper around parallel_data.

Reproducer:

#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
	struct sockaddr_alg addr = {
		.salg_type = "aead",
		.salg_name = "pcrypt(pcrypt(rfc4106-gcm-aesni))"
	};
	int algfd, reqfd;
	char buf[32] = { 0 };

	algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(algfd, (void *)&addr, sizeof(addr));
	setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, 20);
	reqfd = accept(algfd, 0, 0);
	write(reqfd, buf, 32);
	read(reqfd, buf, 16);
}

Reported-by: syzbot+56c7151cad94eec37c521f0e47d2eee53f9361c4@syzkaller.appspotmail.com
Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

07bfd9bd 19-Nov-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: pcrypt - Fix use-after-free on module unload

On module unload of pcrypt we must unregister the crypto algorithms
first and then tear down the padata structure. As otherwise the
crypto algorithms are still alive and can be used while the padata
structure is being freed.

Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c593642c 09-Dec-2019 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>

treewide: Use sizeof_field() macro

Replace all the occurrences of FIELD_SIZEOF() with sizeof_field() except
at places where these are defined. Later patches will remove the unused
definition of FIELD_SIZEOF().

This patch is generated using following script:

EXCLUDE_FILES="include/linux/stddef.h|include/linux/kernel.h"

git grep -l -e "\bFIELD_SIZEOF\b" | while read file;
do
	if [[ "$file" =~ $EXCLUDE_FILES ]]; then
		continue
	fi
	sed -i -e 's/\bFIELD_SIZEOF\b/sizeof_field/g' $file;
done

Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Link: https://lore.kernel.org/r/20190924105839.110713-3-pankaj.laxminarayan.bharadiya@intel.com
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: David Miller <davem@davemloft.net> # for net

642356cb 25-Nov-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Add library interfaces of certain crypto algorithms for WireGuard
- Remove the obsolete ablkcipher and blkcipher interfaces
- Move add_early_randomness() out of rng_mutex

Algorithms:
- Add blake2b shash algorithm
- Add blake2s shash algorithm
- Add curve25519 kpp algorithm
- Implement 4 way interleave in arm64/gcm-ce
- Implement ciphertext stealing in powerpc/spe-xts
- Add Eric Biggers's scalar accelerated ChaCha code for ARM
- Add accelerated 32r2 code from Zinc for MIPS
- Add OpenSSL/CRYPTOGRAMS poly1305 implementation for ARM and MIPS

Drivers:
- Fix entropy reading failures in ks-sa
- Add support for sam9x60 in atmel
- Add crypto accelerator for amlogic GXL
- Add sun8i-ce Crypto Engine
- Add sun8i-ss cryptographic offloader
- Add a host of algorithms to inside-secure
- Add NPCM RNG driver
- add HiSilicon HPRE accelerator
- Add HiSilicon TRNG driver"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (285 commits)
crypto: vmx - Avoid weird build failures
crypto: lib/chacha20poly1305 - use chacha20_crypt()
crypto: x86/chacha - only unregister algorithms if registered
crypto: chacha_generic - remove unnecessary setkey() functions
crypto: amlogic - enable working on big endian kernel
crypto: sun8i-ce - enable working on big endian
crypto: mips/chacha - select CRYPTO_SKCIPHER, not CRYPTO_BLKCIPHER
hwrng: ks-sa - Enable COMPILE_TEST
crypto: essiv - remove redundant null pointer check before kfree
crypto: atmel-aes - Change data type for "lastc" buffer
crypto: atmel-tdes - Set the IV after {en,de}crypt
crypto: sun4i-ss - fix big endian issues
crypto: sun4i-ss - hide the Invalid keylen message
crypto: sun4i-ss - use crypto_ahash_digestsize
crypto: sun4i-ss - remove dependency on not 64BIT
crypto: sun4i-ss - Fix 64-bit size_t warnings on sun4i-ss-hash.c
MAINTAINERS: Add maintainer for HiSilicon SEC V2 driver
crypto: hisilicon - add DebugFS for HiSilicon SEC
Documentation: add DebugFS doc for HiSilicon SEC
crypto: hisilicon - add SRIOV for HiSilicon SEC
...


2043323a 18-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha_generic - remove unnecessary setkey() functions

Use chacha20_setkey() and chacha12_setkey() from
<crypto/internal/chacha.h> instead of defining them again in
chacha_generic.c.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

660eda8d 16-Nov-2019 Eric Biggers <ebiggers@google.com>

crypto: mips/chacha - select CRYPTO_SKCIPHER, not CRYPTO_BLKCIPHER

Another instance of CRYPTO_BLKCIPHER made it in just after it was
renamed to CRYPTO_SKCIPHER. Fix it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e18036da 15-Nov-2019 Chen Wandun <chenwandun@huawei.com>

crypto: essiv - remove redundant null pointer check before kfree

kfree has taken null pointer check into account. so it is safe to
remove the unnecessary check.

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c433a1a8 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - rename tfm context and _setkey callback

The TFM context can be renamed to a more appropriate name and the local
variables as well, using 'tctx', which seems to be more common than
'mctx'.

The _setkey callback was the last one without the blake2b_ prefix,
rename that too.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0b4b5f10 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - merge _update to api callback

Now that there's only one call to blake2b_update, we can merge it to the
callback and simplify. The empty input check is split and the rest of
code un-indented.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a2e4bdce 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - open code set last block helper

The helper is trivial and called once; inlining makes things simpler.
There's a comment to tie it back to the idea behind the code.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d063d632 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - delete unused structs or members

All the code for the param block has been inlined; last_node and outlen
from the state are not used or have become redundant due to other code.
Remove them.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e87e484d 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - simplify key init

The keyed init writes the key bytes to the input buffer and does an
update. We can do that in two ways: fill the buffer and update
immediately. This is what the current blake2b_init_key does. Any other
following _update or _final will continue from the updated state.

The other way is to write the key and set the number of bytes to process
at the next _update or _final (lazy evaluation), which leads to the
simplified code in this patch.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e3749695 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - merge blake2 init to api callback

The call chain from blake2b_init can be simplified because the param
block is effectively zeros, besides the key.

- blake2b_init0 zeroes state and sets IV
- blake2b_init sets up param block with defaults (key and some 1s)
- init with key, write it to the input buffer and recalculate state

So the compact way is to zero out the state and initialize index 0 of
the state directly with the non-zero values and the key.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

086db43b 12-Nov-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - merge _final implementation to callback

blake2b_final is called only once, merge it to the crypto API callback
and simplify. This avoids the temporary buffer and swaps the bytes of
internal buffer.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d63007eb 09-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: ablkcipher - remove deprecated and unused ablkcipher support

Now that all users of the deprecated ablkcipher interface have been
moved to the skcipher interface, ablkcipher is no longer used and
can be removed.

Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

07d8f185 08-Nov-2019 Corentin Labbe <clabbe@baylibre.com>

crypto: tcrypt - constify check alg list

This patch constifies the alg list because it is never modified.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

bb611bdf 08-Nov-2019 Jason A. Donenfeld <Jason@zx2c4.com>

crypto: curve25519 - x86_64 library and KPP implementations

This implementation is the fastest available x86_64 implementation, and
unlike Sandy2x, it doesn't require use of the floating point registers at
all. Instead it makes use of BMI2 and ADX, available on recent
microarchitectures. The implementation was written by Armando
Faz-Hernández with contributions (upstream) from Samuel Neves and me,
in addition to further changes in the kernel implementation from us.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: - move to arch/x86/crypto
- wire into lib/crypto framework
- implement crypto API KPP hooks ]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
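
A minimal sketch of the library interface these arch implementations
back (names from <crypto/curve25519.h>; treat the exact prototypes as
an assumption):

#include <crypto/curve25519.h>

static bool x25519_demo(u8 shared[CURVE25519_KEY_SIZE],
                        const u8 peer_public[CURVE25519_KEY_SIZE])
{
        u8 secret[CURVE25519_KEY_SIZE], pub[CURVE25519_KEY_SIZE];

        curve25519_generate_secret(secret);       /* random private scalar */
        curve25519_generate_public(pub, secret);  /* our public key */
        return curve25519(shared, secret, peer_public); /* false on all-zero result */
}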

ee772cb6 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: curve25519 - implement generic KPP driver

Expose the generic Curve25519 library via the crypto API KPP interface.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f613457a 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: curve25519 - add kpp selftest

In preparation of introducing KPP implementations of Curve25519, import
the set of test cases proposed by the Zinc patch set, but converted to
the KPP format.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ed0356ed 08-Nov-2019 Jason A. Donenfeld <Jason@zx2c4.com>

crypto: blake2s - x86_64 SIMD implementation

These implementations from Samuel Neves support AVX and AVX-512VL.
Originally this used AVX-512F, but Skylake thermal throttling made
AVX-512VL more attractive and possible to do with negligible difference.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: move to arch/x86/crypto, wire into lib/crypto framework]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7f9b0880 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: blake2s - implement generic shash driver

Wire up our newly added Blake2s implementation via the shash API.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

17e1df67 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: testmgr - add test cases for Blake2s

As suggested by Eric for the Blake2b implementation contributed by
David, introduce a set of test vectors for Blake2s covering different
digest and key sizes.

blake2s-128 blake2s-160 blake2s-224 blake2s-256
---------------------------------------------------
len=0 | klen=0 klen=1 klen=16 klen=32
len=1 | klen=16 klen=32 klen=0 klen=1
len=7 | klen=32 klen=0 klen=1 klen=16
len=15 | klen=1 klen=16 klen=32 klen=0
len=64 | klen=0 klen=1 klen=16 klen=32
len=247 | klen=16 klen=32 klen=0 klen=1
len=256 | klen=32 klen=0 klen=1 klen=16

Cc: David Sterba <dsterba@suse.com>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c12d3362 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

int128: move __uint128_t compiler test to Kconfig

In order to use 128-bit integer arithmetic in C code, the architecture
needs to have declared support for it by setting ARCH_SUPPORTS_INT128,
and it requires a version of the toolchain that supports this at build
time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test
whether __SIZEOF_INT128__ is defined, since this is only the case for
compilers that can support 128-bit integers.

Let's fold this additional test into the Kconfig declaration of
ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles,
e.g., to decide whether a certain object needs to be included in the
first place.

Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a11d055e 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: mips/poly1305 - incorporate OpenSSL/CRYPTOGAMS optimized implementation

This is a straight import of the OpenSSL/CRYPTOGAMS Poly1305 implementation for
MIPS authored by Andy Polyakov, a prior 64-bit only version of which has been
contributed by him to the OpenSSL project. The file 'poly1305-mips.pl' is taken
straight from this upstream GitHub repository [0] at commit
d22ade312a7af958ec955620b0d241cf42c37feb, and already contains all the changes
required to build it as part of a Linux kernel module.

[0] https://github.com/dot-asm/cryptogams

Co-developed-by: Andy Polyakov <appro@cryptogams.org>
Signed-off-by: Andy Polyakov <appro@cryptogams.org>
Co-developed-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f0e89bcf 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/poly1305 - expose existing driver as poly1305 library

Implement the arch init/update/final Poly1305 library routines in the
accelerated SIMD driver for x86 so they are accessible to users of
the Poly1305 library interface as well.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1b2c6a51 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/poly1305 - depend on generic library not generic shash

Remove the dependency on the generic Poly1305 driver. Instead, depend
on the generic library so that we only reuse code without pulling in
the generic skcipher implementation as well.

While at it, remove the logic that prefers the non-SIMD path for short
inputs - this is no longer necessary after recent FPU handling changes
on x86.

Since this removes the last remaining user of the routines exported
by the generic shash driver, unexport them and make them static.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a1d93064 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: poly1305 - expose init/update/final library interface

Expose the existing generic Poly1305 code via a init/update/final
library interface so that callers are not required to go through
the crypto API's shash abstraction to access it. At the same time,
make some preparations so that the library implementation can be
superseded by an accelerated arch-specific version in the future.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
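
A minimal usage sketch of the library interface described above,
assuming the init/update/final helpers and descriptor type declared in
<crypto/poly1305.h>:

#include <crypto/poly1305.h>

static void poly1305_demo(const u8 key[POLY1305_KEY_SIZE],
                          const u8 *data, unsigned int len,
                          u8 tag[POLY1305_DIGEST_SIZE])
{
        struct poly1305_desc_ctx desc;

        poly1305_init(&desc, key);         /* one-time 32-byte key */
        poly1305_update(&desc, data, len);
        poly1305_final(&desc, tag);        /* 16-byte authenticator */
}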

ad8f5b88 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/poly1305 - unify Poly1305 state struct with generic code

In preparation of exposing a Poly1305 library interface directly from
the accelerated x86 driver, align the state descriptor of the x86 code
with the one used by the generic driver. This is needed to make the
library interface unified between all implementations.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

48ea8c6e 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: poly1305 - move core routines into a separate library

Move the core Poly1305 routines shared between the generic Poly1305
shash driver and the Adiantum and NHPoly1305 drivers into a separate
library so that using just these pieces does not pull in the crypto
API pieces of the generic Poly1305 routine.

In a subsequent patch, we will augment this generic library with
init/update/final routines so that the Poly1305 algorithm can be used
directly without the need for using the crypto API's shash abstraction.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

22cf7053 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: chacha - unexport chacha_generic routines

Now that all users of generic ChaCha code have moved to the core library,
there is no longer a need for the generic ChaCha skcipher driver to
export parts of its implementation for reuse by other drivers. So drop
the exports, and make the symbols static.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3a2f58f3 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: mips/chacha - wire up accelerated 32r2 code from Zinc

This integrates the accelerated MIPS 32r2 implementation of ChaCha
into both the API and library interfaces of the kernel crypto stack.

The significance of this is that, in addition to becoming available
as an accelerated library implementation, it can also be used by
existing crypto API code such as Adiantum (for block encryption on
ultra low performance cores) or IPsec using chacha20poly1305. These
are use cases that have already opted into using the abstract crypto
API. In order to support Adiantum, the core assembler routine has
been adapted to take the round count as a function argument rather
than hardcoding it to 20.

Co-developed-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

84e03fa3 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/chacha - expose SIMD ChaCha routine as library function

Wire the existing x86 SIMD ChaCha code into the new ChaCha library
interface, so that users of the library interface will get the
accelerated version when available.

Given that calls into the library API will always go through the
routines in this module if it is enabled, switch to static keys
to select the optimal implementation available (which may be none
at all, in which case we defer to the generic implementation for
all invocations).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

28e8d89b 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/chacha - depend on generic chacha library instead of crypto driver

In preparation of extending the x86 ChaCha driver to also expose the ChaCha
library interface, drop the dependency on the chacha_generic crypto driver
as a non-SIMD fallback, and depend on the generic ChaCha library directly.
This way, we only pull in the code we actually need, without registering
a set of ChaCha skciphers that we will never use.

Since turning the FPU on and off is cheap these days, simplify the SIMD
routine by dropping the per-page yield, which makes for a cleaner switch
to the library API as well. This also allows us to invoke the skcipher
walk routines in non-atomic mode.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5fb8ef25 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: chacha - move existing library code into lib/crypto

Currently, our generic ChaCha implementation consists of a permute
function in lib/chacha.c that operates on the 64-byte ChaCha state
directly [and which is always included into the core kernel since it
is used by the /dev/random driver], and the crypto API plumbing to
expose it as a skcipher.

In order to support in-kernel users that need the ChaCha streamcipher
but have no need [or tolerance] for going through the abstractions of
the crypto API, let's expose the streamcipher bits via a library API
as well, in a way that permits the implementation to be superseded by
an architecture specific one if provided.

So move the streamcipher code into a separate module in lib/crypto,
and expose the init() and crypt() routines to users of the library.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
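
A small sketch of the resulting library interface (the exact
prototypes of chacha_init()/chacha_crypt() in <crypto/chacha.h> are an
assumption):

#include <crypto/chacha.h>

static void chacha20_demo(const u32 key[8], const u8 iv[16],
                          u8 *dst, const u8 *src, unsigned int bytes)
{
        u32 state[16];

        chacha_init(state, key, iv);              /* constants, key, counter + nonce */
        chacha_crypt(state, dst, src, bytes, 20); /* 20 rounds = ChaCha20 */
}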

746b2e02 08-Nov-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: lib - tidy up lib/crypto Kconfig and Makefile

In preparation of introducing a set of crypto library interfaces, tidy
up the Makefile and split off the Kconfig symbols into a separate file.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

20cc01ba 08-Nov-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: aead - Split out geniv into its own module

If aead is built as a module along with cryptomgr, it creates a
dependency loop due to the dependency chain aead => crypto_null =>
cryptomgr => aead.

This is due to the presence of the AEAD geniv code. This code is
not really part of the AEAD API but simply support code for IV
generators such as seqiv. This patch moves the geniv code into
its own module thus breaking the dependency loop.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8ab23d54 08-Nov-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: api - Add softdep on cryptomgr

The crypto API requires cryptomgr to be present for probing to work
so we need a softdep to ensure that cryptomgr is added to the
initramfs.

This was usually not a problem because, until very recently, it was
not practical to build the crypto API as a module, but with the recent
work to eliminate direct AES users this is now possible.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

698b2227 05-Nov-2019 Tian Tao <tiantao6@huawei.com>

crypto: tgr192 - remove unneeded semicolon

Fix the warning below.
./crypto/tgr192.c:558:43-44: Unneeded semicolon
./crypto/tgr192.c:586:44-45: Unneeded semicolon

Fixes: f63fbd3d501b ("crypto: tgr192 - Switch to shash")

Signed-off-by: Tian Tao <tiantao6@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

47f9c279 15-Oct-2019 Sumit Garg <sumit.garg@linaro.org>

KEYS: trusted: Create trusted keys subsystem

Move existing code to trusted keys subsystem. Also, rename files with
"tpm" as suffix which provides the underlying implementation.

Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>

c6f61e59 15-Oct-2019 Sumit Garg <sumit.garg@linaro.org>

KEYS: Use common tpm_buf for trusted and asymmetric keys

Switch to utilize common heap based tpm_buf code for TPM based trusted
and asymmetric keys rather than using stack based tpm1_buf code. Also,
remove tpm1_buf code.

Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>

74edff2d 15-Oct-2019 Sumit Garg <sumit.garg@linaro.org>

tpm: Move tpm_buf code to include/linux/

Move tpm_buf code to common include/linux/tpm.h header so that it can
be reused via other subsystems like trusted keys etc.

Also rename trusted keys and asymmetric keys usage of TPM 1.x buffer
implementation to tpm1_buf to avoid any compilation errors.

Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>

b95bba5d 25-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - rename the crypto_blkcipher module and kconfig option

Now that the blkcipher algorithm type has been removed in favor of
skcipher, rename the crypto_blkcipher kernel module to crypto_skcipher,
and rename the config options accordingly:

CONFIG_CRYPTO_BLKCIPHER => CONFIG_CRYPTO_SKCIPHER
CONFIG_CRYPTO_BLKCIPHER2 => CONFIG_CRYPTO_SKCIPHER2

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c65058b7 25-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove the "blkcipher" algorithm type

Now that all "blkcipher" algorithms have been converted to "skcipher",
remove the blkcipher algorithm type.

The skcipher (symmetric key cipher) algorithm type was introduced a few
years ago to replace both blkcipher and ablkcipher (synchronous and
asynchronous block cipher). The advantages of skcipher include:

- A much less confusing name, since none of these algorithm types have
ever actually been for raw block ciphers, but rather for all
length-preserving encryption modes including block cipher modes of
operation, stream ciphers, and other length-preserving modes.

- It unified blkcipher and ablkcipher into a single algorithm type
which supports both synchronous and asynchronous implementations.
Note, blkcipher already operated only on scatterlists, so the fact
that skcipher does too isn't a regression in functionality.

- Better type safety by using struct skcipher_alg, struct
crypto_skcipher, etc. instead of crypto_alg, crypto_tfm, etc.

- It sometimes simplifies the implementations of algorithms.

Also, the blkcipher API was no longer being tested.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

53253064 25-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - rename crypto_skcipher_type2 to crypto_skcipher_type

Now that the crypto_skcipher_type() function has been removed, there's
no reason to call the crypto_type struct for skciphers
"crypto_skcipher_type2". Rename it to simply "crypto_skcipher_type".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d3ca75a8 25-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - unify the crypto_has_skcipher*() functions

crypto_has_skcipher() and crypto_has_skcipher2() do the same thing: they
check for the availability of an algorithm of type skcipher, blkcipher,
or ablkcipher, which also meets any non-type constraints the caller
specified. And they have exactly the same prototype.

Therefore, eliminate the redundancy by removing crypto_has_skcipher()
and renaming crypto_has_skcipher2() to crypto_has_skcipher().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
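
Callers are left with a single probe helper; a trivial usage sketch:

        if (crypto_has_skcipher("cbc(aes)", 0, 0))
                pr_info("a cbc(aes) implementation is available\n");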

a1afe274 24-Oct-2019 David Sterba <dsterba@suse.com>

crypto: testmgr - add test vectors for blake2b

Test vectors for blake2b with various digest sizes. As the algorithm is
the same up to the digest calculation, the key and input data length is
distributed in a way that tests all combinations of the two over the
digest sizes.

Based on the suggestion from Eric, the following input sizes are tested
[0, 1, 7, 15, 64, 247, 256], where blake2b blocksize is 128, so the
padded and the non-padded input buffers are tested.

blake2b-160 blake2b-256 blake2b-384 blake2b-512
---------------------------------------------------
len=0 | klen=0 klen=1 klen=32 klen=64
len=1 | klen=32 klen=64 klen=0 klen=1
len=7 | klen=64 klen=0 klen=1 klen=32
len=15 | klen=1 klen=32 klen=64 klen=0
len=64 | klen=0 klen=1 klen=32 klen=64
len=247 | klen=32 klen=64 klen=0 klen=1
len=256 | klen=64 klen=0 klen=1 klen=32

Where key:

- klen=0: empty key
- klen=1: 1 byte value 0x42, 'B'
- klen=32: first 32 bytes of the default key, sequence 00..1f
- klen=64: default key, sequence 00..3f

The unkeyed vectors are ordered before keyed, as this is required by
testmgr.

CC: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

91d68933 24-Oct-2019 David Sterba <dsterba@suse.com>

crypto: blake2b - add blake2b generic implementation

The patch brings support of several BLAKE2 variants (2b with various
digest lengths). The keyed digest is supported via the tfm->setkey call.
The in-tree user will be btrfs (for checksumming), we're going to use
the BLAKE2b-256 variant.

The code is reference implementation taken from the official sources and
modified in terms of kernel coding style (whitespace, comments, uintXX_t
-> uXX types, removed unused prototypes and #ifdefs, removed testing
code, changed secure_zero_memory -> memzero_explicit, used own helpers
for unaligned reads/writes and rotations).

Further changes removed sanity checks of key length or output size,
these values are verified in the crypto API callbacks or hardcoded in
shash_alg and not exposed to users.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
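
A sketch of keyed use through the shash API as described above
("blake2b-256" being the variant btrfs plans to use; the helper below
is illustrative only):

#include <crypto/hash.h>
#include <linux/err.h>

static int blake2b_demo(const u8 *key, unsigned int keylen,
                        const u8 *data, unsigned int len, u8 *out)
{
        struct crypto_shash *tfm = crypto_alloc_shash("blake2b-256", 0, 0);
        int err;

        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_shash_setkey(tfm, key, keylen);    /* keyed digest */
        if (!err) {
                SHASH_DESC_ON_STACK(desc, tfm);

                desc->tfm = tfm;
                err = crypto_shash_digest(desc, data, len, out); /* 32-byte digest */
        }
        crypto_free_shash(tfm);
        return err;
}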

f398243e 23-Oct-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: ecdh - fix big endian bug in ECC library

The elliptic curve arithmetic library used by the EC-DH KPP implementation
assumes big endian byte order, and unconditionally reverses the byte
and word order of multi-limb quantities. On big endian systems, the byte
reordering is not necessary, while the word ordering needs to be retained.

So replace the __swab64() invocation with a call to be64_to_cpu() which
should do the right thing for both little and big endian builds.

Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
Cc: <stable@vger.kernel.org> # v4.9+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
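
An illustration of the distinction the fix relies on: __swab64()
always reverses byte order, while be64_to_cpu() swaps only on little
endian hosts and is a no-op on big endian ones, which is what loading
big endian limbs into native u64 values needs (sketch, not the patch):

#include <asm/byteorder.h>

static u64 load_limb(const __be64 *src)
{
        return be64_to_cpu(*src);       /* correct on either endianness */
}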

7f725f41 14-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: powerpc - convert SPE AES algorithms to skcipher API

Convert the glue code for the PowerPC SPE implementations of AES-ECB,
AES-CBC, AES-CTR, and AES-XTS from the deprecated "blkcipher" API to the
"skcipher" API. This is needed in order for the blkcipher API to be
removed.

Tested with:

export ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-
make mpc85xx_defconfig
cat >> .config << EOF
# CONFIG_MODULES is not set
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_AES_PPC_SPE=y
EOF
make olddefconfig
make -j32
qemu-system-ppc -M mpc8544ds -cpu e500 -nographic \
-kernel arch/powerpc/boot/zImage \
-append cryptomgr.fuzz_iterations=1000

Note that xts-ppc-spe still fails the comparison tests due to the lack
of ciphertext stealing support. This is not addressed by this patch.

This patch also cleans up the code by making ->encrypt() and ->decrypt()
call a common function for each of ECB, CBC, and XTS, and by using a
clearer way to compute the length to process at each step.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

52828263 14-Oct-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - duplicate init() and final() hooks in SIMD code

In order to speed up aegis128 processing even more, duplicate the init()
and final() routines as SIMD versions in their entirety. This results
in a 2x speedup on ARM Cortex-A57 for ~1500 byte packets (using AES
instructions).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2698bce1 14-Oct-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - avoid function pointers for parameterization

Instead of passing around an ops structure with function pointers,
which forces indirect calls to be used, refactor the code slightly
so we can use ordinary function calls. At the same time, switch to
a static key to decide whether or not the SIMD code path may be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cd5d2f84 11-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: sparc/des - convert to skcipher API

Convert the glue code for the SPARC64 DES opcodes implementations of
DES-ECB, DES-CBC, 3DES-ECB, and 3DES-CBC from the deprecated "blkcipher"
API to the "skcipher" API. This is needed in order for the blkcipher
API to be removed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c72a26ef 11-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: sparc/camellia - convert to skcipher API

Convert the glue code for the SPARC64 Camellia opcodes implementations
of Camellia-ECB and Camellia-CBC from the deprecated "blkcipher" API to
the "skcipher" API. This is needed in order for the blkcipher API to be
removed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

64db5e74 11-Oct-2019 Eric Biggers <ebiggers@google.com>

crypto: sparc/aes - convert to skcipher API

Convert the glue code for the SPARC64 AES opcodes implementations of
AES-ECB, AES-CBC, and AES-CTR from the deprecated "blkcipher" API to the
"skcipher" API. This is needed in order for the blkcipher API to be
removed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

965d7286 09-Oct-2019 Ben Dooks <ben.dooks@codethink.co.uk>

crypto: jitter - add header to fix buildwarnings

Fix the following build warnings by adding a header for
the definitions shared between jitterentropy.c and
jitterentropy-kcapi.c. Fixes the following:

crypto/jitterentropy.c:445:5: warning: symbol 'jent_read_entropy' was not declared. Should it be static?
crypto/jitterentropy.c:475:18: warning: symbol 'jent_entropy_collector_alloc' was not declared. Should it be static?
crypto/jitterentropy.c:509:6: warning: symbol 'jent_entropy_collector_free' was not declared. Should it be static?
crypto/jitterentropy.c:516:5: warning: symbol 'jent_entropy_init' was not declared. Should it be static?
crypto/jitterentropy-kcapi.c:59:6: warning: symbol 'jent_zalloc' was not declared. Should it be static?
crypto/jitterentropy-kcapi.c:64:6: warning: symbol 'jent_zfree' was not declared. Should it be static?
crypto/jitterentropy-kcapi.c:69:5: warning: symbol 'jent_fips_enabled' was not declared. Should it be static?
crypto/jitterentropy-kcapi.c:74:6: warning: symbol 'jent_panic' was not declared. Should it be static?
crypto/jitterentropy-kcapi.c:79:6: warning: symbol 'jent_memcpy' was not declared. Should it be static?
crypto/jitterentropy-kcapi.c:93:6: warning: symbol 'jent_get_nstime' was not declared. Should it be static?

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c03b04dc 04-Oct-2019 Navid Emamdoost <navid.emamdoost@gmail.com>

crypto: user - fix memory leak in crypto_reportstat

In crypto_reportstat, a new skb is created by nlmsg_new(). This skb is
leaked if crypto_reportstat_alg() fails. The required release for the
skb is added.

Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
Cc: <stable@vger.kernel.org>
Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ffdde593 04-Oct-2019 Navid Emamdoost <navid.emamdoost@gmail.com>

crypto: user - fix memory leak in crypto_report

In crypto_report, a new skb is created via nlmsg_new(). This skb should
be released if crypto_report_alg() fails.

Fixes: a38f7907b926 ("crypto: Add userspace configuration API")
Cc: <stable@vger.kernel.org>
Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

64e7f852 04-Oct-2019 Ayush Sawal <ayush.sawal@chelsio.com>

crypto: af_alg - cast ki_complete ternary op to int

When the libkcapi test is executed using a HW accelerator, the cipher
operation returns -74. Since af_alg_async_cb->ki_complete treats err as
an unsigned int, libkcapi receives 429467222 even though it expects a
negative value.

Hence resultlen needs to be cast to int so that the proper error is
returned to libkcapi.

AEAD one shot non-aligned test 2(libkcapi test)
./../bin/kcapi -x 10 -c "gcm(aes)" -i 7815d4b06ae50c9c56e87bd7
-k ea38ac0c9b9998c80e28fb496a2b88d9 -a
"853f98a750098bec1aa7497e979e78098155c877879556bb51ddeb6374cbaefc"
-t "c4ce58985b7203094be1d134c1b8ab0b" -q
"b03692f86d1b8b39baf2abb255197c98"

Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

83053677 02-Oct-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128/simd - build 32-bit ARM for v8 architecture explicitly

Now that the Clang compiler has taken it upon itself to police the
compiler command line, and reject combinations for arguments it views
as incompatible, the AEGIS128 no longer builds correctly, and errors
out like this:

clang-10: warning: ignoring extension 'crypto' because the 'armv7-a'
architecture does not support it [-Winvalid-command-line-argument]

So let's switch to armv8-a instead, which matches the crypto-neon-fp-armv8
FPU profile we specify. Since neither were actually supported by GCC
versions before 4.8, let's tighten the Kconfig dependencies as well so
we won't run into errors when building with an ancient compiler.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Reported-by: <ci_notify@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e1f653cb 17-Sep-2019 Alexander E. Patrakov <patrakov@gmail.com>

crypto: jitter - fix comments

One should not say "ec can be NULL" and then dereference it.
One cannot talk about the return value if the function returns void.

Signed-off-by: Alexander E. Patrakov <patrakov@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2eb2d198 13-Sep-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128-neon - use Clang compatible cflags for ARM

The next version of Clang will start policing compiler command line
options, and will reject combinations of -march and -mfpu that it
thinks are incompatible.

This results in errors like

clang-10: warning: ignoring extension 'crypto' because the 'armv7-a'
architecture does not support it [-Winvalid-command-line-argument]
/tmp/aegis128-neon-inner-5ee428.s: Assembler messages:
/tmp/aegis128-neon-inner-5ee428.s:73: Error: selected
processor does not support `aese.8 q2,q14' in ARM mode

when building the SIMD aegis128 code for 32-bit ARM, given that the
'armv7-a' -march argument is considered to be incompatible with the
ARM crypto extensions. Instead, we should use armv8-a, which does
allow the crypto extensions to be enabled.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e4886214 13-Sep-2019 Pascal van Leeuwen <pascalvanl@gmail.com>

crypto: testmgr - Added testvectors for the rfc3686(ctr(sm4)) skcipher

Added testvectors for the rfc3686(ctr(sm4)) skcipher algorithm

changes since v1:
- nothing

Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a06b15b2 13-Sep-2019 Pascal van Leeuwen <pascalvanl@gmail.com>

crypto: testmgr - Added testvectors for the ofb(sm4) & cfb(sm4) skciphers

Added testvectors for the ofb(sm4) and cfb(sm4) skcipher algorithms

changes since v1:
- nothing

Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8194fd1d 13-Sep-2019 Pascal van Leeuwen <pascalvanl@gmail.com>

crypto: testmgr - Added testvectors for the hmac(sm3) ahash

Added testvectors for the hmac(sm3) ahash authentication algorithm

changes since v1 & v2:
-nothing

Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ec05a74f 10-Sep-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: testmgr - add another gcm(aes) testcase

Add an additional gcm(aes) test case that triggers the code path in
the new arm64 driver that deals with tail blocks whose size is not
a multiple of the block size, and where the size of the preceding
input is a multiple of 64 bytes.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5b0fe955 09-Sep-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: algif_skcipher - Use chunksize instead of blocksize

When algif_skcipher does a partial operation it always processes data
that is a multiple of blocksize. However, for algorithms such as
CTR this is wrong because even though it can process any number of
bytes overall, the partial block must come at the very end and not
in the middle.

This is exactly what chunksize is meant to describe so this patch
changes blocksize to chunksize.

Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
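
A hedged sketch of the rule described above (the helper name is
illustrative, not actual algif_skcipher code): a partial operation must be
trimmed to a multiple of the chunk size rather than the block size. For
"ctr(aes)" the block size is 1 but the chunk size is 16, so trimming by
block size could leave a partial AES block in the middle of the stream,
which a follow-up request cannot continue correctly.

#include <linux/kernel.h>
#include <crypto/skcipher.h>

/* Illustrative helper: cap a partial operation at a multiple of the chunk
 * size so a partial block can only ever occur at the very end. */
static unsigned int example_partial_len(struct crypto_skcipher *tfm,
					unsigned int avail)
{
	return rounddown(avail, crypto_skcipher_chunksize(tfm));
}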

aefcf2f4 28-Sep-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull kernel lockdown mode from James Morris:
"This is the latest iteration of the kernel lockdown patchset, from
Matthew Garrett, David Howells and others.

From the original description:

This patchset introduces an optional kernel lockdown feature,
intended to strengthen the boundary between UID 0 and the kernel.
When enabled, various pieces of kernel functionality are restricted.
Applications that rely on low-level access to either hardware or the
kernel may cease working as a result - therefore this should not be
enabled without appropriate evaluation beforehand.

The majority of mainstream distributions have been carrying variants
of this patchset for many years now, so there's value in providing a
mainline implementation. It doesn't meet every distribution requirement,
but gets us much closer to not requiring external patches.

There are two major changes since this was last proposed for mainline:

- Separating lockdown from EFI secure boot. Background discussion is
covered here: https://lwn.net/Articles/751061/

- Implementation as an LSM, with a default stackable lockdown LSM
module. This allows the lockdown feature to be policy-driven,
rather than encoding an implicit policy within the mechanism.

The new locked_down LSM hook is provided to allow LSMs to make a
policy decision around whether kernel functionality that would allow
tampering with or examining the runtime state of the kernel should be
permitted.

The included lockdown LSM provides an implementation with a simple
policy intended for general purpose use. This policy provides a coarse
level of granularity, controllable via the kernel command line:

lockdown={integrity|confidentiality}

Enable the kernel lockdown feature. If set to integrity, kernel features
that allow userland to modify the running kernel are disabled. If set to
confidentiality, kernel features that allow userland to extract
confidential information from the kernel are also disabled.

This may also be controlled via /sys/kernel/security/lockdown and
overridden by kernel configuration.

New or existing LSMs may implement finer-grained controls of the
lockdown features. Refer to the lockdown_reason documentation in
include/linux/security.h for details.

The lockdown feature has had significant design feedback and review
across many subsystems. This code has been in linux-next for some
weeks, with a few fixes applied along the way.

Stephen Rothwell noted that commit 9d1f8be5cf42 ("bpf: Restrict bpf
when kernel lockdown is in confidentiality mode") is missing a
Signed-off-by from its author. Matthew responded that he is providing
this under category (c) of the DCO"

* 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits)
kexec: Fix file verification on S390
security: constify some arrays in lockdown LSM
lockdown: Print current->comm in restriction messages
efi: Restrict efivar_ssdt_load when the kernel is locked down
tracefs: Restrict tracefs when the kernel is locked down
debugfs: Restrict debugfs when the kernel is locked down
kexec: Allow kexec_file() with appropriate IMA policy when locked down
lockdown: Lock down perf when in confidentiality mode
bpf: Restrict bpf when kernel lockdown is in confidentiality mode
lockdown: Lock down tracing and perf kprobes when in confidentiality mode
lockdown: Lock down /proc/kcore
x86/mmiotrace: Lock down the testmmiotrace module
lockdown: Lock down module params that specify hardware parameters (eg. ioport)
lockdown: Lock down TIOCSSERIAL
lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down
acpi: Disable ACPI table override if the kernel is locked down
acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down
ACPI: Limit access to custom_method when the kernel is locked down
x86/msr: Restrict MSR access when the kernel is locked down
x86: Lock down IO port access when the kernel is locked down
...


f1f2f614 27-Sep-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity

Pull integrity updates from Mimi Zohar:
"The major feature in this time is IMA support for measuring and
appraising appended file signatures. In addition are a couple of bug
fixes and code cleanup to use struct_size().

In addition to the PE/COFF and IMA xattr signatures, the kexec kernel
image may be signed with an appended signature, using the same
scripts/sign-file tool that is used to sign kernel modules.

Similarly, the initramfs may contain an appended signature.

This contained a lot of refactoring of the existing appended signature
verification code, so that IMA could retain the existing framework of
calculating the file hash once, storing it in the IMA measurement list
and extending the TPM, verifying the file's integrity based on a file
hash or signature (eg. xattrs), and adding an audit record containing
the file hash, all based on policy. (The IMA support for appended
signatures patch set was posted and reviewed 11 times.)

The support for appended signature paves the way for adding other
signature verification methods, such as fs-verity, based on a single
system-wide policy. The file hash used for verifying the signature and
the signature, itself, can be included in the IMA measurement list"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
ima: ima_api: Use struct_size() in kzalloc()
ima: use struct_size() in kzalloc()
sefltest/ima: support appended signatures (modsig)
ima: Fix use after free in ima_read_modsig()
MODSIGN: make new include file self contained
ima: fix freeing ongoing ahash_request
ima: always return negative code for error
ima: Store the measurement again when appraising a modsig
ima: Define ima-modsig template
ima: Collect modsig
ima: Implement support for module-style appended signatures
ima: Factor xattr_verify() out of ima_appraise_measurement()
ima: Add modsig appraise_type option for module-style appended signatures
integrity: Select CONFIG_KEYS instead of depending on it
PKCS#7: Introduce pkcs7_get_digest()
PKCS#7: Refactor verify_pkcs7_signature()
MODSIGN: Export module signature definitions
ima: initialize the "template" field with the default template


3e414b5b 21-Sep-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper updates from Mike Snitzer:

- crypto and DM crypt advances that allow the crypto API to reclaim
implementation details that do not belong in DM crypt. The wrapper
template for ESSIV generation that was factored out will also be used
by fscrypt in the future.

- Add root hash pkcs#7 signature verification to the DM verity target.

- Add a new "clone" DM target that allows for efficient remote
replication of a device.

- Enhance DM bufio's cache to be tailored to each client based on use.
Clients that make heavy use of the cache get more of it, and those
that use less have reduced cache usage.

- Add a new DM_GET_TARGET_VERSION ioctl to allow userspace to query the
version number of a DM target (even if the associated module isn't
yet loaded).

- Fix invalid memory access in DM zoned target.

- Fix the max_discard_sectors limit advertised by the DM raid target;
it was mistakenly storing the limit in bytes rather than sectors.

- Small optimizations and cleanups in DM writecache target.

- Various fixes and cleanups in DM core, DM raid1 and space map portion
of DM persistent data library.

* tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (22 commits)
dm: introduce DM_GET_TARGET_VERSION
dm bufio: introduce a global cache replacement
dm bufio: remove old-style buffer cleanup
dm bufio: introduce a global queue
dm bufio: refactor adjust_total_allocated
dm bufio: call adjust_total_allocated from __link_buffer and __unlink_buffer
dm: add clone target
dm raid: fix updating of max_discard_sectors limit
dm writecache: skip writecache_wait for pmem mode
dm stats: use struct_size() helper
dm crypt: omit parsing of the encapsulated cipher
dm crypt: switch to ESSIV crypto API template
crypto: essiv - create wrapper template for ESSIV generation
dm space map common: remove check for impossible sm_find_free() return value
dm raid1: use struct_size() with kzalloc()
dm writecache: optimize performance by sorting the blocks for writeback_all
dm writecache: add unlikely for getting two block with same LBA
dm writecache: remove unused member pointer in writeback_struct
dm zoned: fix invalid memory access
dm verity: add root hash pkcs#7 signature verification
...


cc491d8e 05-Sep-2019 Daniel Jordan <daniel.m.jordan@oracle.com>

padata, pcrypt: take CPU hotplug lock internally in padata_alloc_possible

With pcrypt's cpumask no longer used, take the CPU hotplug lock inside
padata_alloc_possible.

Useful later in the series for avoiding nested acquisition of the CPU
hotplug lock in padata when padata_alloc_possible is allocating an
unbound workqueue.

Without this patch, this nested acquisition would happen later in the
series:

pcrypt_init_padata
get_online_cpus
alloc_padata_possible
alloc_padata
alloc_workqueue(WQ_UNBOUND) // later in the series
alloc_and_link_pwqs
apply_wqattrs_lock
get_online_cpus // recursive rwsem acquisition

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

63d35788 05-Sep-2019 Daniel Jordan <daniel.m.jordan@oracle.com>

crypto: pcrypt - remove padata cpumask notifier

Now that padata_do_parallel takes care of finding an alternate callback
CPU, there's no need for pcrypt's callback cpumask, so remove it and the
notifier callback that keeps it in sync.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e6ce0e08 05-Sep-2019 Daniel Jordan <daniel.m.jordan@oracle.com>

padata: make padata_do_parallel find alternate callback CPU

padata_do_parallel currently returns -EINVAL if the callback CPU isn't
in the callback cpumask.

pcrypt tries to prevent this situation by keeping its own callback
cpumask in sync with padata's and checks that the callback CPU it passes
to padata is valid. Make padata handle this instead.

padata_do_parallel now takes a pointer to the callback CPU and updates
it for the caller if an alternate CPU is used. Overall behavior in
terms of which callback CPUs are chosen stays the same.

Prepares for removal of the padata cpumask notifier in pcrypt, which
will fix a lockdep complaint about nested acquisition of the CPU hotplug
lock later in the series.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b128a304 05-Sep-2019 Daniel Jordan <daniel.m.jordan@oracle.com>

padata: allocate workqueue internally

Move workqueue allocation inside of padata to prepare for further
changes to how padata uses workqueues.

Guarantees the workqueue is created with max_active=1, which padata
relies on to work correctly. No functional change.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-crypto@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0ba3c026 05-Sep-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: skcipher - Unmap pages after an external error

skcipher_walk_done may be called with an error by internal or
external callers. For those internal callers we shouldn't unmap
pages but for external callers we must unmap any pages that are
in use.

This patch distinguishes between the two cases by checking whether
walk->nbytes is zero or not. For internal callers, we now set
walk->nbytes to zero prior to the call. For external callers,
walk->nbytes has always been non-zero (as zero is used to indicate
the termination of a walk).

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
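
A hedged sketch of the convention this establishes (not the kernel
source): an internal caller that wants to report an error with nothing
mapped clears walk->nbytes first, so skcipher_walk_done() can tell it
apart from external callers, which always arrive with nbytes non-zero.

#include <crypto/internal/skcipher.h>

/* Illustrative internal error path: zeroing nbytes marks this caller as
 * internal, so no unmap/copy-back is attempted. */
static int example_internal_error(struct skcipher_walk *walk, int err)
{
	walk->nbytes = 0;
	return skcipher_walk_done(walk, err);
}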

34d6245f 01-Sep-2019 Hans de Goede <hdegoede@redhat.com>

crypto: sha256 - Merge crypto/sha256.h into crypto/sha.h

The generic sha256 implementation from lib/crypto/sha256.c uses data
structs defined in crypto/sha.h, so lets move the function prototypes
there too.

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

be1eb7f7 19-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: essiv - create wrapper template for ESSIV generation

Implement a template that wraps a (skcipher,shash) or (aead,shash) tuple
so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and
move it into the crypto API. This will result in better test coverage, and
will allow future changes to make the bare cipher interface internal to the
crypto subsystem, in order to increase robustness of the API against misuse.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
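
A hedged usage sketch (error handling trimmed): once the template exists,
an ESSIV-wrapped mode can simply be requested by name, which is what
dm-crypt switches to in the follow-up patch; the "essiv(cbc(aes),sha256)"
spelling matches the test vectors added elsewhere in this series.

#include <crypto/skcipher.h>

/* Illustrative allocation of an ESSIV-wrapped cbc(aes) skcipher. */
static struct crypto_skcipher *example_alloc_essiv(void)
{
	return crypto_alloc_skcipher("essiv(cbc(aes),sha256)", 0, 0);
}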

f1d087b9 22-Aug-2019 YueHaibing <yuehaibing@huawei.com>

crypto: aegis128 - Fix -Wunused-const-variable warning

crypto/aegis.h:27:32: warning:
crypto_aegis_const defined but not used [-Wunused-const-variable=]

crypto_aegis_const is only used in aegis128-core.c,
just move the definition over there.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f975abb2 19-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: essiv - add tests for essiv in cbc(aes)+sha256 mode

Add a test vector for the ESSIV mode that is the most widely used,
i.e., using cbc(aes) and sha256, in both skcipher and AEAD modes
(the latter is used by tcrypt to encapsulate the authenc template
or h/w instantiations of the same)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

389139b3 19-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: arm64/aegis128 - use explicit vector load for permute vectors

When building the new aegis128 NEON code in big endian mode, Clang
complains about the const uint8x16_t permute vectors in the following
way:

crypto/aegis128-neon-inner.c:58:40: warning: vector initializers are not
compatible with NEON intrinsics in big endian mode
[-Wnonportable-vector-initialization]
static const uint8x16_t shift_rows = {
^
crypto/aegis128-neon-inner.c:58:40: note: consider using vld1q_u8() to
initialize a vector from memory, or vcombine_u8(vcreate_u8(), vcreate_u8())
to initialize from integer constants

Since the same issue applies to the uint8x16x4_t loads of the AES Sbox,
update those references as well. However, since GCC does not implement
the vld1q_u8_x4() intrinsic, switch from IS_ENABLED() to a preprocessor
conditional to conditionally include this code.

Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

08c327f6 17-Aug-2019 Hans de Goede <hdegoede@redhat.com>

crypto: sha256_generic - Switch to the generic lib/crypto/sha256.c lib code

Drop the duplicate generic sha256 (and sha224) implementation from
crypto/sha256_generic.c and use the implementation from
lib/crypto/sha256.c instead.

"diff -u lib/crypto/sha256.c sha256_generic.c" shows that the core
sha256_transform function from both implementations is identical and
the other code is functionally identical too.

Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

01d3aee8 17-Aug-2019 Hans de Goede <hdegoede@redhat.com>

crypto: sha256 - Make lib/crypto/sha256.c suitable for generic use

Before this commit lib/crypto/sha256.c has only been used in the s390 and
x86 purgatory code, make it suitable for generic use:

* Export interesting symbols
* Add -D__DISABLE_EXPORTS to CFLAGS_sha256.o for purgatory builds to
avoid the exports for the purgatory builds
* Add to lib/crypto/Makefile and crypto/Kconfig

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1a01333d 17-Aug-2019 Hans de Goede <hdegoede@redhat.com>

crypto: sha256_generic - Fix some coding style issues

Add a bunch of missing spaces after commas and around operators.

Note the main goal of this is to make sha256_transform and its helpers
identical in formatting to the duplicate implementation in lib/sha256.c,
so that "diff -u" can be used to compare them to prove that no functional
changes are made when further patches in this series consolidate the 2
implementations into 1.

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

18fbe0da 14-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: des - remove now unused __des3_ede_setkey()

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

04007b0e 14-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: des - split off DES library from generic DES cipher driver

Another one for the cipher museum: split off DES core processing into
a separate module so other drivers (mostly for crypto accelerators)
can reuse the code without pulling in the generic DES cipher itself.
This will also permit the cipher interface to be made private to the
crypto API itself once we move the only user in the kernel (CIFS) to
this library interface.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4fd4be05 14-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: 3des - move verification out of exported routine

In preparation of moving the shared key expansion routine into the
DES library, move the verification done by __des3_ede_setkey() into
its callers.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6ee41e54 14-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: des/3des_ede - add new helpers to verify keys

The recently added helper routine to perform key strength validation
of triple DES keys is slightly inadequate, since it comes in two versions,
neither of which are highly useful for anything other than skciphers (and
many drivers still use the older blkcipher interfaces).

So let's add a new helper and, considering that this is a helper function
that is only intended to be used by crypto code itself, put it in a new
des.h header under crypto/internal.

While at it, implement a similar helper for single DES, so that we can
start replacing the pattern of calling des_ekey() into a temp buffer
that occurs in many drivers in drivers/crypto.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
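
A hedged sketch of how a driver setkey can use such a helper (the helper
name follows today's <crypto/internal/des.h>; the exact spelling
introduced here may differ):

#include <crypto/internal/des.h>

/* Illustrative driver setkey: reject weak or degenerate 3DES keys via the
 * shared helper before programming the hardware. */
static int example_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
			       unsigned int keylen)
{
	int err = verify_skcipher_des3_key(tfm, key);

	if (err)
		return err;

	/* ... load the verified key into the accelerator ... */
	return 0;
}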

99d5cadf 19-Aug-2019 Jiri Bohac <jbohac@suse.cz>

kexec_file: split KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE

This is a preparatory patch for kexec_file_load() lockdown. A locked down
kernel needs to prevent unsigned kernel images from being loaded with
kexec_file_load(). Currently, the only way to force the signature
verification is compiling with KEXEC_VERIFY_SIG. This prevents loading
unsigned images even when the kernel is not locked down at runtime.

This patch splits KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE.
Analogous to the MODULE_SIG and MODULE_SIG_FORCE for modules, KEXEC_SIG
turns on the signature verification but allows unsigned images to be
loaded. KEXEC_SIG_FORCE disallows images without a valid signature.

Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Garrett <mjg59@google.com>
cc: kexec@lists.infradead.org
Signed-off-by: James Morris <jmorris@namei.org>

19842963 11-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: arm64/aegis128 - implement plain NEON version

Provide a version of the core AES transform to the aegis128 SIMD
code that does not rely on the special AES instructions, but uses
plain NEON instructions instead. This allows the SIMD version of
the aegis128 driver to be used on arm64 systems that do not
implement those instructions (which are not mandatory in the
architecture), such as the Raspberry Pi 3.

Since GCC makes a mess of this when using the tbl/tbx intrinsics
to perform the sbox substitution, preload the Sbox into v16..v31
in this case and use inline asm to emit the tbl/tbx instructions.
Clang does not support this approach, nor does it require it, since
it does a much better job at code generation, so there we use the
intrinsics as usual.

Cc: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a4397635 11-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - provide a SIMD implementation based on NEON intrinsics

Provide an accelerated implementation of aegis128 by wiring up the
SIMD hooks in the generic driver to an implementation based on NEON
intrinsics, which can be compiled to both ARM and arm64 code.

This results in a performance of 2.2 cycles per byte on Cortex-A53,
which is a performance increase of ~11x compared to the generic
code.

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cf3d41ad 11-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - add support for SIMD acceleration

Add some plumbing to allow the AEGIS128 code to be built with SIMD
routines for acceleration.

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8083b1bf 09-Aug-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: xts - add support for ciphertext stealing

Add support for the missing ciphertext stealing part of the XTS-AES
specification, which permits inputs of any size >= the block size.

Cc: Pascal van Leeuwen <pvanleeuwen@verimatrix.com>
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Tested-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a62084d2 09-Aug-2019 Pascal van Leeuwen <pascalvanl@gmail.com>

crypto: aead - Do not allow authsize=0 if auth. alg has digestsize>0

Return -EINVAL on an attempt to set the authsize to 0 with an auth.
algorithm with a non-zero digestsize (i.e. anything but digest_null)
as authenticating the data and then throwing away the result does not
make any sense at all.

The digestsize zero exception is for use with digest_null for testing
purposes only.

Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

440dc9aa 09-Aug-2019 YueHaibing <yuehaibing@huawei.com>

crypto: streebog - remove two unused variables

crypto/streebog_generic.c:162:17: warning:
Pi defined but not used [-Wunused-const-variable=]
crypto/streebog_generic.c:151:17: warning:
Tau defined but not used [-Wunused-const-variable=]

They are never used, so can be removed.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c2ccfa9e 09-Aug-2019 YueHaibing <yuehaibing@huawei.com>

crypto: aes-generic - remove unused variable 'rco_tab'

crypto/aes_generic.c:64:18: warning:
rco_tab defined but not used [-Wunused-const-variable=]

It is never used, so can be removed.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

43b970fa 08-Aug-2019 Chuhong Yuan <hslester96@gmail.com>

crypto: cryptd - Use refcount_t for refcount

Reference counters are preferred to use refcount_t instead of
atomic_t.
This is because the implementation of refcount_t can prevent
overflows and detect possible use-after-free.
So convert atomic_t ref counters to refcount_t.

Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
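
A hedged sketch of the conversion pattern on a made-up structure (not the
actual cryptd types): refcount_t saturates instead of wrapping and warns
on suspicious transitions such as incrementing from zero, which plain
atomic_t does not.

#include <linux/refcount.h>
#include <linux/slab.h>

struct example_ctx {
	refcount_t refcnt;	/* was atomic_t */
};

static void example_get(struct example_ctx *ctx)
{
	refcount_inc(&ctx->refcnt);			/* was atomic_inc() */
}

static void example_put(struct example_ctx *ctx)
{
	if (refcount_dec_and_test(&ctx->refcnt))	/* was atomic_dec_and_test() */
		kfree(ctx);
}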

74bf81d0 02-Aug-2019 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: gcm - restrict assoclen for rfc4543

Based on seqiv, IPsec ESP and rfc4543/rfc4106 the assoclen can be 16 or
20 bytes.

From esp4/esp6, assoclen is sizeof IP Header. This includes spi, seq_no
and extended seq_no, that is 8 or 12 bytes.
In seqiv, the IV size (8 bytes) is added to the assoclen.
Therefore, the assoclen, for rfc4543, should be restricted to 16 or 20
bytes, as for rfc4106.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d13dfae3 01-Aug-2019 Peter Zijlstra <peterz@infradead.org>

crypto: engine - Reduce default RT priority

The crypto engine initializes its kworker thread to FIFO-99 (when
requesting RT priority), reduce this to FIFO-50.

FIFO-99 is the very highest priority available to SCHED_FIFO and
it is not a suitable default; it would indicate the crypto work is the
most important work on the machine.

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

65526f63 31-Jul-2019 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: gcm - helper functions for assoclen/authsize check

Added inline helper functions to check authsize and assoclen for
gcm, rfc4106 and rfc4543.
These are used in the generic implementation of gcm, rfc4106 and
rfc4543.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
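
A hedged sketch of what such an inline helper looks like, reconstructed
from the description above (the real helpers live in include/crypto/gcm.h
and may differ in name and detail):

#include <linux/errno.h>

/* rfc4106/rfc4543 associated data must be IPsec-style: 16 or 20 bytes. */
static inline int example_ipsec_check_assoclen(unsigned int assoclen)
{
	switch (assoclen) {
	case 16:
	case 20:
		return 0;
	default:
		return -EINVAL;
	}
}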

e201af16 27-Jun-2019 Thiago Jung Bauermann <bauerman@linux.ibm.com>

PKCS#7: Introduce pkcs7_get_digest()

IMA will need to access the digest of the PKCS7 message (as calculated by
the kernel) before the signature is verified, so introduce
pkcs7_get_digest() for that purpose.

Also, modify pkcs7_digest() to detect when the digest was already
calculated so that it doesn't have to do redundant work. Verifying that
sinfo->sig->digest isn't NULL is sufficient because both places which
allocate sinfo->sig (pkcs7_parse_message() and pkcs7_note_signed_info())
use kzalloc() so sig->digest is always initialized to zero.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>

dec0fb39 24-Jul-2019 Arnd Bergmann <arnd@arndb.de>

crypto: jitterentropy - build without sanitizer

Recent clang-9 snapshots double the kernel stack usage when building
this file with -O0 -fsanitize=kernel-hwaddress, compared to clang-8
and older snapshots, this changed between commits svn364966 and
svn366056:

crypto/jitterentropy.c:516:5: error: stack frame size of 2640 bytes in function 'jent_entropy_init' [-Werror,-Wframe-larger-than=]
int jent_entropy_init(void)
^
crypto/jitterentropy.c:185:14: error: stack frame size of 2224 bytes in function 'jent_lfsr_time' [-Werror,-Wframe-larger-than=]
static __u64 jent_lfsr_time(struct rand_data *ec, __u64 time, __u64 loop_cnt)
^

I prepared a reduced test case in case any clang developers want to
take a closer look, but from looking at the earlier output it seems
that even with clang-8, something was very wrong here.

Turn off any KASAN and UBSAN sanitizing for this file, as that likely
clashes with -O0 anyway. Turning off just KASAN avoids the warning
already, but I suspect both of these have undesired side-effects
for jitterentropy.

Link: https://godbolt.org/z/fDcwZ5
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c9f1fd4f 01-Aug-2019 Herbert Xu <herbert@gondor.apana.org.au>

Revert "crypto: aegis128 - add support for SIMD acceleration"

This reverts commit ecc8bc81f2fb3976737ef312f824ba6053aa3590
("crypto: aegis128 - provide a SIMD implementation based on NEON
intrinsics") and commit 7cdc0ddbf74a19cecb2f0e9efa2cae9d3c665189
("crypto: aegis128 - add support for SIMD acceleration").

They cause compile errors on platforms other than ARM because
the mechanism to selectively compile the SIMD code is broken.

Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8dfa20fc 20-Jul-2019 Eric Biggers <ebiggers@google.com>

crypto: ghash - add comment and improve help text

To help avoid confusion, add a comment to ghash-generic.c which explains
the convention that the kernel's implementation of GHASH uses.

Also update the Kconfig help text and module descriptions to call GHASH
a "hash function" rather than a "message digest", since the latter
normally means a real cryptographic hash function, which GHASH is not.

Cc: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

97ac82d9 18-Jul-2019 Arnd Bergmann <arnd@arndb.de>

crypto: aegis - fix badly optimized clang output

Clang sometimes makes very different inlining decisions from gcc.
In case of the aegis crypto algorithms, it decides to turn the innermost
primitives (and, xor, ...) into separate functions but inline most of
the rest.

This results in a huge amount of variables spilled on the stack, leading
to rather slow execution as well as kernel stack usage beyond the 32-bit
warning limit when CONFIG_KASAN is enabled:

crypto/aegis256.c:123:13: warning: stack frame size of 648 bytes in function 'crypto_aegis256_encrypt_chunk' [-Wframe-larger-than=]
crypto/aegis256.c:366:13: warning: stack frame size of 1264 bytes in function 'crypto_aegis256_crypt' [-Wframe-larger-than=]
crypto/aegis256.c:187:13: warning: stack frame size of 656 bytes in function 'crypto_aegis256_decrypt_chunk' [-Wframe-larger-than=]
crypto/aegis128l.c:135:13: warning: stack frame size of 832 bytes in function 'crypto_aegis128l_encrypt_chunk' [-Wframe-larger-than=]
crypto/aegis128l.c:415:13: warning: stack frame size of 1480 bytes in function 'crypto_aegis128l_crypt' [-Wframe-larger-than=]
crypto/aegis128l.c:218:13: warning: stack frame size of 848 bytes in function 'crypto_aegis128l_decrypt_chunk' [-Wframe-larger-than=]
crypto/aegis128.c:116:13: warning: stack frame size of 584 bytes in function 'crypto_aegis128_encrypt_chunk' [-Wframe-larger-than=]
crypto/aegis128.c:351:13: warning: stack frame size of 1064 bytes in function 'crypto_aegis128_crypt' [-Wframe-larger-than=]
crypto/aegis128.c:177:13: warning: stack frame size of 592 bytes in function 'crypto_aegis128_decrypt_chunk' [-Wframe-larger-than=]

Forcing the primitives to all get inlined avoids the issue and the
resulting code is similar to what gcc produces.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
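
A hedged illustration of the fix on a simplified block type (the real
primitives operate on union aegis_block in crypto/aegis.h): forcing the
tiny helpers to always be inlined keeps clang from outlining them and
spilling their arguments onto the callers' stacks.

#include <linux/compiler.h>
#include <linux/types.h>

union example_block {
	u64 words64[2];
	u8 bytes[16];
};

/* was plain "static void"; __always_inline overrides clang's decision */
static __always_inline void example_block_xor(union example_block *dst,
					      const union example_block *src)
{
	dst->words64[0] ^= src->words64[0];
	dst->words64[1] ^= src->words64[1];
}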

91b05a7e 09-Jul-2019 Ondrej Mosnacek <omosnace@redhat.com>

crypto: user - make NETLINK_CRYPTO work inside netns

Currently, NETLINK_CRYPTO works only in the init network namespace. It
doesn't make much sense to cut it out of the other network namespaces,
so do the minor plumbing work necessary to make it work in any network
namespace. Code inspired by net/core/sock_diag.c.

Tested using kcapi-dgst from libkcapi [1]:
Before:
# unshare -n kcapi-dgst -c sha256 </dev/null | wc -c
libkcapi - Error: Netlink error: sendmsg failed
libkcapi - Error: Netlink error: sendmsg failed
libkcapi - Error: NETLINK_CRYPTO: cannot obtain cipher information for hmac(sha512) (is required crypto_user.c patch missing? see documentation)
0

After:
# unshare -n kcapi-dgst -c sha256 </dev/null | wc -c
32

[1] https://github.com/smuellerDD/libkcapi

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

97bcb161 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - add a speed test for AEGIS128

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ecc8bc81 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - provide a SIMD implementation based on NEON intrinsics

Provide an accelerated implementation of aegis128 by wiring up the
SIMD hooks in the generic driver to an implementation based on NEON
intrinsics, which can be compiled to both ARM and arm64 code.

This results in a performance of 2.2 cycles per byte on Cortex-A53,
which is a performance increase of ~11x compared to the generic
code.

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7cdc0ddb 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - add support for SIMD acceleration

Add some plumbing to allow the AEGIS128 code to be built with SIMD
routines for acceleration.

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

521cdde7 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis - avoid prerotated AES tables

The generic AES code provides four sets of lookup tables, where each
set consists of four tables containing the same 32-bit values, but
rotated by 0, 8, 16 and 24 bits, respectively. This makes sense for
CISC architectures such as x86 which support memory operands, but
for other architectures, the rotates are quite cheap, and using all
four tables needlessly thrashes the D-cache, and actually hurts rather
than helps performance.

Since x86 already has its own implementation of AEGIS based on AES-NI
instructions, let's tweak the generic implementation towards other
architectures, and avoid the prerotated tables, and perform the
rotations inline. On ARM Cortex-A53, this results in a ~8% speedup.

Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

368b1bdc 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128 - drop empty TFM init/exit routines

TFM init/exit routines are optional, so no need to provide empty ones.

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

520c1993 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis128l/aegis256 - remove x86 and generic implementations

Three variants of AEGIS were proposed for the CAESAR competition, and
only one was selected for the final portfolio: AEGIS128.

The other variants, AEGIS128L and AEGIS256, are not likely to ever turn
up in networking protocols or other places where interoperability
between Linux and other systems is a concern, nor are they likely to
be subjected to further cryptanalysis. However, uninformed users may
think that AEGIS128L (which is faster) is equally fit for use.

So let's remove them now, before anyone starts using them and we are
forced to support them forever.

Note that there are no known flaws in the algorithms or in any of these
implementations, but they have simply outlived their usefulness.

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5cb97700 03-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: morus - remove generic and x86 implementations

MORUS was not selected as a winner in the CAESAR competition, which
is not surprising since it is considered to be cryptographically
broken [0]. (Note that this is not an implementation defect, but a
flaw in the underlying algorithm). Since it is unlikely to be in use
currently, let's remove it before we're stuck with it.

[0] https://eprint.iacr.org/2019/172.pdf

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f248caf9 02-Jul-2019 Hannah Pan <hannahpan@google.com>

crypto: testmgr - add tests for lzo-rle

Add self-tests for the lzo-rle algorithm.

Signed-off-by: Hannah Pan <hannahpan@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1e25ca02 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aes-generic - unexport last-round AES tables

The versions of the AES lookup tables that are only used during the last
round are never used outside of the driver, so there is no need to
export their symbols.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5bb12d78 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aes-generic - drop key expansion routine in favor of library version

Drop aes-generic's version of crypto_aes_expand_key(), and switch to
the key expansion routine provided by the AES library. AES key expansion
is not performance critical, and it is better to have a single version
shared by all AES implementations.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1d2c3279 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/aes - drop scalar assembler implementations

The AES assembler code for x86 isn't actually faster than code
generated by the compiler from aes_generic.c, and considering
the disproportionate maintenance burden of assembler code on
x86, it is better just to drop it entirely. Modern x86 systems
will use AES-NI anyway, and given that the modules being removed
have a dependency on aes_generic already, we can remove them
without running the risk of regressions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2c53fd11 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/aes-ni - switch to generic for fallback and key routines

The AES-NI code contains fallbacks for invocations that occur from a
context where the SIMD unit is unavailable, which really only occurs
when running in softirq context that was entered from a hard IRQ that
was taken while running kernel code that was already using the FPU.

That means performance is not really a consideration, and we can just
use the new library code for this use case, which has a smaller
footprint and is believed to be time invariant. This will allow us to
drop the non-SIMD asm routines in a subsequent patch.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e59c1c98 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aes - create AES library based on the fixed time AES code

Take the existing small footprint and mostly time invariant C code
and turn it into a AES library that can be used for non-performance
critical, casual use of AES, and as a fallback for, e.g., SIMD code
that needs a secondary path that can be taken in contexts where the
SIMD unit is off limits (e.g., in hard interrupts taken from kernel
context)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
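
A hedged usage sketch of the resulting library interface (function names
as they appear in include/crypto/aes.h today): no crypto_cipher
allocation and no SIMD, so it is usable from awkward contexts such as
hard IRQs.

#include <crypto/aes.h>
#include <linux/string.h>

static int example_aes_encrypt_block(const u8 *key, unsigned int keylen,
				     u8 out[AES_BLOCK_SIZE],
				     const u8 in[AES_BLOCK_SIZE])
{
	struct crypto_aes_ctx ctx;
	int err = aes_expandkey(&ctx, key, keylen);

	if (err)
		return err;
	aes_encrypt(&ctx, out, in);
	memzero_explicit(&ctx, sizeof(ctx));	/* wipe the expanded key */
	return 0;
}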

b158fcbb 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aes/fixed-time - align key schedule with other implementations

The fixed time AES code mangles the key schedule so that xoring the
first round key with values at fixed offsets across the Sbox produces
the correct value. This primes the D-cache with the entire Sbox before
any data dependent lookups are done, making it more difficult to infer
key bits from timing variances when the plaintext is known.

The downside of this approach is that it renders the key schedule
incompatible with other implementations of AES in the kernel, which
makes it cumbersome to use this implementation as a fallback for SIMD
based AES in contexts where this is not allowed.

So let's tweak the fixed Sbox indexes so that they add up to zero under
the xor operation. While at it, increase the granularity to 16 bytes so
we cover the entire Sbox even on systems with 16 byte cachelines.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

724ecd3c 02-Jul-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: aes - rename local routines to prevent future clashes

Rename some local AES encrypt/decrypt routines so they don't clash with
the names we are about to introduce for the routines exposed by the
generic AES library.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9552389c 02-Jul-2019 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: fips - add FIPS test failure notification chain

Crypto test failures in FIPS mode cause an immediate panic, but
on some system the cryptographic boundary extends beyond just
the Linux controlled domain.

Add a simple atomic notification chain to allow interested parties
to register to receive notification prior to us kicking the bucket.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
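
A hedged sketch of how an interested party could hook the chain (the chain
head name is assumed to be the one exported from crypto/fips.c; the
notifier_block API itself is the standard kernel one):

#include <linux/notifier.h>

static int example_fips_failure(struct notifier_block *nb,
				unsigned long event, void *data)
{
	/* quiesce or reset hardware that sits inside the FIPS boundary */
	return NOTIFY_OK;
}

static struct notifier_block example_fips_nb = {
	.notifier_call = example_fips_failure,
};

/* registration, assuming the exported chain head is fips_fail_notif_chain:
 *	atomic_notifier_chain_register(&fips_fail_notif_chain, &example_fips_nb);
 */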

17a20aca 11-Jul-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'usb-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / PHY updates from Greg KH:
"Here is the big USB and PHY driver pull request for 5.3-rc1.

Lots of stuff here, all of which has been in linux-next for a while
with no reported issues. Nothing is earth-shattering, just constant
forward progress for more devices supported and cleanups and small
fixes:

- USB gadget driver updates and fixes

- new USB gadget driver for some hardware, followed by a quick revert
of those patches as they were not ready to be merged...

- PHY driver updates

- Lots of new driver additions and cleanups with a few fixes mixed
in"

* tag 'usb-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (145 commits)
Revert "usb: gadget: storage: Remove warning message"
Revert "dt-bindings: add binding for USBSS-DRD controller."
Revert "usb:gadget Separated decoding functions from dwc3 driver."
Revert "usb:gadget Patch simplify usb_decode_set_clear_feature function."
Revert "usb:gadget Simplify usb_decode_get_set_descriptor function."
Revert "usb:cdns3 Add Cadence USB3 DRD Driver"
Revert "usb:cdns3 Fix for stuck packets in on-chip OUT buffer."
usb :fsl: Change string format for errata property
usb: host: Stops USB controller init if PLL fails to lock
usb: linux/fsl_device: Add platform member has_fsl_erratum_a006918
usb: phy: Workaround for USB erratum-A005728
usb: fsl: Set USB_EN bit to select ULPI phy
usb: Handle USB3 remote wakeup for LPM enabled devices correctly
drivers/usb/typec/tps6598x.c: fix 4CC cmd write
drivers/usb/typec/tps6598x.c: fix portinfo width
usb: storage: scsiglue: Do not skip VPD if try_vpd_pages is set
usb: renesas_usbhs: add a workaround for a race condition of workqueue
usb: gadget: udc: renesas_usb3: remove redundant assignment to ret
usb: dwc2: use a longer AHB idle timeout in dwc2_core_reset()
USB: gadget: function: fix issue Unneeded variable: "value"
...


4d2fa8b4 08-Jul-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"Here is the crypto update for 5.3:

API:
- Test shash interface directly in testmgr
- cra_driver_name is now mandatory

Algorithms:
- Replace arc4 crypto_cipher with library helper
- Implement 5 way interleave for ECB, CBC and CTR on arm64
- Add xxhash
- Add continuous self-test on noise source to drbg
- Update jitter RNG

Drivers:
- Add support for SHA204A random number generator
- Add support for 7211 in iproc-rng200
- Fix fuzz test failures in inside-secure
- Fix fuzz test failures in talitos
- Fix fuzz test failures in qat"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
crypto: stm32/hash - remove interruptible condition for dma
crypto: stm32/hash - Fix hmac issue more than 256 bytes
crypto: stm32/crc32 - rename driver file
crypto: amcc - remove memset after dma_alloc_coherent
crypto: ccp - Switch to SPDX license identifiers
crypto: ccp - Validate the the error value used to index error messages
crypto: doc - Fix formatting of new crypto engine content
crypto: doc - Add parameter documentation
crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
crypto: arm64/aes-ce - add 5 way interleave routines
crypto: talitos - drop icv_ool
crypto: talitos - fix hash on SEC1.
crypto: talitos - move struct talitos_edesc into talitos.h
lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
crypto: asymmetric_keys - select CRYPTO_HASH where needed
crypto: serpent - mark __serpent_setkey_sbox noinline
crypto: testmgr - dynamically allocate crypto_shash
crypto: testmgr - dynamically allocate testvec_config
crypto: talitos - eliminate unneeded 'done' functions at build time
...


c84ca912 08-Jul-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'keys-namespace-20190627' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

Pull keyring namespacing from David Howells:
"These patches help make keys and keyrings more namespace aware.

Firstly some miscellaneous patches to make the process easier:

- Simplify key index_key handling so that the word-sized chunks
assoc_array requires don't have to be shifted about, making it
easier to add more bits into the key.

- Cache the hash value in the key so that we don't have to calculate
on every key we examine during a search (it involves a bunch of
multiplications).

- Allow keying_search() to search non-recursively.

Then the main patches:

- Make it so that keyring names are per-user_namespace from the point
of view of KEYCTL_JOIN_SESSION_KEYRING so that they're not
accessible cross-user_namespace.

keyctl_capabilities() shows KEYCTL_CAPS1_NS_KEYRING_NAME for this.

- Move the user and user-session keyrings to the user_namespace
rather than the user_struct. This prevents them propagating
directly across user_namespaces boundaries (ie. the KEY_SPEC_*
flags will only pick from the current user_namespace).

- Make it possible to include the target namespace in which the key
shall operate in the index_key. This will allow the possibility of
multiple keys with the same description, but different target
domains to be held in the same keyring.

keyctl_capabilities() shows KEYCTL_CAPS1_NS_KEY_TAG for this.

- Make it so that keys are implicitly invalidated by removal of a
domain tag, causing them to be garbage collected.

- Institute a network namespace domain tag that allows keys to be
differentiated by the network namespace in which they operate. New
keys that are of a type marked 'KEY_TYPE_NET_DOMAIN' are assigned
the network domain in force when they are created.

- Make it so that the desired network namespace can be handed down
into the request_key() mechanism. This allows AFS, NFS, etc. to
request keys specific to the network namespace of the superblock.

This also means that the keys in the DNS record cache are
thenceforth namespaced, provided network filesystems pass the
appropriate network namespace down into dns_query().

For DNS, AFS and NFS are good, whilst CIFS and Ceph are not. Other
cache keyrings, such as idmapper keyrings, also need to set the
domain tag - for which they need access to the network namespace of
the superblock"

* tag 'keys-namespace-20190627' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
keys: Pass the network namespace into request_key mechanism
keys: Network namespace domain tag
keys: Garbage collect keys for which the domain has been removed
keys: Include target namespace in match criteria
keys: Move the user and user-session keyrings to the user_namespace
keys: Namespace keyring names
keys: Add a 'recurse' flag for keyring searches
keys: Cache the hash value to avoid lots of recalculation
keys: Simplify key description management


ee39d46d 04-Jul-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:
"This fixes two memory leaks and a list corruption bug"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: user - prevent operating on larval algorithms
crypto: cryptd - Fix skcipher instance memory leak
lib/mpi: Fix karactx leak in mpi_powm


21d4120e 02-Jul-2019 Eric Biggers <ebiggers@google.com>

crypto: user - prevent operating on larval algorithms

Michal Suchanek reported [1] that running the pcrypt_aead01 test from
LTP [2] in a loop and holding Ctrl-C causes a NULL dereference of
alg->cra_users.next in crypto_remove_spawns(), via crypto_del_alg().
The test repeatedly uses CRYPTO_MSG_NEWALG and CRYPTO_MSG_DELALG.

The crash occurs when the instance that CRYPTO_MSG_DELALG is trying to
unregister isn't a real registered algorithm, but rather is a "test
larval", which is a special "algorithm" added to the algorithms list
while the real algorithm is still being tested. Larvals don't have
initialized cra_users, so that causes the crash. Normally pcrypt_aead01
doesn't trigger this because CRYPTO_MSG_NEWALG waits for the algorithm
to be tested; however, CRYPTO_MSG_NEWALG returns early when interrupted.

Everything else in the "crypto user configuration" API has this same bug
too, i.e. it inappropriately allows operating on larval algorithms
(though it doesn't look like the other cases can cause a crash).

Fix this by making crypto_alg_match() exclude larval algorithms.

[1] https://lkml.kernel.org/r/20190625071624.27039-1-msuchanek@suse.de
[2] https://github.com/linux-test-project/ltp/blob/20190517/testcases/kernel/crypto/pcrypt_aead01.c

Reported-by: Michal Suchanek <msuchanek@suse.de>
Fixes: a38f7907b926 ("crypto: Add userspace configuration API")
Cc: <stable@vger.kernel.org> # v3.2+
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
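
A hedged, simplified shape of crypto_alg_match() after the fix (refcount
handling and the exact matching logic are elided; it relies on crypto's
internal.h for crypto_alg_list, crypto_alg_sem and crypto_is_larval()):

#include <linux/list.h>
#include <linux/rwsem.h>
#include <linux/string.h>

static struct crypto_alg *example_alg_match(const char *name)
{
	struct crypto_alg *q, *alg = NULL;

	down_read(&crypto_alg_sem);
	list_for_each_entry(q, &crypto_alg_list, cra_list) {
		/* the fix: never hand back an untested larval placeholder */
		if (crypto_is_larval(q))
			continue;
		if (!strcmp(q->cra_name, name)) {
			alg = q;
			break;
		}
	}
	up_read(&crypto_alg_sem);
	return alg;
}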

1a0fad63 02-Jul-2019 Vincent Whitchurch <vincent.whitchurch@axis.com>

crypto: cryptd - Fix skcipher instance memory leak

cryptd_skcipher_free() fails to free the struct skcipher_instance
allocated in cryptd_create_skcipher(), leading to a memory leak. This
is detected by kmemleak on bootup on ARM64 platforms:

unreferenced object 0xffff80003377b180 (size 1024):
comm "cryptomgr_probe", pid 822, jiffies 4294894830 (age 52.760s)
backtrace:
kmem_cache_alloc_trace+0x270/0x2d0
cryptd_create+0x990/0x124c
cryptomgr_probe+0x5c/0x1e8
kthread+0x258/0x318
ret_from_fork+0x10/0x1c

Fixes: 4e0958d19bd8 ("crypto: cryptd - Add support for skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

90acc065 18-Jun-2019 Arnd Bergmann <arnd@arndb.de>

crypto: asymmetric_keys - select CRYPTO_HASH where needed

Build testing with some core crypto options disabled revealed
a few modules that are missing CRYPTO_HASH:

crypto/asymmetric_keys/x509_public_key.o: In function `x509_get_sig_params':
x509_public_key.c:(.text+0x4c7): undefined reference to `crypto_alloc_shash'
x509_public_key.c:(.text+0x5e5): undefined reference to `crypto_shash_digest'
crypto/asymmetric_keys/pkcs7_verify.o: In function `pkcs7_digest.isra.0':
pkcs7_verify.c:(.text+0xab): undefined reference to `crypto_alloc_shash'
pkcs7_verify.c:(.text+0x1b2): undefined reference to `crypto_shash_digest'
pkcs7_verify.c:(.text+0x3c1): undefined reference to `crypto_shash_update'
pkcs7_verify.c:(.text+0x411): undefined reference to `crypto_shash_finup'

This normally doesn't show up in randconfig tests because there is
a large number of other options that select CRYPTO_HASH.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

47397118 18-Jun-2019 Arnd Bergmann <arnd@arndb.de>

crypto: serpent - mark __serpent_setkey_sbox noinline

The same bug that gcc hit in the past is apparently now showing
up with clang, which decides to inline __serpent_setkey_sbox:

crypto/serpent_generic.c:268:5: error: stack frame size of 2112 bytes in function '__serpent_setkey' [-Werror,-Wframe-larger-than=]

Marking it 'noinline' reduces the stack usage from 2112 bytes to
192 and 96 bytes, respectively, and seems to generate more
useful object code.

Fixes: c871c10e4ea7 ("crypto: serpent - improve __serpent_setkey with UBSAN")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

149c4e6e 18-Jun-2019 Arnd Bergmann <arnd@arndb.de>

crypto: testmgr - dynamically allocate crypto_shash

The largest stack object in this file is now the shash descriptor.
Since there are many other stack variables, this can push it
over the 1024 byte warning limit, in particular with clang and
KASAN:

crypto/testmgr.c:1693:12: error: stack frame size of 1312 bytes in function '__alg_test_hash' [-Werror,-Wframe-larger-than=]

Make test_hash_vs_generic_impl() do the same thing as the
corresponding aead and skcipher functions by allocating the
descriptor dynamically. We can still do better than this,
but it brings us well below the 1024 byte limit.
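A minimal sketch of the pattern (error handling abbreviated): size the
descriptor for the tfm and allocate it on the heap instead of using
SHASH_DESC_ON_STACK():

struct shash_desc *desc;

desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
if (!desc)
        return -ENOMEM;
desc->tfm = tfm;
/* ... run the hash via desc ... */
kfree(desc);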

Suggested-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against their generic implementation")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6b5ca646 18-Jun-2019 Arnd Bergmann <arnd@arndb.de>

crypto: testmgr - dynamically allocate testvec_config

On arm32, we get warnings about high stack usage in some of the functions:

crypto/testmgr.c:2269:12: error: stack frame size of 1032 bytes in function 'alg_test_aead' [-Werror,-Wframe-larger-than=]
static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
^
crypto/testmgr.c:1693:12: error: stack frame size of 1312 bytes in function '__alg_test_hash' [-Werror,-Wframe-larger-than=]
static int __alg_test_hash(const struct hash_testvec *vecs,
^

One of the larger objects on the stack here is struct testvec_config, so
change that to dynamic allocation.

Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation")
Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against their generic implementation")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dcf49dbc 26-Jun-2019 David Howells <dhowells@redhat.com>

keys: Add a 'recurse' flag for keyring searches

Add a 'recurse' flag for keyring searches so that the flag can be omitted
and recursion disabled, thereby allowing just the nominated keyring to be
searched and none of the children.

Signed-off-by: David Howells <dhowells@redhat.com>

58ee0100 23-Jun-2019 Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Merge 5.2-rc6 into usb-next

We need the USB fixes in here too.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


611a23c2 12-Jun-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: arc4 - remove cipher implementation

There are no remaining users of the cipher implementation, and there
are no meaningful ways in which the arc4 cipher can be combined with
templates other than ECB (and the way we do provide that combination
is highly dubious to begin with).

So let's drop the arc4 cipher altogether, and only keep the ecb(arc4)
skcipher, which is used in various places in the kernel.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dc51f257 12-Jun-2019 Ard Biesheuvel <ardb@kernel.org>

crypto: arc4 - refactor arc4 core code into separate library

Refactor the core rc4 handling so we can move most users to a library
interface, permitting us to drop the cipher interface entirely in a
future patch. This is part of an effort to simplify the crypto API
and improve its robustness against incorrect use.
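The resulting library interface is used roughly like this (a usage sketch,
not the full patch):

#include <crypto/arc4.h>

struct arc4_ctx ctx;

arc4_setkey(&ctx, key, keylen);        /* expand the RC4 key schedule */
arc4_crypt(&ctx, dst, src, len);       /* RC4 is symmetric: same call for both directions */
memzero_explicit(&ctx, sizeof(ctx));   /* wipe the key schedule when done */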

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

bdb275bb 20-Jun-2019 Herbert Xu <herbert@gondor.apana.org.au>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Merge crypto tree to pick up vmx changes.


d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

caab277b 02-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not see http www gnu org
licenses

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

ae748b9c 17-Jun-2019 Ard Biesheuvel <ardb@kernel.org>

wusb: switch to cbcmac transform

The wusb code takes a very peculiar approach to implementing CBC-MAC:
it runs plain CBC encryption into a scratch buffer and takes the output
IV as the MAC.

We can clean up this code substantially by switching to the cbcmac
shash, as exposed by the CCM template. To ensure that the module is
loaded on demand, add the cbcmac template name as a module alias.
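In sketch form, the driver then allocates the shash and declares the alias
so the template can be auto-loaded (illustrative; key length and MIC
handling elided):

struct crypto_shash *tfm = crypto_alloc_shash("cbcmac(aes)", 0, 0);
if (IS_ERR(tfm))
        return PTR_ERR(tfm);
/* ... crypto_shash_setkey(tfm, key, 16); then digest the MIC input ... */

MODULE_ALIAS_CRYPTO("cbcmac(aes)");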

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

860ab2e5 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha - constify ctx and iv arguments

Constify the ctx and iv arguments to crypto_chacha_init() and the
various chacha*_stream_xor() functions. This makes it clear that they
are not modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

76cadf22 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha20poly1305 - a few cleanups

- Use sg_init_one() instead of sg_init_table() then sg_set_buf().

- Remove unneeded calls to sg_init_table() prior to scatterwalk_ffwd().

- Simplify initializing the poly tail block.

- Simplify computing padlen.

This doesn't change any actual behavior.
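For instance, the first item replaces a two-call pattern with one helper
(illustrative):

/* before */
sg_init_table(&sg, 1);
sg_set_buf(&sg, buf, len);

/* after */
sg_init_one(&sg, buf, len);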

Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

81bcbb1e 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - un-inline encrypt and decrypt functions

crypto_skcipher_encrypt() and crypto_skcipher_decrypt() have grown to be
more than a single indirect function call. They now also check whether
a key has been set, and with CONFIG_CRYPTO_STATS=y they also update the
crypto statistics. That can add up to a lot of bloat at every call
site. Moreover, these always involve a function call anyway, which
greatly limits the benefits of inlining.

So change them to be non-inline.
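Roughly, the out-of-line version ends up looking like this (a simplified
sketch; the CONFIG_CRYPTO_STATS bookkeeping is elided):

int crypto_skcipher_encrypt(struct skcipher_request *req)
{
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

        if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                return -ENOKEY;         /* reject use before setkey() */

        return tfm->encrypt(req);
}
EXPORT_SYMBOL_GPL(crypto_skcipher_encrypt);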

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f2fe1154 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: aead - un-inline encrypt and decrypt functions

crypto_aead_encrypt() and crypto_aead_decrypt() have grown to be more
than a single indirect function call. They now also check whether a key
has been set, the decryption side checks whether the input is at least
as long as the authentication tag length, and with CONFIG_CRYPTO_STATS=y
they also update the crypto statistics. That can add up to a lot of
bloat at every call site. Moreover, these always involve a function
call anyway, which greatly limits the benefits of inlining.

So change them to be non-inline.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e63e1b0d 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add some more preemption points

Call cond_resched() after each fuzz test iteration. This avoids stall
warnings if fuzz_iterations is set very high for testing purposes.

While we're at it, also call cond_resched() after finishing testing each
test vector.
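In sketch form (loop structure simplified):

for (i = 0; i < fuzz_iterations; i++) {
        /* ... build one random test vector/config and run it ... */
        cond_resched();         /* yield so long runs don't trigger stall warnings */
}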

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

177f87d0 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: algapi - require cra_name and cra_driver_name

Now that all algorithms explicitly set cra_driver_name, make it required
for algorithm registration and remove the code that generated a default
cra_driver_name.

Also add an explicit check that cra_name is set too, since that's
obviously required too, yet it didn't seem to be checked anywhere.
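A sketch of the added validation (exact placement in crypto_check_alg()
may differ):

static int crypto_check_alg(struct crypto_alg *alg)
{
        if (!alg->cra_name[0] || !alg->cra_driver_name[0])
                return -EINVAL;         /* both names are now mandatory */

        /* ... existing checks ... */
        return 0;
}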

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d6ebf528 02-Jun-2019 Eric Biggers <ebiggers@google.com>

crypto: make all generic algorithms set cra_driver_name

Most generic crypto algorithms declare a driver name ending in
"-generic". The rest don't declare a driver name and instead rely on
the crypto API automagically appending "-generic" upon registration.

Having multiple conventions is unnecessarily confusing and makes it
harder to grep for all generic algorithms in the kernel source tree.
But also, allowing NULL driver names is problematic because sometimes
people fail to set it, e.g. the case fixed by commit 417980364300
("crypto: cavium/zip - fix collision with generic cra_driver_name").

Of course, people can also incorrectly name their drivers "-generic".
But that's much easier to notice / grep for.

Therefore, let's make cra_driver_name mandatory. In preparation for
this, this patch makes all generic algorithms set cra_driver_name.
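For example, a generic implementation now spells out both names
(illustrative fragment, other fields elided):

static struct shash_alg alg = {
        /* ... digestsize, init/update/final ... */
        .base = {
                .cra_name        = "sha256",
                .cra_driver_name = "sha256-generic",
                .cra_blocksize   = SHA256_BLOCK_SIZE,
                .cra_module      = THIS_MODULE,
        },
};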

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9331b674 08-Jun-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull yet more SPDX updates from Greg KH:
"Another round of SPDX header file fixes for 5.2-rc4

These are all more "GPL-2.0-or-later" or "GPL-2.0-only" tags being
added, based on the text in the files. We are slowly chipping away at
the 700+ different ways people tried to write the license text. All of
these were reviewed on the spdx mailing list by a number of different
people.

We now have over 60% of the kernel files covered with SPDX tags:
$ ./scripts/spdxcheck.py -v 2>&1 | grep Files
Files checked: 64533
Files with SPDX: 40392
Files with errors: 0

I think the majority of the "easy" fixups are now done, it's now the
start of the longer-tail of crazy variants to wade through"

* tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (159 commits)
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 450
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 449
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 448
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 446
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 445
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 444
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 443
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 442
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 441
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 440
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 438
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 437
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 436
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 435
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 434
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 433
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 432
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 431
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 430
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 429
...


ae876604 06-Jun-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:
"This fixes a regression that breaks the jitterentropy RNG and a
potential memory leak in hmac"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: hmac - fix memory leak in hmac_init_tfm()
crypto: jitterentropy - change back to module_init()


7545b6c2 31-May-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha20poly1305 - fix atomic sleep when using async algorithm

Clear the CRYPTO_TFM_REQ_MAY_SLEEP flag when the chacha20poly1305
operation is being continued from an async completion callback, since
sleeping may not be allowed in that context.

This is basically the same bug that was recently fixed in the xts and
lrw templates. But, it's always been broken in chacha20poly1305 too.
This was found using syzkaller in combination with the updated crypto
self-tests which actually test the MAY_SLEEP flag now.

Reproducer:

python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(
("aead", "rfc7539(cryptd(chacha20-generic),poly1305-generic)"))'

Kernel output:

BUG: sleeping function called from invalid context at include/crypto/algapi.h:426
in_atomic(): 1, irqs_disabled(): 0, pid: 1001, name: kworker/2:2
[...]
CPU: 2 PID: 1001 Comm: kworker/2:2 Not tainted 5.2.0-rc2 #5
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
Workqueue: crypto cryptd_queue_worker
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x4d/0x6a lib/dump_stack.c:113
___might_sleep kernel/sched/core.c:6138 [inline]
___might_sleep.cold.19+0x8e/0x9f kernel/sched/core.c:6095
crypto_yield include/crypto/algapi.h:426 [inline]
crypto_hash_walk_done+0xd6/0x100 crypto/ahash.c:113
shash_ahash_update+0x41/0x60 crypto/shash.c:251
shash_async_update+0xd/0x10 crypto/shash.c:260
crypto_ahash_update include/crypto/hash.h:539 [inline]
poly_setkey+0xf6/0x130 crypto/chacha20poly1305.c:337
poly_init+0x51/0x60 crypto/chacha20poly1305.c:364
async_done_continue crypto/chacha20poly1305.c:78 [inline]
poly_genkey_done+0x15/0x30 crypto/chacha20poly1305.c:369
cryptd_skcipher_complete+0x29/0x70 crypto/cryptd.c:279
cryptd_skcipher_decrypt+0xcd/0x110 crypto/cryptd.c:339
cryptd_queue_worker+0x70/0xa0 crypto/cryptd.c:184
process_one_work+0x1ed/0x420 kernel/workqueue.c:2269
worker_thread+0x3e/0x3a0 kernel/workqueue.c:2415
kthread+0x11f/0x140 kernel/kthread.c:255
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352
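In sketch form, the fix clears the flag whenever the operation is resumed
from an async completion (a simplified sketch of the idea; details may
differ from the actual patch):

static inline int async_done_continue(struct aead_request *req, int err,
                                      int (*cont)(struct aead_request *))
{
        if (!err) {
                struct chachapoly_req_ctx *rctx = aead_request_ctx(req);

                /* we may be running from an atomic completion callback */
                rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
                err = cont(req);
        }

        if (err != -EINPROGRESS && err != -EBUSY)
                aead_request_complete(req, err);
        return err;
}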

Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

20a0f976 30-May-2019 Eric Biggers <ebiggers@google.com>

crypto: lrw - use correct alignmask

Commit c778f96bf347 ("crypto: lrw - Optimize tweak computation")
incorrectly reduced the alignmask of LRW instances from
'__alignof__(u64) - 1' to '__alignof__(__be32) - 1'.

However, xor_tweak() and setkey() assume that the data and key,
respectively, are aligned to 'be128', which has u64 alignment.

Fix the alignmask to be at least '__alignof__(be128) - 1'.
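Sketched, the instance setup then reads (illustrative):

/* in the LRW template's create() path */
inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
                               (__alignof__(be128) - 1);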

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5c6bc4df 30-May-2019 Eric Biggers <ebiggers@google.com>

crypto: ghash - fix unaligned memory access in ghash_setkey()

Changing ghash_mod_init() to be subsys_initcall made it start running
before the alignment fault handler has been installed on ARM. In kernel
builds where the keys in the ghash test vectors happened to be
misaligned in the kernel image, this exposed the longstanding bug that
ghash_setkey() is incorrectly casting the key buffer (which can have any
alignment) to be128 for passing to gf128mul_init_4k_lle().

Fix this by memcpy()ing the key to a temporary buffer.

Don't fix it by setting an alignmask on the algorithm instead because
that would unnecessarily force alignment of the data too.
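A sketch of the corrected setkey (error-flag handling trimmed):

static int ghash_setkey(struct crypto_shash *tfm,
                        const u8 *key, unsigned int keylen)
{
        struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
        be128 k;

        if (keylen != GHASH_BLOCK_SIZE)
                return -EINVAL;

        if (ctx->gf128)
                gf128mul_free_4k(ctx->gf128);

        BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
        memcpy(&k, key, GHASH_BLOCK_SIZE);      /* key may be unaligned */
        ctx->gf128 = gf128mul_init_4k_lle(&k);
        memzero_explicit(&k, GHASH_BLOCK_SIZE);

        return ctx->gf128 ? 0 : -ENOMEM;
}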

Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM")
Reported-by: Peter Robinson <pbrobinson@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Peter Robinson <pbrobinson@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

67882e76 30-May-2019 Nikolay Borisov <nborisov@suse.com>

crypto: xxhash - Implement xxhash support

xxhash is currently implemented as a self-contained module in /lib.
This patch enables that module to be used as part of the generic kernel
crypto framework. It adds a simple wrapper to the 64bit version.

I've also added test vectors (with help from Nick Terrell). The upstream
xxhash code is tested by running the hashing operation on random 222-byte
data with seed values of 0 and a prime number. The upstream test
suite can be found at https://github.com/Cyan4973/xxHash/blob/cf46e0c/xxhsum.c#L664

Essentially hashing is run on data of length 0,1,14,222 with the
aforementioned seed values 0 and prime 2654435761. The particular random
222 byte string was provided to me by Nick Terrell by reading
/dev/random and the checksums were calculated by the upstream xxhsum
utility with the following bash script:

dd if=/dev/random of=TEST_VECTOR bs=1 count=222

for a in 0 1; do
for l in 0 1 14 222; do
for s in 0 2654435761; do
echo algo $a length $l seed $s;
head -c $l TEST_VECTOR | ~/projects/kernel/xxHash/xxhsum -H$a -s$s
done
done
done

This produces output as follows:

algo 0 length 0 seed 0
02cc5d05 stdin
algo 0 length 0 seed 2654435761
02cc5d05 stdin
algo 0 length 1 seed 0
25201171 stdin
algo 0 length 1 seed 2654435761
25201171 stdin
algo 0 length 14 seed 0
c1d95975 stdin
algo 0 length 14 seed 2654435761
c1d95975 stdin
algo 0 length 222 seed 0
b38662a6 stdin
algo 0 length 222 seed 2654435761
b38662a6 stdin
algo 1 length 0 seed 0
ef46db3751d8e999 stdin
algo 1 length 0 seed 2654435761
ac75fda2929b17ef stdin
algo 1 length 1 seed 0
27c3f04c2881203a stdin
algo 1 length 1 seed 2654435761
4a15ed26415dfe4d stdin
algo 1 length 14 seed 0
3d33dc700231dfad stdin
algo 1 length 14 seed 2654435761
ea5f7ddef9a64f80 stdin
algo 1 length 222 seed 0
5f3d3c08ec2bef34 stdin
algo 1 length 222 seed 2654435761
6a9df59664c7ed62 stdin

algo 1 is the xxh64 variant; algo 0 is the 32-bit variant, which is
currently not hooked up.
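A sketch of the wrapper idea: the shash descriptor context simply holds an
xxh64 state and forwards to the /lib implementation (names approximate):

#include <linux/xxhash.h>
#include <crypto/internal/hash.h>

static int xxhash64_update(struct shash_desc *desc, const u8 *data,
                           unsigned int length)
{
        struct xxh64_state *state = shash_desc_ctx(desc);

        return xxh64_update(state, data, length);
}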

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d9d67c87 29-May-2019 Stephan Müller <smueller@chronox.de>

crypto: jitter - update implementation to 2.1.2

The Jitter RNG implementation is updated to comply with upstream version
2.1.2. The change covers the following aspects:

* Time variation measurement is conducted over the LFSR operation
instead of the XOR folding

* Invocation of the stuck test during initialization

* Removal of the stirring functionality and the Von-Neumann
unbiaser as the LFSR using a primitive and irreducible polynomial
generates an identical distribution of random bits

This implementation was successfully used in FIPS 140-2 validations
as well as in German BSI evaluations.

This kernel implementation was tested as follows:

* The unchanged kernel code file jitterentropy.c is compiled as part
of a user space application to generate raw unconditioned noise
data. That data is processed with the NIST SP800-90B non-IID test
tool to verify that the kernel code exhibits the same amount of noise
as the upstream Jitter RNG version 2.1.2.

* Using AF_ALG with the libkcapi tool of kcapi-rng the Jitter RNG was
output tested with dieharder to verify that the output does not
exhibit statistical weaknesses. The following command was used:
kcapi-rng -n "jitterentropy_rng" -b 100000000000 | dieharder -a -g 200

* The unchanged kernel code file jitterentropy.c is compiled as part
of a user space application to test the LFSR implementation. A
monotonically increasing counter is injected into the LFSR as input and
the output is fed into dieharder to verify that the LFSR operation
does not exhibit statistical weaknesses.

* The patch was tested on the Muen separation kernel which returns
a more coarse time stamp to verify that the Jitter RNG does not cause
regressions with its initialization test considering that the Jitter
RNG depends on a high-resolution timer.

Tested-by: Reto Buerki <reet@codelabs.ch>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d8ea98aa 28-May-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - test the shash API

For hash algorithms implemented using the "shash" algorithm type, test
both the ahash and shash APIs, not just the ahash API.

Testing the ahash API already tests the shash API indirectly, which is
normally good enough. However, there have been corner cases where there
have been shash bugs that don't get exposed through the ahash API. So,
update testmgr to test the shash API too.

This would have detected the arm64 SHA-1 and SHA-2 bugs for which fixes
were just sent out (https://patchwork.kernel.org/patch/10964843/ and
https://patchwork.kernel.org/patch/10965089/):

alg: shash: sha1-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"
alg: shash: sha224-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"
alg: shash: sha256-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"

This also would have detected the bugs fixed by commit 307508d10729
("crypto: crct10dif-generic - fix use via crypto_shash_digest()") and
commit dec3d0b1071a
("crypto: x86/crct10dif-pcl - fix use via crypto_shash_digest()").

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2b27bdcc 29-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 336

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not write to the free
software foundation inc 51 franklin st fifth floor boston ma 02110
1301 usa

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 246 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190530000436.674189849@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

a61127c2 29-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 335

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not write to the free
software foundation inc 51 franklin st fifth floor boston ma 02110
1301 usa

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 111 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190530000436.567572064@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

1802d0be 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 174

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 655 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070034.575739538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

c942fddf 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157

Based on 3 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version [author] [kishon] [vijay] [abraham]
[i] [kishon]@[ti] [com] this program is distributed in the hope that
it will be useful but without any warranty without even the implied
warranty of merchantability or fitness for a particular purpose see
the gnu general public license for more details

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version [author] [graeme] [gregory]
[gg]@[slimlogic] [co] [uk] [author] [kishon] [vijay] [abraham] [i]
[kishon]@[ti] [com] [based] [on] [twl6030]_[usb] [c] [author] [hema]
[hk] [hemahk]@[ti] [com] this program is distributed in the hope
that it will be useful but without any warranty without even the
implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 1105 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070033.202006027@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2874c5fd 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

1cc6582e 23-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 140

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of gnu general public license as published by the
free software foundation either version 2 of the license or at your
option any later version you should have received a copy of the gnu
general public license along with this program if not see http www
gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 2 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190524100844.276644418@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

5e99a0a7 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: algapi - remove crypto_tfm_in_queue()

Remove the crypto_tfm_in_queue() function, which is unused.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

84ede58d 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: hash - remove CRYPTO_ALG_TYPE_DIGEST

Remove the unnecessary constant CRYPTO_ALG_TYPE_DIGEST, which has the
same value as CRYPTO_ALG_TYPE_HASH.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3e56e168 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: cryptd - move kcrypto_wq into cryptd

kcrypto_wq is only used by cryptd, so move it into cryptd.c and change
the workqueue name from "crypto" to "cryptd".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e590e132 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: gf128mul - make unselectable by user

There's no reason for users to select CONFIG_CRYPTO_GF128MUL, since it's
just some helper functions, and algorithms that need it select it.

Remove the prompt string so that it's not shown to users.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

87804144 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: echainiv - change to 'default n'

echainiv is the only algorithm or template in the crypto API that is
enabled by default. But there doesn't seem to be a good reason for it.
And it pulls in a lot of stuff as dependencies, like AEAD support and a
"NIST SP800-90A DRBG" including HMAC-SHA256.

The commit which made it default 'm', commit 3491244c6298 ("crypto:
echainiv - Set Kconfig default to m"), mentioned that it's needed for
IPsec. However, later commit 32b6170ca59c ("ipv4+ipv6: Make INET*_ESP
select CRYPTO_ECHAINIV") made the IPsec kconfig options select it.

So, remove the 'default m'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c8a3315a 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: make all templates select CRYPTO_MANAGER

The "cryptomgr" module is required for templates to be used. Many
templates select it, but others don't. Make all templates select it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

929d34ca 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - make extra tests depend on cryptomgr

The crypto self-tests are part of the "cryptomgr" module, which can
technically be disabled (though it rarely is). If you do so, currently
you can still enable CRYPTO_MANAGER_EXTRA_TESTS, which doesn't make
sense since in that case testmgr.c isn't compiled at all. Fix it by
making CRYPTO_MANAGER_EXTRA_TESTS depend on CRYPTO_MANAGER2, like
CRYPTO_MANAGER_DISABLE_TESTS already does.

Fixes: 5b2706a4d459 ("crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e944eab3 20-May-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - fix length truncation with large page size

On PowerPC with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, there is sometimes
a crash in generate_random_aead_testvec(). The problem is that the
generated test vectors use data lengths of up to about 2 * PAGE_SIZE,
which is 128 KiB on PowerPC; however, the data length fields in the test
vectors are 'unsigned short', so the lengths get truncated. Fix this by
changing the relevant fields to 'unsigned int'.
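The truncation is easy to demonstrate in isolation (illustrative
arithmetic, assuming a 64 KiB PAGE_SIZE):

unsigned short len16 = 2 * 65536;       /* wraps to 0: 131072 % 65536 */
unsigned int   len32 = 2 * 65536;       /* 131072, as intended */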

Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7829a0c1 22-May-2019 Eric Biggers <ebiggers@google.com>

crypto: hmac - fix memory leak in hmac_init_tfm()

When I added the sanity check of 'descsize', I missed that the child
hash tfm needs to be freed if the sanity check fails. Of course this
should never happen, hence the use of WARN_ON(), but it should be fixed.
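Sketched (surrounding hmac_init_tfm() plumbing elided), the corrected
error path frees the child tfm:

hash = crypto_spawn_shash(spawn);
if (IS_ERR(hash))
        return PTR_ERR(hash);

parent->descsize = sizeof(struct shash_desc) + crypto_shash_descsize(hash);
if (WARN_ON(parent->descsize > HASH_MAX_DESCSIZE)) {
        crypto_free_shash(hash);        /* the previously missing cleanup */
        return -EINVAL;
}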

Fixes: e1354400b25d ("crypto: hash - fix incorrect HASH_MAX_DESCSIZE")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9c5b34c2f 21-May-2019 Eric Biggers <ebiggers@google.com>

crypto: jitterentropy - change back to module_init()

"jitterentropy_rng" doesn't have any other implementations, nor is it
tested by the crypto self-tests. So it was unnecessary to change it to
subsys_initcall. Also it depends on the main clocksource being
initialized, which may happen after subsys_initcall, causing this error:

jitterentropy: Initialization failed with host not compliant with requirements: 2

Change it back to module_init().

Fixes: c4741b230597 ("crypto: run initcalls for generic implementations earlier")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fd534e9b 23-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 102

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not write to the free software foundation inc
51 franklin st fifth floor boston ma 02110 1301 usa

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 50 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091649.499889647@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

af1a8899 20-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 47

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 or at your option any
later version you should have received a copy of the gnu general
public license for example usr src linux copying if not write to the
free software foundation inc 675 mass ave cambridge ma 02139 usa

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 20 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170858.552543146@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

8d7c56d0 20-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 45

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 or at your option any
later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 11 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170858.370933192@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

6979193b 20-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 44

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of gnu general public license as published by the
free software foundation either version 2 of the license or at your
option any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 1 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170858.279640225@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

59e0b61c 20-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 42

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 1 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170858.098509240@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

b4d0d230 20-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 36

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public licence as published by
the free software foundation either version 2 of the licence or at
your option any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 114 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170857.552531963@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

e62d9491 20-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 33

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not write to the free software foundation inc
59 temple place suite 330 boston ma 02111 1307 usa the full gnu
general public license is included in this distribution in the file
called copying

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 7 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170857.277062491@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

db07cd26 08-May-2019 Stephan Mueller <smueller@chronox.de>

crypto: drbg - add FIPS 140-2 CTRNG for noise source

FIPS 140-2 section 4.9.2 requires a continuous self test of the noise
source. Up to kernel 4.8 drivers/char/random.c provided this continuous
self test. Afterwards it was moved to a location that is inconsistent
with the FIPS 140-2 requirements. The relevant patch was
e192be9d9a30555aae2ca1dc3aad37cba484cd4a .

Thus, the FIPS 140-2 CTRNG is added to the DRBG when it obtains the
seed. This patch resurrects the function drbg_fips_continous_test that
existed some time ago and applies it to the noise sources. The patch
that removed the drbg_fips_continous_test was
b3614763059b82c26bdd02ffcb1c016c1132aad0 .

The Jitter RNG implements its own FIPS 140-2 self test and thus does not
need to be subjected to the test in the DRBG.

The patch contains a tiny fix to ensure proper zeroization in case of an
error during the Jitter RNG data gathering.
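Conceptually, the continuous test keeps the previous noise block around
and rejects a repeat before a fresh block may be used. A simplified,
illustrative sketch (not the actual patch; struct and field names are made
up for the example):

struct ctrng_state {
        u8   prev[32];          /* last noise block seen */
        bool primed;            /* false until the first block is stored */
};

/* 0: block may be used; -EAGAIN: discard and fetch fresh noise.
 * Caller guarantees len <= sizeof(s->prev). */
static int ctrng_check(struct ctrng_state *s, const u8 *block, size_t len)
{
        bool repeat = s->primed && !memcmp(s->prev, block, len);

        memcpy(s->prev, block, len);    /* remember for the next comparison */

        if (!s->primed) {               /* first block only primes the test */
                s->primed = true;
                return -EAGAIN;
        }

        return repeat ? -EAGAIN : 0;    /* identical consecutive blocks fail */
}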

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2c1212de 21-May-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'spdx-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull SPDX update from Greg KH:
"Here is a series of patches that add SPDX tags to different kernel
files, based on two different things:

- SPDX entries are added to a bunch of files that we missed a year
ago that do not have any license information at all.

These were either missed because the tool saw the MODULE_LICENSE()
tag, or some EXPORT_SYMBOL tags, and got confused and thought the
file had a real license, or the files have been added since the
last big sweep, or they were Makefile/Kconfig files, which we
didn't touch last time.

- Add GPL-2.0-only or GPL-2.0-or-later tags to files where our scan
tools can determine the license text in the file itself. Where this
happens, the license text is removed, in order to cut down on the
700+ different ways we have in the kernel today, in a quest to get
rid of all of these.

These patches have been out for review on the linux-spdx@vger mailing
list, and while they were created by automatic tools, they were
hand-verified by a bunch of different people, all of whose names are on
the patches as reviewers.

The reason for these "large" patches is if we were to continue to
progress at the current rate of change in the kernel, adding license
tags to individual files in different subsystems, we would be finished
in about 10 years at the earliest.

There will be more series of these types of patches coming over the
next few weeks as the tools and reviewers crunch through the more
"odd" variants of how to say "GPLv2" that developers have come up with
over the years, combined with other fun oddities (GPL + a BSD
disclaimer?) that are being unearthed, with the goal for the whole
kernel to be cleaned up.

These diffstats are not small, 3840 files are touched, over 10k lines
removed in just 24 patches"

* tag 'spdx-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (24 commits)
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 25
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 24
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 23
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 22
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 21
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 20
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 19
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 18
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 17
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 15
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 14
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 13
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 12
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 11
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 10
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 9
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 7
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 5
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 4
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 3
...


d53e860f 21-May-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:

- Two long-standing bugs in the powerpc assembly of vmx

- Stack overrun caused by HASH_MAX_DESCSIZE being too small

- Regression in caam

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: vmx - ghash: do nosimd fallback manually
crypto: vmx - CTR: always increment IV as quadword
crypto: hash - fix incorrect HASH_MAX_DESCSIZE
crypto: caam - fix typo in i.MX6 devices list for errata


1ccea77e 19-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 13

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not see http www gnu org licenses

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details [based]
[from] [clk] [highbank] [c] you should have received a copy of the
gnu general public license along with this program if not see http
www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 355 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154041.837383322@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

e1354400 14-May-2019 Eric Biggers <ebiggers@google.com>

crypto: hash - fix incorrect HASH_MAX_DESCSIZE

The "hmac(sha3-224-generic)" algorithm has a descsize of 368 bytes,
which is greater than HASH_MAX_DESCSIZE (360) which is only enough for
sha3-224-generic. The check in shash_prepare_alg() doesn't catch this
because the HMAC template doesn't set descsize on the algorithms, but
rather sets it on each individual HMAC transform.

This causes a stack buffer overflow when SHASH_DESC_ON_STACK() is used
with hmac(sha3-224-generic).

Fix it by increasing HASH_MAX_DESCSIZE to the real maximum. Also add a
sanity check to hmac_init().

This was detected by the improved crypto self-tests in v5.2, by loading
the tcrypt module with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y enabled. I
didn't notice this bug when I ran the self-tests by requesting the
algorithms via AF_ALG (i.e., not using tcrypt), probably because the
stack layout differs in the two cases and that made a difference here.

KASAN report:

BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:359 [inline]
BUG: KASAN: stack-out-of-bounds in shash_default_import+0x52/0x80 crypto/shash.c:223
Write of size 360 at addr ffff8880651defc8 by task insmod/3689

CPU: 2 PID: 3689 Comm: insmod Tainted: G E 5.1.0-10741-g35c99ffa20edd #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x86/0xc5 lib/dump_stack.c:113
print_address_description+0x7f/0x260 mm/kasan/report.c:188
__kasan_report+0x144/0x187 mm/kasan/report.c:317
kasan_report+0x12/0x20 mm/kasan/common.c:614
check_memory_region_inline mm/kasan/generic.c:185 [inline]
check_memory_region+0x137/0x190 mm/kasan/generic.c:191
memcpy+0x37/0x50 mm/kasan/common.c:125
memcpy include/linux/string.h:359 [inline]
shash_default_import+0x52/0x80 crypto/shash.c:223
crypto_shash_import include/crypto/hash.h:880 [inline]
hmac_import+0x184/0x240 crypto/hmac.c:102
hmac_init+0x96/0xc0 crypto/hmac.c:107
crypto_shash_init include/crypto/hash.h:902 [inline]
shash_digest_unaligned+0x9f/0xf0 crypto/shash.c:194
crypto_shash_digest+0xe9/0x1b0 crypto/shash.c:211
generate_random_hash_testvec.constprop.11+0x1ec/0x5b0 crypto/testmgr.c:1331
test_hash_vs_generic_impl+0x3f7/0x5c0 crypto/testmgr.c:1420
__alg_test_hash+0x26d/0x340 crypto/testmgr.c:1502
alg_test_hash+0x22e/0x330 crypto/testmgr.c:1552
alg_test.part.7+0x132/0x610 crypto/testmgr.c:4931
alg_test+0x1f/0x40 crypto/testmgr.c:4952

Fixes: b68a7ec1e9a3 ("crypto: hash - Remove VLA usage")
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

80f23212 07-May-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next

Pull networking updates from David Miller:
"Highlights:

1) Support AES128-CCM ciphers in kTLS, from Vakul Garg.

2) Add fib_sync_mem to control the amount of dirty memory we allow to
queue up between synchronize RCU calls, from David Ahern.

3) Make flow classifier more lockless, from Vlad Buslov.

4) Add PHY downshift support to aquantia driver, from Heiner
Kallweit.

5) Add SKB cache for TCP rx and tx, from Eric Dumazet. This reduces
contention on SLAB spinlocks in heavy RPC workloads.

6) Partial GSO offload support in XFRM, from Boris Pismenny.

7) Add fast link down support to ethtool, from Heiner Kallweit.

8) Use siphash for IP ID generator, from Eric Dumazet.

9) Pull nexthops even further out from ipv4/ipv6 routes and FIB
entries, from David Ahern.

10) Move skb->xmit_more into a per-cpu variable, from Florian
Westphal.

11) Improve eBPF verifier speed and increase maximum program size,
from Alexei Starovoitov.

12) Eliminate per-bucket spinlocks in rhashtable, and instead use bit
spinlocks. From Neil Brown.

13) Allow tunneling with GUE encap in ipvs, from Jacky Hu.

14) Improve link partner cap detection in generic PHY code, from
Heiner Kallweit.

15) Add layer 2 encap support to bpf_skb_adjust_room(), from Alan
Maguire.

16) Remove SKB list implementation assumptions in SCTP, yours truly.

17) Various cleanups, optimizations, and simplifications in r8169
driver. From Heiner Kallweit.

18) Add memory accounting on TX and RX path of SCTP, from Xin Long.

19) Switch PHY drivers over to use dynamic feature detection, from
Heiner Kallweit.

20) Support flow steering without masking in dpaa2-eth, from Ioana
Ciocoi.

21) Implement ndo_get_devlink_port in netdevsim driver, from Jiri
Pirko.

22) Increase the strict parsing of current and future netlink
attributes, also export such policies to userspace. From Johannes
Berg.

23) Allow DSA tag drivers to be modular, from Andrew Lunn.

24) Remove legacy DSA probing support, also from Andrew Lunn.

25) Allow ll_temac driver to be used on non-x86 platforms, from Esben
Haabendal.

26) Add a generic tracepoint for TX queue timeouts to ease debugging,
from Cong Wang.

27) More indirect call optimizations, from Paolo Abeni"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1763 commits)
cxgb4: Fix error path in cxgb4_init_module
net: phy: improve pause mode reporting in phy_print_status
dt-bindings: net: Fix a typo in the phy-mode list for ethernet bindings
net: macb: Change interrupt and napi enable order in open
net: ll_temac: Improve error message on error IRQ
net/sched: remove block pointer from common offload structure
net: ethernet: support of_get_mac_address new ERR_PTR error
net: usb: smsc: fix warning reported by kbuild test robot
staging: octeon-ethernet: Fix of_get_mac_address ERR_PTR check
net: dsa: support of_get_mac_address new ERR_PTR error
net: dsa: sja1105: Fix status initialization in sja1105_get_ethtool_stats
vrf: sit mtu should not be updated when vrf netdev is the link
net: dsa: Fix error cleanup path in dsa_init_module
l2tp: Fix possible NULL pointer dereference
taprio: add null check on sched_nest to avoid potential null pointer dereference
net: mvpp2: cls: fix less than zero check on a u32 variable
net_sched: sch_fq: handle non connected flows
net_sched: sch_fq: do not assume EDT packets are ordered
net: hns3: use devm_kcalloc when allocating desc_cb
net: hns3: some cleanup for struct hns3_enet_ring
...


81ff5d2c 06-May-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
"API:
- Add support for AEAD in simd
- Add fuzz testing to testmgr
- Add panic_on_fail module parameter to testmgr
- Use per-CPU struct instead of multiple variables in scompress
- Change verify API for akcipher

Algorithms:
- Convert x86 AEAD algorithms over to simd
- Forbid 2-key 3DES in FIPS mode
- Add EC-RDSA (GOST 34.10) algorithm

Drivers:
- Set output IV with ctr-aes in crypto4xx
- Set output IV in rockchip
- Fix potential length overflow with hashing in sun4i-ss
- Fix computation error with ctr in vmx
- Add SM4 protected keys support in ccree
- Remove long-broken mxc-scc driver
- Add rfc4106(gcm(aes)) cipher support in cavium/nitrox"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (179 commits)
crypto: ccree - use a proper le32 type for le32 val
crypto: ccree - remove set but not used variable 'du_size'
crypto: ccree - Make cc_sec_disable static
crypto: ccree - fix spelling mistake "protedcted" -> "protected"
crypto: caam/qi2 - generate hash keys in-place
crypto: caam/qi2 - fix DMA mapping of stack memory
crypto: caam/qi2 - fix zero-length buffer DMA mapping
crypto: stm32/cryp - update to return iv_out
crypto: stm32/cryp - remove request mutex protection
crypto: stm32/cryp - add weak key check for DES
crypto: atmel - remove set but not used variable 'alg_name'
crypto: picoxcell - Use dev_get_drvdata()
crypto: crypto4xx - get rid of redundant using_sd variable
crypto: crypto4xx - use sync skcipher for fallback
crypto: crypto4xx - fix cfb and ofb "overran dst buffer" issues
crypto: crypto4xx - fix ctr-aes missing output IV
crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA
crypto: ux500 - use ccflags-y instead of CFLAGS_<basename>.o
crypto: ccree - handle tee fips error during power management resume
crypto: ccree - add function to handle cryptocell tee fips error
...


ff24e498 02-May-2019 David S. Miller <davem@davemloft.net>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Three trivial overlapping conflicts.

Signed-off-by: David S. Miller <davem@davemloft.net>


8cb08174 26-Apr-2019 Johannes Berg <johannes.berg@intel.com>

netlink: make validation more configurable for future strictness

We currently have two levels of strict validation:

1) liberal (default)
- undefined (type >= max) & NLA_UNSPEC attributes accepted
- attribute length >= expected accepted
- garbage at end of message accepted
2) strict (opt-in)
- NLA_UNSPEC attributes accepted
- attribute length >= expected accepted

Split out parsing strictness into four different options:
* TRAILING - check that there's no trailing data after parsing
attributes (in message or nested)
* MAXTYPE - reject attrs > max known type
* UNSPEC - reject attributes with NLA_UNSPEC policy entries
* STRICT_ATTRS - strictly validate attribute size

The default for future things should be *everything*.
The current *_strict() is a combination of TRAILING and MAXTYPE,
and is renamed to _deprecated_strict().
The current regular parsing has none of this, and is renamed to
*_parse_deprecated().

Additionally, it allows us to selectively set one of the new flags
even on old policies. Notably, the UNSPEC flag could be useful in
this case, since it can be arranged (by filling in the policy) to
not be an incompatible userspace ABI change, but it would then,
going forward, prevent forgetting attribute entries. Something
similar can apply to the POLICY flag.

We end up with the following renames:
* nla_parse -> nla_parse_deprecated
* nla_parse_strict -> nla_parse_deprecated_strict
* nlmsg_parse -> nlmsg_parse_deprecated
* nlmsg_parse_strict -> nlmsg_parse_deprecated_strict
* nla_parse_nested -> nla_parse_nested_deprecated
* nla_validate_nested -> nla_validate_nested_deprecated

Using spatch, of course:
@@
expression TB, MAX, HEAD, LEN, POL, EXT;
@@
-nla_parse(TB, MAX, HEAD, LEN, POL, EXT)
+nla_parse_deprecated(TB, MAX, HEAD, LEN, POL, EXT)

@@
expression NLH, HDRLEN, TB, MAX, POL, EXT;
@@
-nlmsg_parse(NLH, HDRLEN, TB, MAX, POL, EXT)
+nlmsg_parse_deprecated(NLH, HDRLEN, TB, MAX, POL, EXT)

@@
expression NLH, HDRLEN, TB, MAX, POL, EXT;
@@
-nlmsg_parse_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
+nlmsg_parse_deprecated_strict(NLH, HDRLEN, TB, MAX, POL, EXT)

@@
expression TB, MAX, NLA, POL, EXT;
@@
-nla_parse_nested(TB, MAX, NLA, POL, EXT)
+nla_parse_nested_deprecated(TB, MAX, NLA, POL, EXT)

@@
expression START, MAX, POL, EXT;
@@
-nla_validate_nested(START, MAX, POL, EXT)
+nla_validate_nested_deprecated(START, MAX, POL, EXT)

@@
expression NLH, HDRLEN, MAX, POL, EXT;
@@
-nlmsg_validate(NLH, HDRLEN, MAX, POL, EXT)
+nlmsg_validate_deprecated(NLH, HDRLEN, MAX, POL, EXT)

For this patch, don't actually add the strict, non-renamed versions
yet so that it breaks compile if I get it wrong.

Also, while at it, make nla_validate and nla_parse go down to a
common __nla_validate_parse() function to avoid code duplication.

Ultimately, this allows us to have very strict validation for every
new caller of nla_parse()/nlmsg_parse() etc as re-introduced in the
next patch, while existing things will continue to work as is.

In effect then, this adds fully strict validation for any new command.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

1036633e 23-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA

Fix undefined symbol issue in ecrdsa_generic module when ASN1
or OID_REGISTRY aren't enabled in the config by selecting these
options for CRYPTO_ECRDSA.

ERROR: "asn1_ber_decoder" [crypto/ecrdsa_generic.ko] undefined!
ERROR: "look_up_OID" [crypto/ecrdsa_generic.ko] undefined!

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f0372c00 18-Apr-2019 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: testmgr - add missing self test entries for protected keys

Mark sm4 and the missing aes algorithms using protected keys, which are
identical to the same algs with no HW-protected keys, as tested.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

877b5691 14-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - remove shash_desc::flags

The flags field in 'struct shash_desc' never actually does anything.
The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
However, no shash algorithm ever sleeps, making this flag a no-op.

With this being the case, inevitably some users who can't sleep wrongly
pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
actually started sleeping. For example, the shash_ahash_*() functions,
which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
from the ahash API to the shash API. However, the shash functions are
called under kmap_atomic(), so actually they're assumed to never sleep.

Even if it turns out that some users do need preemption points while
hashing large buffers, we could easily provide a helper function
crypto_shash_update_large() which divides the data into smaller chunks
and calls crypto_shash_update() and cond_resched() for each chunk. It's
not necessary to have a flag in 'struct shash_desc', nor is it necessary
to make individual shash algorithms aware of this at all.

Therefore, remove shash_desc::flags, and document that the
crypto_shash_*() functions can be called from any context.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
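
For context, a minimal sketch of what such a crypto_shash_update_large()
helper could look like if it were ever needed (the helper name comes from
the text above; the chunk size is an arbitrary assumption):

#include <crypto/hash.h>
#include <linux/kernel.h>
#include <linux/sched.h>

/* Hypothetical helper: hash in bounded chunks and offer a preemption
 * point between chunks, so no per-algorithm MAY_SLEEP handling is
 * needed. */
static int crypto_shash_update_large(struct shash_desc *desc,
                                     const u8 *data, unsigned int len)
{
        unsigned int chunk = 4096;      /* arbitrary chunk size */
        int err;

        while (len) {
                unsigned int n = min(len, chunk);

                err = crypto_shash_update(desc, data, n);
                if (err)
                        return err;

                data += n;
                len -= n;
                cond_resched();
        }
        return 0;
}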

54fe792b 14-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - remove useless crypto_yield() in shash_ahash_digest()

The crypto_yield() in shash_ahash_digest() occurs after the entire
digest operation already happened, so there's no real point. Remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6a1faa4a 18-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: ccm - fix incompatibility between "ccm" and "ccm_base"

CCM instances can be created by either the "ccm" template, which only
allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base",
which allows choosing the ctr and cbcmac implementations, e.g.
"ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

However, a "ccm_base" instance prevents a "ccm" instance from being
registered using the same implementations. Nor will the instance be
found by lookups of "ccm". This can be used as a denial of service.
Moreover, "ccm_base" instances are never tested by the crypto
self-tests, even if there are compatible "ccm" tests.

The root cause of these problems is that instances of the two templates
use different cra_names. Therefore, fix these problems by making
"ccm_base" instances set the same cra_name as "ccm" instances, e.g.
"ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

This requires extracting the block cipher name from the name of the ctr
and cbcmac algorithms. It also requires starting to verify that the
algorithms are really ctr and cbcmac using the same block cipher, not
something else entirely. But it would be bizarre if anyone were
actually using non-ccm-compatible algorithms with ccm_base, so this
shouldn't break anyone in practice.

Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
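
To illustrate only the naming change (this is not the template code, which
also verifies that the cbcmac part uses the same block cipher), a small
standalone sketch of deriving "ccm(aes)" from a ctr name of the form
"ctr(<cipher>)":

#include <stdio.h>
#include <string.h>

/* Illustrative only: derive "ccm(<cipher>)" from "ctr(<cipher>)". */
static int make_ccm_name(char *out, size_t outlen, const char *ctr_name)
{
        size_t len = strlen(ctr_name);

        if (strncmp(ctr_name, "ctr(", 4) != 0 || ctr_name[len - 1] != ')')
                return -1;

        /* copy the inner "<cipher>" and wrap it in "ccm(...)" */
        if (snprintf(out, outlen, "ccm(%.*s)",
                     (int)(len - 5), ctr_name + 4) >= (int)outlen)
                return -1;
        return 0;
}

int main(void)
{
        char name[64];

        if (!make_ccm_name(name, sizeof(name), "ctr(aes)"))
                printf("%s\n", name);   /* prints: ccm(aes) */
        return 0;
}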

f699594d 18-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: gcm - fix incompatibility between "gcm" and "gcm_base"

GCM instances can be created by either the "gcm" template, which only
allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base",
which allows choosing the ctr and ghash implementations, e.g.
"gcm_base(ctr(aes-generic),ghash-generic)".

However, a "gcm_base" instance prevents a "gcm" instance from being
registered using the same implementations. Nor will the instance be
found by lookups of "gcm". This can be used as a denial of service.
Moreover, "gcm_base" instances are never tested by the crypto
self-tests, even if there are compatible "gcm" tests.

The root cause of these problems is that instances of the two templates
use different cra_names. Therefore, fix these problems by making
"gcm_base" instances set the same cra_name as "gcm" instances, e.g.
"gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)".

This requires extracting the block cipher name from the name of the ctr
algorithm. It also requires starting to verify that the algorithms are
really ctr and ghash, not something else entirely. But it would be
bizarre if anyone were actually using non-gcm-compatible algorithms with
gcm_base, so this shouldn't break anyone in practice.

Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

67cb60e4 14-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - fix missed optimization in shash_ahash_digest()

shash_ahash_digest(), which is the ->digest() method for ahash tfms that
use an shash algorithm, has an optimization where crypto_shash_digest()
is called if the data is in a single page. But an off-by-one error
prevented this path from being taken unless the user happened to provide
extra data in the scatterlist. Fix it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
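
A simplified sketch of the fast-path condition in question (not the exact
code; req, the mapping of the page and the fallback are assumed from
context):

struct scatterlist *sg = req->src;
unsigned int offset = sg->offset;
unsigned int nbytes = req->nbytes;

/* The bound must be inclusive: an input that exactly fills the rest of
 * the page is still a single-page input; the off-by-one described above
 * amounts to using '<' where '<=' is needed. */
if (nbytes &&
    nbytes <= min(sg->length, (unsigned int)PAGE_SIZE - offset)) {
        /* hash the data directly from the single mapped page */
} else {
        /* fall back to the generic init/update/final walk */
}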

0a877e35 12-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: cryptd - remove ability to instantiate ablkciphers

Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create
algorithms with the deprecated "ablkcipher" type.

This has been unused since commit 0e145b477dea ("crypto: ablk_helper -
remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8c3fffe3 12-Apr-2019 Sebastian Andrzej Siewior <bigeasy@linutronix.de>

crypto: scompress - initialize per-CPU variables on each CPU

In commit 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead
multiple variables") I accidentally initialized multiple times the memory on a
random CPU. I should have initialize the memory on every CPU like it has
been done earlier. I didn't notice this because the scheduler didn't
move the task to another CPU.
Guenter managed to do that and the code crashed as expected.

Allocate / free per-CPU memory on each CPU.

Fixes: 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
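
A minimal sketch of the allocation pattern described above (struct, field
and size names are illustrative, and unwinding of partial allocations is
omitted):

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>

struct scomp_scratch { spinlock_t lock; void *src; void *dst; };
static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch);
#define SCOMP_SCRATCH_SIZE      131072          /* size is an assumption */

static int crypto_scomp_alloc_scratches(void)
{
        struct scomp_scratch *scratch;
        int i;

        /* allocate scratch buffers for every possible CPU, not only on
         * whichever CPU happens to run this function */
        for_each_possible_cpu(i) {
                scratch = per_cpu_ptr(&scomp_scratch, i);

                scratch->src = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
                scratch->dst = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
                if (!scratch->src || !scratch->dst)
                        return -ENOMEM; /* freeing of partial allocations omitted */
        }
        return 0;
}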

c4741b23 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: run initcalls for generic implementations earlier

Use subsys_initcall for registration of all templates and generic
algorithm implementations, rather than module_init. Then change
cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

This is needed so that when both a generic and optimized implementation
of an algorithm are built into the kernel (not loadable modules), the
generic implementation is registered before the optimized one.
Otherwise, the self-tests for the optimized implementation are unable to
allocate the generic implementation for the new comparison fuzz tests.

Note that on arm, a side effect of this change is that self-tests for
generic implementations may run before the unaligned access handler has
been installed. So, unaligned accesses will crash the kernel. This is
arguably a good thing as it makes it easier to detect that type of bug.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
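
For illustration, the change for a typical generic implementation amounts
to swapping the initcall level (algorithm and function names below are
placeholders):

static int __init xyz_generic_mod_init(void)
{
        return crypto_register_shash(&xyz_alg);
}

static void __exit xyz_generic_mod_exit(void)
{
        crypto_unregister_shash(&xyz_alg);
}

/* was: module_init(xyz_generic_mod_init); */
subsys_initcall(xyz_generic_mod_init);
module_exit(xyz_generic_mod_exit);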

40153b10 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - fuzz AEADs against their generic implementation

When the extra crypto self-tests are enabled, test each AEAD algorithm
against its generic implementation when one is available. This
involves: checking the algorithm properties for consistency, then
randomly generating test vectors using the generic implementation and
running them against the implementation under test. Both good and bad
inputs are tested.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d435e10e 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - fuzz skciphers against their generic implementation

When the extra crypto self-tests are enabled, test each skcipher
algorithm against its generic implementation when one is available.
This involves: checking the algorithm properties for consistency, then
randomly generating test vectors using the generic implementation and
running them against the implementation under test. Both good and bad
inputs are tested.

This has already detected a bug in the skcipher_walk API, a bug in the
LRW template, and an inconsistency in the cts implementations.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9a8a6b3f 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - fuzz hashes against their generic implementation

When the extra crypto self-tests are enabled, test each hash algorithm
against its generic implementation when one is available. This
involves: checking the algorithm properties for consistency, then
randomly generating test vectors using the generic implementation and
running them against the implementation under test. Both good and bad
inputs are tested.

This has already detected a bug in the x86 implementation of poly1305,
bugs in crct10dif, and an inconsistency in cbcmac.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f2bb770a 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add helpers for fuzzing against generic implementation

Add some helper functions in preparation for fuzz testing algorithms
against their generic implementation.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

951d1332 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - identify test vectors by name rather than number

In preparation for fuzz testing algorithms against their generic
implementation, make error messages in testmgr identify test vectors by
name rather than index. Built-in test vectors are simply "named" by
their index in testmgr.h, as before. But (in later patches) generated
test vectors will be given more descriptive names to help developers
debug problems detected with them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5283a8ee 11-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - expand ability to test for errors

Update testmgr to support testing for specific errors from setkey() and
digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and
ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs.
This is useful because algorithms usually restrict the lengths or format
of the message, key, and/or authentication tag in some way. And bad
inputs should be tested too, not just good inputs.

As part of this change, remove the ambiguously-named 'fail' flag and
replace it with 'setkey_error = -EINVAL' for the only test vector that
used it -- the DES weak key test vector. Note that this tightens the
test to require -EINVAL rather than any error code, but AFAICS this
won't cause any test failure.

Other than that, these new fields aren't set on any test vectors yet.
Later patches will do so.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
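
A sketch of how the DES weak-key vector can be expressed after this change
(the setkey_error field follows the commit text; the struct shown here is a
cut-down illustration, not the full testmgr.h definition):

#include <linux/errno.h>

/* cut-down illustration of a cipher test vector with an expected
 * setkey() error, replacing the old boolean 'fail' flag */
struct example_cipher_testvec {
        const char *key;
        unsigned int klen;
        int setkey_error;
};

static const struct example_cipher_testvec des_weak_key_tv = {
        .key    = "\x01\x01\x01\x01\x01\x01\x01\x01",   /* a DES weak key */
        .klen   = 8,
        .setkey_error = -EINVAL,        /* setkey() must fail with exactly -EINVAL */
};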

32fbdbd3 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: ecrdsa - add EC-RDSA test vectors to testmgr

Add testmgr test vectors for the EC-RDSA algorithm for each of the five
supported parameter sets (curves). Because there are no officially published
test vectors for the curves, the vectors are generated by gost-engine.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0d7a7864 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm

Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R
34.10-2012, RFC 7091, ISO/IEC 14888-3), which is one of the Russian (and,
since 2018, CIS) cryptographic standard algorithms (called GOST
algorithms). Only signature verification is supported, with the intent
that it be used in IMA.

Summary of the changes:

* crypto/Kconfig:
- EC-RDSA is added into Public-key cryptography section.

* crypto/Makefile:
- ecrdsa objects are added.

* crypto/asymmetric_keys/x509_cert_parser.c:
- Recognize EC-RDSA and Streebog OIDs.

* include/linux/oid_registry.h:
- EC-RDSA OIDs are added to the enum. Also, two currently
unimplemented curve OIDs are added for possible later extension (so
as not to change the numbering and grouping).

* crypto/ecc.c:
- Kenneth MacKay copyright date is updated to 2014, because
vli_mmod_slow, ecc_point_add, ecc_point_mult_shamir are based on his
code from micro-ecc.
- Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
- New functions:
vli_is_negative - helper to determine sign of vli;
vli_from_be64 - unpack big-endian array into vli (used for
a signature);
vli_from_le64 - unpack little-endian array into vli (used for
a public key);
vli_uadd, vli_usub - add/sub u64 value to/from vli (used for
increment/decrement);
mul_64_64 - optimized to use __int128 where appropriate; this speeds
up point multiplication (and, as a consequence, signature
verification) by a factor of 1.5-2 (see the sketch after this list);
vli_umult - multiply vli by a small value (speeds up point
multiplication by another factor of 1.5-2, depending on vli sizes);
vli_mmod_special - modular reduction for some forms of Pseudo-Mersenne
primes (used for the A curves);
vli_mmod_special2 - modular reduction for another form of
Pseudo-Mersenne primes (used for the B curves);
vli_mmod_barrett - modular reduction using a pre-computed value (used
for the C curve);
vli_mmod_slow - more general modular reduction which is much slower
(used when the modulus is the subgroup order);
vli_mod_mult_slow - modular multiplication;
ecc_point_add - add two points;
ecc_point_mult_shamir - add two points multiplied by scalars in one
combined multiplication (this gives a speed-up by another factor of 2
compared to two separate multiplications).
ecc_is_pubkey_valid_partial - an additional sanity check is added.
- Updated vli_mmod_fast with a non-strict heuristic to call the optimal
modular reduction function depending on the prime value;
- All computations for the previously defined (two NIST) curves should
not be affected.

* crypto/ecc.h:
- Newly exported functions are documented.

* crypto/ecrdsa_defs.h
- Five curves are defined.

* crypto/ecrdsa.c:
- Signature verification is implemented.

* crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
- Templates for BER decoder for EC-RDSA parameters and public key.
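
As referenced in the mul_64_64 item above, a rough sketch of the __int128
path (illustrative; the portable fallback using 32x32 partial products is
omitted here):

#include <linux/types.h>

typedef struct {
        u64 m_low;
        u64 m_high;
} uint128_t;

/* full 64x64 -> 128 bit multiply; replacing four 32x32 partial products
 * with this is where the reported 1.5-2x speed-up comes from */
static void mul_64_64(uint128_t *result, u64 left, u64 right)
{
        unsigned __int128 m = (unsigned __int128)left * right;

        result->m_low = (u64)m;
        result->m_high = (u64)(m >> 64);
}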

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4a2289da 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: ecc - make ecc into separate module

ecc.c has algorithms that can be used together by ecdh and ecrdsa.
Make it a separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL and
document what seems appropriate. Move the structs ecc_point and ecc_curve
from ecc_curve_defs.h into ecc.h.

No code changes.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3d6228a5 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: Kconfig - create Public-key cryptography section

Group RSA, DH, and ECDH into Public-key cryptography config section.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f1774cb8 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

X.509: parse public key parameters from x509 for akcipher

Some public key algorithms (like EC-DSA) keep important data, such as
the digest and curve OIDs, in the parameters field (possibly more for
different EC-DSA variants). Thus, just setting a public key (as
for RSA) is not enough.

Append parameters into the key stream for akcipher_set_{pub,priv}_key.
Appended data is: (u32) algo OID, (u32) parameters length, parameters
data.

This does not affect the current akcipher API nor RSA ciphers (they can
ignore it). The idea of appending parameters to the key stream is from
Herbert Xu.

Cc: David Howells <dhowells@redhat.com>
Cc: Denis Kenzior <denkenz@gmail.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

83bc0299 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

KEYS: do not kmemdup digest in {public,tpm}_key_verify_signature

Treat (struct public_key_signature)'s digest the same as its signature (s).
Since the digest should already be in kmalloc'd memory, do not kmemdup the
digest value before calling {public,tpm}_key_verify_signature.

The patch is split from the previous one as suggested by Herbert Xu.

Suggested-by: David Howells <dhowells@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c7381b01 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: akcipher - new verify API for public key algorithms

The previous akcipher .verify() just `decrypts' the signature (using RSA
encryption with the public key) to uncover the message hash, which was
then compared at the upper level in public_key_verify_signature() with
the expected hash value, which itself was never passed into verify().

This approach was incompatible with the EC-DSA family of algorithms,
because, to verify a signature, an EC-DSA algorithm also needs the hash
value as input; it is then used (together with the signature divided
into the halves `r||s') to produce a witness value, which is compared
with `r' to determine whether the signature is correct. Thus, for
EC-DSA, neither the requirements on .verify() itself nor its output
expectations in public_key_verify_signature() were sufficient.

Make an improved .verify() call which takes the hash value as input and
performs the complete signature check, with no output besides the status.

Now for the top level verification only crypto_akcipher_verify() needs
to be called and its return value inspected.

Make sure that `digest' is in kmalloc'd memory (in place of `output`) in
{public,tpm}_key_verify_signature(), as insisted on by Herbert Xu; this
will be changed in the following commit.

Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3ecc9725 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: rsa - unimplement sign/verify for raw RSA backends

In preparation for new akcipher verify call remove sign/verify callbacks
from RSA backends and make PKCS1 driver call encrypt/decrypt instead.

This also complies with the well-known idea that raw RSA should never be
used for sign/verify. It only should be used with proper padding scheme
such as PKCS1 driver provides.

Cc: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Cc: qat-linux@intel.com
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

78a0324f 11-Apr-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: akcipher - default implementations for request callbacks

Because, with the introduction of EC-RDSA and the change in how RSA
handles sign/verify, an akcipher may not have all callbacks defined,
check for the presence of the callbacks in crypto_register_akcipher() and
provide a default implementation when a callback is not implemented.

This is suggested by Herbert Xu instead of checking the presence of the
callback on every request.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d7198ce4 11-Apr-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: des_generic - Forbid 2-key in 3DES and add helpers

This patch adds a requirement to the generic 3DES implementation
such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode.

We will also provide helpers that may be used by drivers that
implement 3DES to make the same check.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
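
A simplified sketch of the 2-key check (the helper name here is
illustrative; the real helpers also reject other degenerate key patterns):

#include <crypto/des.h>
#include <linux/errno.h>
#include <linux/fips.h>
#include <linux/string.h>

/* reject 2-key 3DES (K1 == K3) when running in FIPS mode */
static int des3_check_not_2key(const u8 *key)
{
        if (fips_enabled &&
            !memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
                return -EINVAL;
        return 0;
}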

edaf28e9 10-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: salsa20 - don't access already-freed walk.iv

If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

salsa20-generic doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv. However this is more
subtle than desired, and it was actually broken prior to the alignmask
being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup
and convert to skcipher API").

Since salsa20-generic does not update the IV and does not need any IV
alignment, update it to use req->iv instead of walk.iv.

Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

aec286cd 10-Apr-2019 Eric Biggers <ebiggers@google.com>

crypto: lrw - don't access already-freed walk.iv

If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of
skcipher_walk_virt().

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation. When the extra
self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
splat occurred during lrw(aes) testing.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
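
The shape of the fix is simply to check the walk before touching walk.iv;
a sketch (req, iv and ivsize assumed from context):

struct skcipher_walk w;
int err;

err = skcipher_walk_virt(&w, req, false);
if (err)
        return err;     /* never dereference w.iv after a failed init */

/* only now is it safe to read the (possibly re-aligned) IV */
memcpy(iv, w.iv, ivsize);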

b257b48c 15-Apr-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: lrw - Fix atomic sleep when walking skcipher

When we perform a walk in the completion function, we need to ensure
that it is atomic.

Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

44427c0f 15-Apr-2019 Herbert Xu <herbert@gondor.apana.org.au>

crypto: xts - Fix atomic sleep when walking skcipher

When we perform a walk in the completion function, we need to ensure
that it is atomic.

Reported-by: syzbot+6f72c20560060c98b566@syzkaller.appspotmail.com
Fixes: 78105c7e769b ("crypto: xts - Drop use of auxiliary buffer")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

678cce40 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: x86/poly1305 - fix overflow during partial reduction

The x86_64 implementation of Poly1305 produces the wrong result on some
inputs because poly1305_4block_avx2() incorrectly assumes that when
partially reducing the accumulator, the bits carried from limb 'd4' to
limb 'h0' fit in a 32-bit integer. This is true for poly1305-generic
which processes only one block at a time. However, it's not true for
the AVX2 implementation, which processes 4 blocks at a time and
therefore can produce intermediate limbs about 4x larger.

Fix it by making the relevant calculations use 64-bit arithmetic rather
than 32-bit. Note that most of the carries already used 64-bit
arithmetic, but the d4 -> h0 carry was different for some reason.

To be safe I also made the same change to the corresponding SSE2 code,
though that only operates on 1 or 2 blocks at a time. I don't think
it's really needed for poly1305_block_sse2(), but it doesn't hurt
because it's already x86_64 code. It *might* be needed for
poly1305_2block_sse2(), but overflows aren't easy to reproduce there.

This bug was originally detected by my patches that improve testmgr to
fuzz algorithms against their generic implementation. But also add a
test vector which reproduces it directly (in the AVX2 case).

Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64")
Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64")
Cc: <stable@vger.kernel.org> # v4.3+
Cc: Martin Willi <martin@strongswan.org>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
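
A standalone illustration of the difference between holding that carry in
a 32-bit and in a 64-bit quantity (the value below is made up; real
Poly1305 limb bounds are not reproduced here):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* a carry slightly above 2^32, as can happen when limbs are ~4x
         * larger than in the one-block case */
        uint64_t carry = (1ULL << 32) + 123;    /* does not fit in 32 bits */

        uint32_t as32 = (uint32_t)carry;        /* truncated: 123 */
        uint64_t as64 = carry;                  /* intact */

        printf("32-bit: %u\n", as32);
        printf("64-bit: %llu\n", (unsigned long long)as64);
        return 0;
}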

eda69b0c 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add panic_on_fail module parameter

Add a module parameter cryptomgr.panic_on_fail which causes the kernel
to panic if any crypto self-tests fail.

Use cases:

- More easily detect crypto self-test failures by boot testing,
e.g. on KernelCI.
- Get a bug report if syzkaller manages to use the template system to
instantiate an algorithm that fails its self-tests.

The command-line option "fips=1" already does this, but it also makes
other changes not wanted for general testing, such as disabling
"unapproved" algorithms. panic_on_fail just does what it says.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c31a8719 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: cts - don't support empty messages

My patches to make testmgr fuzz algorithms against their generic
implementation detected that the arm64 implementations of
"cts(cbc(aes))" handle empty messages differently from the cts template.
Namely, the arm64 implementations forbids (with -EINVAL) all messages
shorter than the block size, including the empty message; but the cts
template permits empty messages as a special case.

No user should be CTS-encrypting/decrypting empty messages, but we need
to keep the behavior consistent. Unfortunately, as noted in the source
of OpenSSL's CTS implementation [1], there's no common specification for
CTS. This makes it somewhat debatable what the behavior should be.

However, all CTS specifications seem to agree that messages shorter than
the block size are not allowed, and OpenSSL follows this in both CTS
conventions it implements. It would also simplify the user-visible
semantics to have empty messages no longer be a special case.

Therefore, make the cts template return -EINVAL on *all* messages
shorter than the block size, including the empty message.

[1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
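
A sketch of the resulting check (not the exact template code; tfm and req
assumed from context):

unsigned int bsize = crypto_skcipher_blocksize(tfm);

/* CTS needs at least one full block; this now includes rejecting the
 * previously special-cased empty message */
if (req->cryptlen < bsize)
        return -EINVAL;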

c5c46887 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: streebog - fix unaligned memory accesses

Don't cast the data buffer directly to streebog_uint512, as this
violates alignment rules.

Fixes: fe18957e8e87 ("crypto: streebog - add Streebog hash function")
Cc: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5e27f38f 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha20poly1305 - set cra_name correctly

If the rfc7539 template is instantiated with specific implementations,
e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than
"rfc7539(chacha20,poly1305)", then the implementation names end up
included in the instance's cra_name. This is incorrect because it then
prevents all users from allocating "rfc7539(chacha20,poly1305)", if the
highest priority implementations of chacha20 and poly1305 were selected.
Also, the self-tests aren't run on an instance allocated in this way.

Fix it by setting the instance's cra_name from the underlying
algorithms' actual cra_names, rather than from the requested names.
This matches what other templates do.

Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dcaca01a 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - don't WARN on unprocessed data after slow walk step

skcipher_walk_done() assumes it's a bug if, after the "slow" path is
executed where the next chunk of data is processed via a bounce buffer,
the algorithm says it didn't process all bytes. Thus it WARNs on this.

However, this can happen legitimately when the message needs to be
evenly divisible into "blocks" but isn't, and the algorithm has a
'walksize' greater than the block size. For example, ecb-aes-neonbs
sets 'walksize' to 128 bytes and only supports messages evenly divisible
into 16-byte blocks. If, say, 17 message bytes remain but they straddle
scatterlist elements, the skcipher_walk code will take the "slow" path
and pass the algorithm all 17 bytes in the bounce buffer. But the
algorithm will only be able to process 16 bytes, triggering the WARN.

Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code
already does, is the right behavior.

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.

Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

307508d1 31-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: crct10dif-generic - fix use via crypto_shash_digest()

The ->digest() method of crct10dif-generic reads the current CRC value
from the shash_desc context. But this value is uninitialized, causing
crypto_shash_digest() to compute the wrong result. Fix it.

Probably this wasn't noticed before because lib/crc-t10dif.c only uses
crypto_shash_update(), not crypto_shash_digest(). Likewise,
crypto_shash_digest() is not yet tested by the crypto self-tests because
those only test the ahash API which only uses shash init/update/final.

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.

Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

61abc356 29-Mar-2019 Andi Kleen <ak@linux.intel.com>

crypto: aes - Use ____cacheline_aligned for aes data

cacheline_aligned is a special section. It cannot be const at the same
time because it's not read-only. It doesn't give any MMU protection.

Mark it ____cacheline_aligned to not place it in a special section,
but just align it in .rodata

Cc: herbert@gondor.apana.org.au
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

71052dcf 29-Mar-2019 Sebastian Andrzej Siewior <bigeasy@linutronix.de>

crypto: scompress - Use per-CPU struct instead multiple variables

Two per-CPU variables are allocated as pointer to per-CPU memory which
then are used as scratch buffers.
We could be smart about this and use instead a per-CPU struct which
contains the pointers already and then we need to allocate just the
scratch buffers.
Add a lock to the struct. By doing so we can avoid the get_cpu()
statement and gain lockdep coverage (if enabled) to ensure that the lock
is always acquired in the right context. On non-preemptible kernels the
lock vanishes.
It is okay to use raw_cpu_ptr() in order to get a pointer to the struct
since it is protected by the spinlock.

The diffstat of this is negative and according to size scompress.o:
text data bss dec hex filename
1847 160 24 2031 7ef dbg_before.o
1754 232 4 1990 7c6 dbg_after.o
1799 64 24 1887 75f no_dbg-before.o
1703 88 4 1795 703 no_dbg-after.o

The overall size increase difference is also negative. The increase in
the data section is only four bytes without lockdep.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
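
A sketch of the per-CPU layout and locking pattern described above (names
are illustrative):

#include <linux/percpu.h>
#include <linux/spinlock.h>

struct scomp_scratch {
        spinlock_t      lock;
        void            *src;
        void            *dst;
};

static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch);

static int scomp_process_one(void)              /* illustrative caller */
{
        struct scomp_scratch *scratch = raw_cpu_ptr(&scomp_scratch);

        /* raw_cpu_ptr() is fine here: the spinlock, not get_cpu(),
         * serialises access to the scratch buffers */
        spin_lock(&scratch->lock);
        /* ... compress/decompress via scratch->src / scratch->dst ... */
        spin_unlock(&scratch->lock);
        return 0;
}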

6a4d1b18 29-Mar-2019 Sebastian Andrzej Siewior <bigeasy@linutronix.de>

crypto: scompress - return proper error code for allocation failure

If scomp_acomp_comp_decomp() fails to allocate memory for the
destination then we never copy back the data we compressed.
It is probably best to return an error code instead of 0 in case of
failure.
I haven't found any user that is using acomp_request_set_params()
without the `dst' buffer, so there is probably no harm.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d99324c2 20-Mar-2019 Geert Uytterhoeven <geert+renesas@glider.be>

crypto: fips - Grammar s/options/option/, s/to/the/

Fixes: ccb778e1841ce04b ("crypto: api - Add fips_enable flag")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4e5180eb 15-Mar-2019 Ondrej Mosnacek <omosnace@redhat.com>

crypto: Kconfig - fix typos AEGSI -> AEGIS

Spotted while reviewing patches from Eric Biggers.

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f6fff170 14-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: salsa20-generic - use crypto_xor_cpy()

In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

29d97dec 14-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha-generic - use crypto_xor_cpy()

In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
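
Both commits apply the same pattern; as a sketch (dst, src, keystream and
bytes assumed from context):

/* before: copy, then XOR the keystream in place */
memcpy(dst, src, bytes);
crypto_xor(dst, keystream, bytes);

/* after: one pass that XORs src with the keystream into dst */
crypto_xor_cpy(dst, src, keystream, bytes);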

6570737c 12-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - test the !may_use_simd() fallback code

All crypto API algorithms are supposed to support the case where they
are called in a context where SIMD instructions are unusable, e.g. IRQ
context on some architectures. However, this isn't tested for by the
self-tests, causing bugs to go undetected.

Now that all algorithms have been converted to use crypto_simd_usable(),
update the self-tests to test the no-SIMD case. First, a bool
testvec_config::nosimd is added. When set, the crypto operation is
executed with preemption disabled and with crypto_simd_usable() mocked
out to return false on the current CPU.

A bool test_sg_division::nosimd is also added. For hash algorithms it's
honored by the corresponding ->update(). By setting just a subset of
these bools, the case where some ->update()s are done in SIMD context
and some are done in no-SIMD context is also tested.

These bools are then randomly set by generate_random_testvec_config().

For now, all no-SIMD testing is limited to the extra crypto self-tests,
because it might be a bit too invasive for the regular self-tests.
But this could be changed later.

This has already found bugs in the arm64 AES-GCM and ChaCha algorithms.
This would have found some past bugs as well.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8b8d91d4 12-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: simd - convert to use crypto_simd_usable()

Replace all calls to may_use_simd() in the shared SIMD helpers with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b55e1a39 12-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: simd,testmgr - introduce crypto_simd_usable()

So that the no-SIMD fallback code can be tested by the crypto
self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(),
but also returns false if the crypto self-tests have set a per-CPU bool
to disable SIMD in crypto code on the current CPU.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
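
Conceptually the macro behaves like the sketch below (simplified; the
in-tree header only consults the per-CPU override when the extra
self-tests are configured, and otherwise falls back to plain
may_use_simd()):

DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);

#define crypto_simd_usable() \
        (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))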

7aceaaef 12-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: chacha-generic - fix use as arm64 no-NEON fallback

The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested. The problem is as follows:

When !may_use_simd(), the arm64 NEON implementations fall back to the
generic implementation, which uses the skcipher_walk API to iterate
through the src/dst scatterlists. Due to how the skcipher_walk API
works, walk.stride is set from the skcipher_alg actually being used,
which in this case is the arm64 NEON algorithm. Thus walk.stride is
5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.

This unnecessarily large stride shouldn't cause an actual problem.
However, the generic implementation computes round_down(nbytes,
walk.stride). round_down() assumes the round amount is a power of 2,
which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.

This causes the following case in skcipher_walk_done() to be hit,
causing a WARN() and failing the encryption operation:

if (WARN_ON(err)) {
/* unexpected case; didn't process all bytes */
err = -EINVAL;
goto finish;
}

Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.

(Or we could replace round_down() with rounddown(), but that would add a
slow division operation every time, which I think we should avoid.)

Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
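
A standalone demonstration of the round_down() problem for this stride
(CHACHA_BLOCK_SIZE is 64 bytes, so the NEON walksize is 320):

#include <stdio.h>

/* same shape as the kernel's round_down(): only valid when the rounding
 * amount is a power of two */
#define round_down(x, y)        ((x) & ~((y) - 1))

int main(void)
{
        unsigned int nbytes = 400;              /* example remaining byte count */
        unsigned int stride = 5 * 64;           /* 5 * CHACHA_BLOCK_SIZE */

        printf("round_down: %u\n", round_down(nbytes, stride)); /* 128 (wrong) */
        printf("rounddown:  %u\n", nbytes - nbytes % stride);   /* 320 (expected) */
        return 0;
}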

f808aa3f 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - remove workaround for AEADs that modify aead_request

Now that all AEAD algorithms (that I have the hardware to test, at
least) have been fixed to not modify the user-provided aead_request,
remove the workaround from testmgr that reset aead_request::tfm after
each AEAD encryption/decryption.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e151a8d2 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: x86/morus1280 - convert to use AEAD SIMD helpers

Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

47730958 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: x86/morus640 - convert to use AEAD SIMD helpers

Convert the x86 implementation of MORUS-640 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b6708c2d 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: x86/aegis256 - convert to use AEAD SIMD helpers

Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d628132a 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: x86/aegis128l - convert to use AEAD SIMD helpers

Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

de272ca7 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: x86/aegis128 - convert to use AEAD SIMD helpers

Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1661131a 10-Mar-2019 Eric Biggers <ebiggers@google.com>

crypto: simd - support wrapping AEAD algorithms

Update the crypto_simd module to support wrapping AEAD algorithms.
Previously it only supported skciphers. The code for each is similar.

I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS
to use this. Currently they each independently implement the same
functionality. This will not only simplify the code, but it will also
fix the bug detected by the improved self-tests: the user-provided
aead_request is modified. This is because these algorithms currently
reuse the original request, whereas the crypto_simd helpers build a new
request in the original request's context.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

45ec975e 07-Mar-2019 Dave Rodgman <dave.rodgman@arm.com>

lib/lzo: separate lzo-rle from lzo

To prevent any issues with persistent data, separate lzo-rle from lzo so
that it is treated as a separate algorithm, and lzo is still available.

Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

63bdf428 05-Mar-2019 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
"API:
- Add helper for simple skcipher modes.
- Add helper to register multiple templates.
- Set CRYPTO_TFM_NEED_KEY when setkey fails.
- Require neither or both of export/import in shash.
- AEAD decryption test vectors are now generated from encryption
ones.
- New option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that includes random
fuzzing.

Algorithms:
- Conversions to skcipher and helper for many templates.
- Add more test vectors for nhpoly1305 and adiantum.

Drivers:
- Add crypto4xx prng support.
- Add xcbc/cmac/ecb support in caam.
- Add AES support for Exynos5433 in s5p.
- Remove sha384/sha512 from artpec7 as hardware cannot do partial
hash"

[ There is a merge of the Freescale SoC tree in order to pull in changes
required by patches to the caam/qi2 driver. ]

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (174 commits)
crypto: s5p - add AES support for Exynos5433
dt-bindings: crypto: document Exynos5433 SlimSSS
crypto: crypto4xx - add missing of_node_put after of_device_is_available
crypto: cavium/zip - fix collision with generic cra_driver_name
crypto: af_alg - use struct_size() in sock_kfree_s()
crypto: caam - remove redundant likely/unlikely annotation
crypto: s5p - update iv after AES-CBC op end
crypto: x86/poly1305 - Clear key material from stack in SSE2 variant
crypto: caam - generate hash keys in-place
crypto: caam - fix DMA mapping xcbc key twice
crypto: caam - fix hash context DMA unmap size
hwrng: bcm2835 - fix probe as platform device
crypto: s5p-sss - Use AES_BLOCK_SIZE define instead of number
crypto: stm32 - drop pointless static qualifier in stm32_hash_remove()
crypto: chelsio - Fixed Traffic Stall
crypto: marvell - Remove set but not used variable 'ivsize'
crypto: ccp - Update driver messages to remove some confusion
crypto: adiantum - add 1536 and 4096-byte test vectors
crypto: nhpoly1305 - add a test vector with len % 16 != 0
crypto: arm/aes-ce - update IV after partial final CTR block
...


91e14842 20-Feb-2019 Gustavo A. R. Silva <gustavo@embeddedor.com>

crypto: af_alg - use struct_size() in sock_kfree_s()

Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.

So, change the following form:

sizeof(*sgl) + sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1)

to :

struct_size(sgl, sg, MAX_SGL_ENTS + 1)

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

333e6647 14-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: adiantum - add 1536 and 4096-byte test vectors

Add 1536 and 4096-byte Adiantum test vectors so that the case where
there are multiple NH hashes is tested. This is already tested by the
nhpoly1305 test vectors, but it should be tested at the Adiantum level
too. Moreover the 4096-byte case is especially important.

As with the other Adiantum test vectors, these were generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

367ecc07 14-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: nhpoly1305 - add a test vector with len % 16 != 0

This is needed to test that the end of the message is zero-padded when
the length is not a multiple of 16 (NH_MESSAGE_UNIT). It's already
tested indirectly by the 31-byte Adiantum test vector, but it should be
tested directly at the nhpoly1305 level too.

As with the other nhpoly1305 test vectors, this was generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e674dbc0 14-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add iv_out to all CTR test vectors

Test that all CTR implementations update the IV buffer to contain the
next counter block, aka the IV to continue the encryption/decryption of
a larger message. When the length processed is a multiple of the block
size, users may rely on this for chaining.

When the length processed is *not* a multiple of the block size, simple
chaining doesn't work. However, as noted in commit 88a3f582bea9
("crypto: arm64/aes - don't use IV buffer to return final keystream
block"), the generic CCM implementation assumes that the CTR IV is
handled in some sane way, not e.g. overwritten with part of the
keystream. Since this was gotten wrong once already, it's desirable to
test for it. And, the most straightforward way to do this is to enforce
that all CTR implementations have the same behavior as the generic
implementation, which returns the *next* counter following the final
partial block. This behavior also has the advantage that if someone
does misuse this case for chaining, then the keystream won't be
repeated. Thus, this patch makes the tests expect this behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cdc69469 14-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add iv_out to all CBC test vectors

Test that all CBC implementations update the IV buffer to contain the
last ciphertext block, aka the IV to continue the encryption/decryption
of a larger message. Users may rely on this for chaining.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8efd972e 14-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - support checking skcipher output IV

Allow skcipher test vectors to declare the value the IV buffer should be
updated to at the end of the encryption or decryption operation.

(This check actually used to be supported in testmgr, but it was never
used and therefore got removed except for the AES-Keywrap special case.
But it will be used by CBC and CTR now, so re-add it.)
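
For instance, a CTR vector can now spell out the expected output IV next to the
input IV. A rough sketch with elided values (field names follow the
cipher_testvec conventions in testmgr.h and are meant as illustration only):

static const struct cipher_testvec aes_ctr_tv_template[] = {
	{
		.key	= "...",	/* AES-128 key */
		.klen	= 16,
		.iv	= "...",	/* initial counter block */
		.iv_out	= "...",	/* counter following the final (partial) block */
		.ptext	= "...",
		.ctext	= "...",
		.len	= 20,		/* deliberately not a multiple of 16 */
	},
};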

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c9e1d48a 14-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - remove extra bytes from 3DES-CTR IVs

3DES only has an 8-byte block size, but the 3DES-CTR test vectors use
16-byte IVs. Remove the unused 8 bytes from the ends of the IVs.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9060cb71 17-Feb-2019 Mao Wenan <maowenan@huawei.com>

net: crypto set sk to NULL when af_alg_release.

KASAN has found use-after-free in sockfs_setattr.
The existing commit 6d8c50dcb029 ("socket: close race condition between sock_close()
and sockfs_setattr()") fixed a similar issue, but it overlooked that the crypto
module forgets to set sk to NULL after af_alg_release.

KASAN report details as below:
BUG: KASAN: use-after-free in sockfs_setattr+0x120/0x150
Write of size 4 at addr ffff88837b956128 by task syz-executor0/4186

CPU: 2 PID: 4186 Comm: syz-executor0 Not tainted xxx + #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.10.2-1ubuntu1 04/01/2014
Call Trace:
dump_stack+0xca/0x13e
print_address_description+0x79/0x330
? vprintk_func+0x5e/0xf0
kasan_report+0x18a/0x2e0
? sockfs_setattr+0x120/0x150
sockfs_setattr+0x120/0x150
? sock_register+0x2d0/0x2d0
notify_change+0x90c/0xd40
? chown_common+0x2ef/0x510
chown_common+0x2ef/0x510
? chmod_common+0x3b0/0x3b0
? __lock_is_held+0xbc/0x160
? __sb_start_write+0x13d/0x2b0
? __mnt_want_write+0x19a/0x250
do_fchownat+0x15c/0x190
? __ia32_sys_chmod+0x80/0x80
? trace_hardirqs_on_thunk+0x1a/0x1c
__x64_sys_fchownat+0xbf/0x160
? lockdep_hardirqs_on+0x39a/0x5e0
do_syscall_64+0xc8/0x580
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462589
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89
f7 48 89 d6 48 89
ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3
48 c7 c1 bc ff ff
ff f7 d8 64 89 01 48
RSP: 002b:00007fb4b2c83c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000104
RAX: ffffffffffffffda RBX: 000000000072bfa0 RCX: 0000000000462589
RDX: 0000000000000000 RSI: 00000000200000c0 RDI: 0000000000000007
RBP: 0000000000000005 R08: 0000000000001000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb4b2c846bc
R13: 00000000004bc733 R14: 00000000006f5138 R15: 00000000ffffffff

Allocated by task 4185:
kasan_kmalloc+0xa0/0xd0
__kmalloc+0x14a/0x350
sk_prot_alloc+0xf6/0x290
sk_alloc+0x3d/0xc00
af_alg_accept+0x9e/0x670
hash_accept+0x4a3/0x650
__sys_accept4+0x306/0x5c0
__x64_sys_accept4+0x98/0x100
do_syscall_64+0xc8/0x580
entry_SYSCALL_64_after_hwframe+0x49/0xbe

Freed by task 4184:
__kasan_slab_free+0x12e/0x180
kfree+0xeb/0x2f0
__sk_destruct+0x4e6/0x6a0
sk_destruct+0x48/0x70
__sk_free+0xa9/0x270
sk_free+0x2a/0x30
af_alg_release+0x5c/0x70
__sock_release+0xd3/0x280
sock_close+0x1a/0x20
__fput+0x27f/0x7f0
task_work_run+0x136/0x1b0
exit_to_usermode_loop+0x1a7/0x1d0
do_syscall_64+0x461/0x580
entry_SYSCALL_64_after_hwframe+0x49/0xbe

Syzkaller reproducer:
r0 = perf_event_open(&(0x7f0000000000)={0x0, 0x70, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_config_ext}, 0x0, 0x0,
0xffffffffffffffff, 0x0)
r1 = socket$alg(0x26, 0x5, 0x0)
getrusage(0x0, 0x0)
bind(r1, &(0x7f00000001c0)=@alg={0x26, 'hash\x00', 0x0, 0x0,
'sha256-ssse3\x00'}, 0x80)
r2 = accept(r1, 0x0, 0x0)
r3 = accept4$unix(r2, 0x0, 0x0, 0x0)
r4 = dup3(r3, r0, 0x0)
fchownat(r4, &(0x7f00000000c0)='\x00', 0x0, 0x0, 0x1000)

Fixes: 6d8c50dcb029 ("socket: close race condition between sock_close() and sockfs_setattr()")
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

bd30cf53 08-Feb-2019 Iuliana Prodan <iuliana.prodan@nxp.com>

crypto: export arc4 defines

Some arc4 cipher algorithm defines show up in two places:
crypto/arc4.c and drivers/crypto/bcm/cipher.h.
Let's export them in a common header and update their users.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a6e5ef9b 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - check for aead_request corruption

Check that algorithms do not change the aead_request structure, as users
may rely on submitting the request again (e.g. after copying new data
into the same source buffer) without reinitializing everything.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fa353c99 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - check for skcipher_request corruption

Check that algorithms do not change the skcipher_request structure, as
users may rely on submitting the request again (e.g. after copying new
data into the same source buffer) without reinitializing everything.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4cc2dcf9 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - convert hash testing to use testvec_configs

Convert alg_test_hash() to use the new test framework, adding a list of
testvec_configs to test by default. When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.

This improves hash test coverage mainly because now all algorithms have
a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage. The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.

This already found bugs in the hash walk code and in the arm32 and arm64
implementations of crct10dif.

I removed the hash chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ed96804f 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - convert aead testing to use testvec_configs

Convert alg_test_aead() to use the new test framework, using the same
list of testvec_configs that skcipher testing uses.

This significantly improves AEAD test coverage mainly because previously
there was only very limited test coverage of the possible data layouts.
Now the data layouts to test are listed in one place for all algorithms
and optionally are also randomly generated. In fact, only one AEAD
algorithm (AES-GCM) even had a chunked test case before.

This already found bugs in all the AEGIS and MORUS implementations, the
x86 AES-GCM implementation, and the arm64 AES-CCM implementation.

I removed the AEAD chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Note: the rewritten test code allocates an aead_request just once per
algorithm rather than once per encryption/decryption, but some AEAD
algorithms incorrectly change the tfm pointer in the request. It's
nontrivial to fix these, so to move forward I'm temporarily working
around it by resetting the tfm pointer. But they'll need to be fixed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4e7babba 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - convert skcipher testing to use testvec_configs

Convert alg_test_skcipher() to use the new test framework, adding a list
of testvec_configs to test by default. When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.

This improves skcipher test coverage mainly because now all algorithms
have a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage. The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases, different IV alignments, and buffers
that cross pages.

This has already found a bug in the arm64 ctr-aes-neonbs algorithm.
It would have easily found many past bugs.

I removed the skcipher chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

25f9dddb 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - implement random testvec_config generation

Add functions that generate a random testvec_config, in preparation for
using it for randomized fuzz tests.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5b2706a4 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS

To achieve more comprehensive crypto test coverage, I'd like to add fuzz
tests that use random data layouts and request flags.

To be most effective these tests should be part of testmgr, so they
automatically run on every algorithm registered with the crypto API.
However, they will take much longer to run than the current tests and
therefore will only really be intended to be run by developers, whereas
the current tests have a wider audience.

Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
that can be set by developers to enable these extra, expensive tests.

Similar to the regular tests, also add a module parameter
cryptomgr.noextratests to support disabling the tests.

Finally, another module parameter cryptomgr.fuzz_iterations is added to
control how many iterations the fuzz tests do. Note: for now setting
this to 0 will be equivalent to cryptomgr.noextratests=1. But I opted
for separate parameters to provide more flexibility to add other types
of tests under the "extra tests" category in the future.
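
For developers who want to run these, the setup is roughly: enable the option at
build time and tune the module parameters at boot (the values below are only
examples):

CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y

and on the kernel command line, for example:

cryptomgr.fuzz_iterations=100
or
cryptomgr.noextratests=1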

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3f47a03d 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add testvec_config struct and helper functions

Crypto algorithms must produce the same output for the same input
regardless of data layout, i.e. how the src and dst scatterlists are
divided into chunks and how each chunk is aligned. Request flags such
as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either.

However, testing of this currently has many gaps. For example,
individual algorithms are responsible for providing their own chunked
test vectors. But many don't bother to do this or test only one or two
cases, providing poor test coverage. Also, other things such as
misaligned IVs and CRYPTO_TFM_REQ_MAY_SLEEP are never tested at all.

Test code is also duplicated between the chunked and non-chunked cases,
making it difficult to make other improvements.

To improve the situation, this patch series basically moves the chunk
descriptions into the testmgr itself so that they are shared by all
algorithms. However, it's done in an extensible way via a new struct
'testvec_config', which describes not just the scaled chunk lengths but
also all other aspects of the crypto operation besides the data itself
such as the buffer alignments, the request flags, whether the operation
is in-place or not, the IV alignment, and for hash algorithms when to
do each update() and when to use finup() vs. final() vs. digest().

Then, this patch series makes skcipher, aead, and hash algorithms be
tested against a list of default testvec_configs, replacing the current
test code. This improves overall test coverage, without reducing test
performance too much. Note that the test vectors themselves are not
changed, except for removing the chunk lists.

This series also adds randomized fuzz tests, enabled by a new kconfig
option intended for developer use only, where skcipher, aead, and hash
algorithms are tested against many randomly generated testvec_configs.
This provides much more comprehensive test coverage.

These improved tests have already exposed many bugs.

To start it off, this initial patch adds the testvec_config and various
helper functions that will be used by the skcipher, aead, and hash test
code that will be converted to use the new testvec_config framework.
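
As a rough illustration (abridged; the real struct in crypto/testmgr.c has more
members and the exact fields evolved over time), a testvec_config bundles
everything about how the operation is performed, not what data it uses:

struct testvec_config {
	const char *name;	/* e.g. "in-place", "misaligned splits, MAY_SLEEP" */
	bool inplace;		/* operate with dst == src? */
	u32 req_flags;		/* e.g. CRYPTO_TFM_REQ_MAY_SLEEP */
	struct test_sg_division src_divs[8];	/* how to split the source scatterlist */
	struct test_sg_division dst_divs[8];	/* how to split the destination */
	unsigned int iv_offset;	/* IV (mis)alignment */
	/* for hashes: also when to update() and whether to use finup()/final()/digest() */
};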

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

77568e53 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: ahash - fix another early termination in hash walk

Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest. The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early. This happens because
the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
of bytes remaining in the page, then later interpreted as the number of
bytes remaining in the scatterlist element. Fix it.

Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d644f1c8 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: morus - fix handling chunked inputs

The generic MORUS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts. The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data. In fact, this can happen before the end. Fix them.
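
The corrected walk loop only rounds a step down to the walksize when more data
follows, so a short step in the middle is no longer mistaken for the end. A rough
sketch (ops->crypt_chunk() stands in for the per-algorithm chunk handler):

while (walk.nbytes) {
	unsigned int n = walk.nbytes;

	if (n < walk.total)	/* not the last step: keep it walksize-aligned */
		n = round_down(n, walk.stride);

	ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr, n);

	skcipher_walk_done(&walk, walk.nbytes - n);
}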

Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0f533e67 01-Feb-2019 Eric Biggers <ebiggers@google.com>

crypto: aegis - fix handling chunked inputs

The generic AEGIS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts. The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data. In fact, this can happen before the end. Fix them.

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e3d90e52 28-Jan-2019 Christopher Diaz Riveros <chrisadr@gentoo.org>

crypto: testmgr - use kmemdup

Fixes Coccinelle alerts:

/crypto/testmgr.c:2112:13-20: WARNING opportunity for kmemdup
/crypto/testmgr.c:2130:13-20: WARNING opportunity for kmemdup
/crypto/testmgr.c:2152:9-16: WARNING opportunity for kmemdup

Signed-off-by: Christopher Diaz Riveros <chrisadr@gentoo.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a8a34416 25-Jan-2019 Milan Broz <gmazyland@gmail.com>

crypto: testmgr - mark crc32 checksum as FIPS allowed

The CRC32 is not a cryptographic hash algorithm,
so the FIPS restrictions should not apply to it.
(The CRC32C variant is already allowed.)

This CRC32 variant is used in dm-crypt's legacy TrueCrypt IV
implementation (tcw); the problem was detected by a cryptsetup test suite
failure in FIPS mode.

Signed-off-by: Milan Broz <gmazyland@gmail.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

eb5e6730 23-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - skip crc32c context test for ahash algorithms

Instantiating "cryptd(crc32c)" causes a crypto self-test failure because
the crypto_alloc_shash() in alg_test_crc32c() fails. This is because
cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can
only be accessed through the ahash API, unlike shash algorithms which
can be accessed through both the ahash and shash APIs.

As the test is testing the shash descriptor format which is only
applicable to shash algorithms, skip it for ahash algorithms.

(Note that it's still important to fix crypto self-test failures even
for weird algorithm instantiations like cryptd(crc32c) that no one
would really use; in fips_enabled mode unprivileged users can use them
to panic the kernel, and also they prevent treating a crypto self-test
failure as a bug when fuzzing the kernel.)

Fixes: 8e3ee85e68c5 ("crypto: crc32c - Test descriptor context format")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7e33d4d4 21-Jan-2019 YueHaibing <yuehaibing@huawei.com>

crypto: seqiv - Use kmemdup in seqiv_aead_encrypt()

Use kmemdup rather than duplicating its implementation
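
That is, replace the open-coded allocate-and-copy with kmemdup(); a minimal
sketch of the pattern (GFP flags simplified here):

/* before */
info = kmalloc(ivsize, GFP_ATOMIC);
if (!info)
	return -ENOMEM;
memcpy(info, req->iv, ivsize);

/* after */
info = kmemdup(req->iv, ivsize, GFP_ATOMIC);
if (!info)
	return -ENOMEM;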

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

231baecd 18-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: clarify name of WEAK_KEY request flag

CRYPTO_TFM_REQ_WEAK_KEY confuses newcomers to the crypto API because it
sounds like it is requesting a weak key. Actually, it is requesting
that weak keys be forbidden (for algorithms that have the notion of
"weak keys"; currently only DES and XTS do).

Also it is only one letter away from CRYPTO_TFM_RES_WEAK_KEY, with which
it can be easily confused. (This in fact happened in the UX500 driver,
though just in some debugging messages.)

Therefore, make the intent clear by renaming it to
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS.
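
A caller that wants weak keys rejected then sets the flag before setkey(); a
minimal sketch:

crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
err = crypto_skcipher_setkey(tfm, key, keylen);
/* setkey() now fails if, e.g., a DES key is one of the known weak keys */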

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1a5e02b6 17-Jan-2019 Xiongfeng Wang <xiongfeng.wang@linaro.org>

crypto: chacha20poly1305 - use template array registering API to simplify the code

Use crypto template array registering API to simplify the code.

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9f8ef365 17-Jan-2019 Xiongfeng Wang <xiongfeng.wang@linaro.org>

crypto: ctr - use template array registering API to simplify the code

Use crypto template array registering API to simplify the code.

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

56a00d9d 17-Jan-2019 Xiongfeng Wang <xiongfeng.wang@linaro.org>

crypto: gcm - use template array registering API to simplify the code

Use crypto template array registering API to simplify the code.

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0db19035 17-Jan-2019 Xiongfeng Wang <xiongfeng.wang@linaro.org>

crypto: ccm - use template array registering API to simplify the code

Use crypto template array registering API to simplify the code.

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

9572442d 17-Jan-2019 Xiongfeng Wang <xiongfeng.wang@linaro.org>

crypto: api - add a helper to (un)register a array of templates

This patch adds a helper to (un)register an array of templates. The
following patches will use this helper to simplify the code.
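
With the helper, a template module registers everything in one call. An abridged
sketch modeled on the gcm conversion above (the template array is shortened, but
crypto_register_templates()/crypto_unregister_templates() are the helpers this
patch introduces):

static struct crypto_template crypto_gcm_tmpls[] = {
	{
		.name = "gcm_base",
		.create = crypto_gcm_base_create,
		.module = THIS_MODULE,
	}, {
		.name = "gcm",
		.create = crypto_gcm_create,
		.module = THIS_MODULE,
	},
};

static int __init crypto_gcm_module_init(void)
{
	return crypto_register_templates(crypto_gcm_tmpls,
					 ARRAY_SIZE(crypto_gcm_tmpls));
}

static void __exit crypto_gcm_module_exit(void)
{
	crypto_unregister_templates(crypto_gcm_tmpls,
				    ARRAY_SIZE(crypto_gcm_tmpls));
}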

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

747bd2a3 17-Jan-2019 Thomas Gleixner <tglx@linutronix.de>

crypto: morus - Convert to SPDX license identifiers

The license boiler plate text is not ideal for machine parsing. The kernel
uses SPDX license identifiers for that purpose, which replace the boiler
plate text.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ondrej Mosnacek <omosnacek@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

bb4ce825 17-Jan-2019 Thomas Gleixner <tglx@linutronix.de>

crypto: aegis - Convert to SPDX license identifiers

The license boiler plate text is not ideal for machine parsing. The kernel
uses SPDX license identifiers for that purpose, which replace the boiler
plate text.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ondrej Mosnacek <omosnacek@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ea5d8cfa 17-Jan-2019 Thomas Gleixner <tglx@linutronix.de>

crypto: aegis - Cleanup license mess

Precise and non-ambiguous license information is important. The recently
added aegis header file has a SPDX license identifier, which is nice, but
at the same time it has a contradictory license boiler plate text.

SPDX-License-Identifier: GPL-2.0

versus

* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version

Oh well.

As the other aegis related files are licensed under the GPL v2 or later,
it's assumed that the boiler plate code is correct, but the SPDX license
identifier is wrong.

Fix the SPDX identifier and remove the boiler plate as it is redundant.

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ondrej Mosnacek <omosnacek@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a0d608ee 13-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - unify the AEAD encryption and decryption test vectors

Currently testmgr has separate encryption and decryption test vectors
for AEADs. That's massively redundant, since usually the decryption
tests are identical to the encryption tests, just with the input/result
swapped. And for some algorithms it was forgotten to add decryption
test vectors, so for them currently only encryption is being tested.

Therefore, eliminate the redundancy by removing the AEAD decryption test
vectors and updating testmgr to test both AEAD encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each aead_testvec now has a 'ptext' (plaintext), 'plen'
(plaintext length), 'ctext' (ciphertext), and 'clen' (ciphertext length)
instead of an 'input', 'ilen', 'result', and 'rlen'. "Ciphertext" here
refers to the full ciphertext, including the authentication tag.

For now the scatterlist divisions are just given for the plaintext
length, not also the ciphertext length. For decryption, the last
scatterlist element is just extended by the authentication tag length.

In total, this removes over 5000 lines from testmgr.h, with no reduction
in test coverage since prior patches already copied the few unique
decryption test vectors into the encryption test vectors.

The testmgr.h portion of this patch was automatically generated using
the following awk script, except that I also manually updated the
definition of 'struct aead_testvec' and fixed the location of the
comment describing the AEGIS-128 test vectors.

BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }

/^static const struct aead_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct aead_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC {
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.ilen[[:space:]]*=/, ".plen\t=")
sub(/\.rlen[[:space:]]*=/, ".clen\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }

Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 1235 insertions(+), 6491 deletions(-)".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d7250b41 13-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add rfc4543(gcm(aes)) decryption test to encryption tests

One "rfc4543(gcm(aes))" decryption test vector doesn't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the AEAD decryption test vectors and testing AEAD
decryption using the encryption test vectors, add this to the encryption
test vectors, so we don't lose any test coverage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f38e8885 13-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add gcm(aes) decryption tests to encryption tests

Some "gcm(aes)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the AEAD decryption test vectors and testing AEAD
decryption using the encryption test vectors, add these to the
encryption test vectors, so we don't lose any test coverage.

In the case of the chunked test vector, I truncated the last scatterlist
element to the end of the plaintext.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

de845da9 13-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - add ccm(aes) decryption tests to encryption tests

Some "ccm(aes)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the AEAD decryption test vectors and testing AEAD
decryption using the encryption test vectors, add these to the
encryption test vectors, so we don't lose any test coverage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5bc3de58 13-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - skip AEAD encryption test vectors with novrfy set

In preparation for unifying the AEAD encryption and decryption test
vectors, skip AEAD test vectors with the 'novrfy' (verification failure
expected) flag set when testing encryption rather than decryption.
These test vectors only make sense for decryption.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6d0d6cfb 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: af_alg - remove redundant initializations of sk_family

sk_alloc() already sets sock::sk_family to PF_ALG which is passed as the
'family' argument, so there's no need to set it again.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7c39edfb 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: af_alg - use list_for_each_entry() in af_alg_count_tsgl()

af_alg_count_tsgl() iterates through a list without modifying it, so use
list_for_each_entry() rather than list_for_each_entry_safe(). Also make
the pointers 'const' to make it clearer that nothing is modified.

No actual change in behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

466e0759 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: af_alg - make some functions static

Some exported functions in af_alg.c aren't used outside of that file.
Therefore, un-export them and make them 'static'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

554557ce 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: stat - remove unused mutex

crypto_cfg_mutex in crypto_user_stat.c is unused. Remove it.

Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f990f7fb 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: tgr192 - fix unaligned memory access

Fix an unaligned memory access in tgr192_transform() by using the
unaligned access helpers.

Fixes: 06ace7a9bafe ("[CRYPTO] Use standard byte order macros wherever possible")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e17568e1 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: user - forward declare crypto_nlsk

Move the declaration of crypto_nlsk into internal/cryptouser.h. This
fixes the following sparse warning:

crypto/crypto_user_base.c:41:13: warning: symbol 'crypto_nlsk' was not declared. Should it be static?

Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cb9dde88 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: testmgr - handle endianness correctly in alg_test_crc32c()

The crc32c context is in CPU endianness, whereas the final digest is
little endian. alg_test_crc32c() got this mixed up. Fix it.

The test passes both before and after, but this patch fixes the
following sparse warning:

crypto/testmgr.c:1912:24: warning: cast to restricted __le32

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

73381da5 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: streebog - use correct endianness type

streebog_uint512::qword needs to be __le64, not u64. This fixes a large
number of sparse warnings:

crypto/streebog_generic.c:25:9: warning: incorrect type in initializer (different base types)
crypto/streebog_generic.c:25:9: expected unsigned long long
crypto/streebog_generic.c:25:9: got restricted __le64 [usertype]
[omitted many similar warnings]

No actual change in behavior.

Cc: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a1180cff 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: rsa-pkcs1pad - include <crypto/internal/rsa.h>

Include internal/rsa.h in rsa-pkcs1pad.c to get the declaration of
rsa_pkcs1pad_tmpl. This fixes the following sparse warning:

crypto/rsa-pkcs1pad.c:698:24: warning: symbol 'rsa_pkcs1pad_tmpl' was not declared. Should it be static?

Cc: Andrzej Zaborowski <andrew.zaborowski@intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

18666550 10-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: gcm - use correct endianness type in gcm_hash_len()

In gcm_hash_len(), use be128 rather than u128. This fixes the following
sparse warnings:

crypto/gcm.c:252:19: warning: incorrect type in assignment (different base types)
crypto/gcm.c:252:19: expected unsigned long long [usertype] a
crypto/gcm.c:252:19: got restricted __be64 [usertype]
crypto/gcm.c:253:19: warning: incorrect type in assignment (different base types)
crypto/gcm.c:253:19: expected unsigned long long [usertype] b
crypto/gcm.c:253:19: got restricted __be64 [usertype]

No actual change in behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0507de94 07-Jan-2019 Vitaly Chikunov <vt@altlinux.org>

crypto: testmgr - split akcipher tests by a key type

Before this, if an akcipher_testvec had `public_key_vec' set to true
(i.e. it carries a public key), only the sign/encrypt test was performed,
while the verify/decrypt test was skipped.

With a public key we can encrypt and verify, but signing and decrypting
require a private key.

This logic is correct for the encrypt/decrypt tests (decrypt is skipped if
there is no private key), but incorrect for the sign/verify tests: sign is
performed even when there is no private key, while verify is skipped even
though a public key is available.

Rework `test_akcipher_one' to arrange tests properly depending on value
of `public_key_vec` and `siggen_sigver_test'.

No tests were missed, since there is only one sign/verify test (which has
`siggen_sigver_test' set to true) and it has a private key, but future
tests could benefit from this improvement.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2b091e32 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - remove pointless checks of shash_alg::{export,import}

crypto_init_shash_ops_async() only gives the ahash tfm non-NULL
->export() and ->import() if the underlying shash alg has these
non-NULL. This doesn't make sense because when an shash algorithm is
registered, shash_prepare_alg() sets a default ->export() and ->import()
if the implementor didn't provide them. And elsewhere it's assumed that
all shash algs and ahash tfms have non-NULL ->export() and ->import().

Therefore, remove these unnecessary, always-true conditions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

41a2e94f 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: shash - require neither or both ->export() and ->import()

Prevent registering shash algorithms that implement ->export() but not
->import(), or ->import() but not ->export(). Such cases don't make
sense and could confuse the check that shash_prepare_alg() does for just
->export().

I don't believe this affects any existing algorithms; this is just
preventing future mistakes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6ebc9700 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: aead - set CRYPTO_TFM_NEED_KEY if ->setkey() fails

Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.

For example, in gcm.c, if the kzalloc() fails due to lack of memory,
then the CTR part of GCM will have the new key but GHASH will not.

It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails, to prevent the tfm from being
used until a new key is set.
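
From a caller's perspective the resulting behavior is roughly (a sketch):

err = crypto_aead_setkey(tfm, key, keylen);
if (err) {
	/* The tfm is now marked CRYPTO_TFM_NEED_KEY: encryption and
	 * decryption requests fail with -ENOKEY until a later setkey()
	 * succeeds. */
}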

[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]

Fixes: dc26c17f743a ("crypto: aead - prevent using AEADs without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b1f6b4bf 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - set CRYPTO_TFM_NEED_KEY if ->setkey() fails

Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.

For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of
memory, then priv::table will be left NULL. After that, encryption with
that tfm will cause a NULL pointer dereference.

It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.

[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]

Fixes: f8d33fac8480 ("crypto: skcipher - prevent using skciphers without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ba7d7433 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails

Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.

It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.

Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
->setkey() for those must nevertheless be atomic. That's fine for now
since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
not intended that OPTIONAL_KEY be used much.

[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]

Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6b476662 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: algapi - reject NULL crypto_spawn::inst

It took me a while to notice the bug where the adiantum template left
crypto_spawn::inst == NULL, because this only caused problems in certain
cases where algorithms are dynamically loaded/unloaded.

More improvements are needed, but for now make crypto_init_spawn()
reject this case and WARN(), so this type of bug will be noticed
immediately in the future.

Note: I checked all callers and the adiantum template was the only place
that had this wrong. So this WARN shouldn't trigger anymore.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

14aa1a83 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: algapi - remove crypto_alloc_instance()

Now that all "blkcipher" templates have been converted to "skcipher",
crypto_alloc_instance() is no longer used. And it's not useful any
longer as it creates an old-style weakly typed instance rather than a
new-style strongly typed instance. So remove it, and now that the name
is freed up rename crypto_alloc_instance2() to crypto_alloc_instance().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

31d40c20 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: null - convert ecb-cipher_null to skcipher API

Convert the "ecb-cipher_null" algorithm from the deprecated "blkcipher"
API to the "skcipher" API.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

426bcb50 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: arc4 - convert to skcipher API

Convert the "ecb(arc4)" algorithm from the deprecated "blkcipher" API to
the "skcipher" API.

(Note that this is really a stream cipher and not a block cipher in ECB
mode as the name implies, but that's a problem for another day...)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0be487ba 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: pcbc - convert to skcipher_alloc_instance_simple()

The PCBC template just wraps a single block cipher algorithm, so
simplify it by converting it to use skcipher_alloc_instance_simple().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fb6de25c 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: pcbc - remove ability to wrap internal ciphers

Following commit 944585a64f5e ("crypto: x86/aes-ni - remove special
handling of AES in PCBC mode"), it's no longer needed for the PCBC
template to support wrapping a cipher that has the CRYPTO_ALG_INTERNAL
flag set. Thus, remove this now-unused functionality to make PCBC
consistent with the other single block cipher templates.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

21f3ca6c 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: ofb - convert to skcipher_alloc_instance_simple()

The OFB template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().

Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6b611d98 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: keywrap - convert to skcipher API

Convert the keywrap template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.

Cc: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

52e9368f 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: ecb - convert to skcipher API

Convert the ECB template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

11f14630 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: ctr - convert to skcipher API

Convert the CTR template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

03b8302d 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: cfb - convert to skcipher_alloc_instance_simple()

The CFB template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().

Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a5a84a9d 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: cbc - convert to skcipher_alloc_instance_simple()

The CBC template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0872da16 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: skcipher - add helper for simple block cipher modes

The majority of skcipher templates (including both the existing ones and
the ones remaining to be converted from the "blkcipher" API) just wrap a
single block cipher algorithm. This includes cbc, cfb, ctr, ecb, kw,
ofb, and pcbc. Add a helper function skcipher_alloc_instance_simple()
that handles allocating an skcipher instance for this common case.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

251b7aea 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: pcbc - remove bogus memcpy()s with src == dest

The memcpy()s in the PCBC implementation use walk->iv as both the source
and destination, which has undefined behavior. These memcpy()'s are
actually unneeded, because walk->iv is already used to hold the previous
plaintext block XOR'd with the previous ciphertext block. Thus,
walk->iv is already updated to its final value.

So remove the broken and unnecessary memcpy()s.

Fixes: 91652be5d1b9 ("[CRYPTO] pcbc: Add Propagated CBC template")
Cc: <stable@vger.kernel.org> # v2.6.21+
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b3e3e2db 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: ofb - fix handling partial blocks and make thread-safe

Fix multiple bugs in the OFB implementation:

1. It stored the per-request state 'cnt' in the tfm context, which can be
used by multiple threads concurrently (e.g. via AF_ALG).
2. It didn't support messages not a multiple of the block cipher size,
despite being a stream cipher.
3. It didn't set cra_blocksize to 1 to indicate it is a stream cipher.

To fix these, set the 'chunksize' property to the cipher block size to
guarantee that when walking through the scatterlist, a partial block can
only occur at the end. Then change the implementation to XOR a block at
a time at first, then XOR the partial block at the end if needed. This
is the same way CTR and CFB are implemented. As a bonus, this also
improves performance in most cases over the current approach.
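
The resulting loop is the usual pattern for such stream-like modes; a condensed
sketch (walk bookkeeping elided; crypto_cipher_encrypt_one() and crypto_xor_cpy()
are the existing helpers):

while (walk.nbytes >= bsize) {	/* whole blocks */
	do {
		crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
		crypto_xor_cpy(dst, src, walk.iv, bsize);
		dst += bsize;
		src += bsize;
	} while ((nbytes -= bsize) >= bsize);
	err = skcipher_walk_done(&walk, nbytes);
}

if (walk.nbytes) {	/* a partial block can only occur at the very end */
	crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
	crypto_xor_cpy(dst, src, walk.iv, walk.nbytes);
	err = skcipher_walk_done(&walk, 0);
}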

Fixes: e497c51896b3 ("crypto: ofb - add output feedback mode")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6c2e322b 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: cfb - remove bogus memcpy() with src == dest

The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the
source and destination, which has undefined behavior. It is unneeded
because walk->iv is already used to hold the previous ciphertext block;
thus, walk->iv is already updated to its final value. So, remove it.

Also, note that in-place decryption is the only case where the previous
ciphertext block is not directly available. Therefore, as a related
cleanup I also updated crypto_cfb_encrypt_segment() to directly use the
previous ciphertext block rather than save it into walk->iv. This makes
it consistent with in-place encryption and out-of-place decryption; now
only in-place decryption is different, because it has to be.

Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

394a9e04 03-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: cfb - add missing 'chunksize' property

Like some other block cipher mode implementations, the CFB
implementation assumes that while walking through the scatterlist, a
partial block does not occur until the end. But the walk is incorrectly
being done with a blocksize of 1, as 'cra_blocksize' is set to 1 (since
CFB is a stream cipher) but no 'chunksize' is set. This bug causes
incorrect encryption/decryption for some scatterlist layouts.

Fix it by setting the 'chunksize'. Also extend the CFB test vectors to
cover this bug as well as cases where the message length is not a
multiple of the block size.

Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

af8cb01f 28-Dec-2018 haco <minhaco@msn.com>

crypto: Kconfig - Fix typo in "pclmul"

Fix typo "plcmul" to "pclmul"

Signed-off-by: Huaxuan Mao <minhaco@msn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d45a90cb 08-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: sm3 - fix undefined shift by >= width of value

sm3_compress() calls rol32() with shift >= 32, which causes undefined
behavior. This is easily detected by enabling CONFIG_UBSAN.

Explicitly AND with 31 to make the behavior well defined.
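
A minimal illustration of the fix pattern (rol32() comes from <linux/bitops.h>):

/* before: undefined behavior once the shift count reaches 32 */
w = rol32(w, n);

/* after: reduce the shift count mod 32 first */
w = rol32(w, n & 31);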

Fixes: 4f0fc1600edb ("crypto: sm3 - add OSCCA SM3 secure hash")
Cc: <stable@vger.kernel.org> # v4.15+
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6db43410 06-Jan-2019 Eric Biggers <ebiggers@google.com>

crypto: adiantum - initialize crypto_spawn::inst

crypto_grab_*() doesn't set crypto_spawn::inst, so templates must set it
beforehand. Otherwise it will be left NULL, which causes a crash in
certain cases where algorithms are dynamically loaded/unloaded. E.g.
with CONFIG_CRYPTO_CHACHA20_X86_64=m, the following caused a crash:

insmod chacha-x86_64.ko
python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(("skcipher", "adiantum(xchacha12,aes)"))'
rmmod chacha-x86_64.ko
python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(("skcipher", "adiantum(xchacha12,aes)"))'

Fixes: 059c2a4d8e16 ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a7773363 03-Jan-2019 Harsh Jain <harsh@chelsio.com>

crypto: authencesn - Avoid twice completion call in decrypt path

The authencesn template's decrypt path unconditionally calls
aead_request_complete() after ahash_verify(), which leads to the following
kernel panic after decryption.

[ 338.539800] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
[ 338.548372] PGD 0 P4D 0
[ 338.551157] Oops: 0000 [#1] SMP PTI
[ 338.554919] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G W I 4.19.7+ #13
[ 338.564431] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0 07/29/10
[ 338.572212] RIP: 0010:esp_input_done2+0x350/0x410 [esp4]
[ 338.578030] Code: ff 0f b6 68 10 48 8b 83 c8 00 00 00 e9 8e fe ff ff 8b 04 25 04 00 00 00 83 e8 01 48 98 48 8b 3c c5 10 00 00 00 e9 f7 fd ff ff <8b> 04 25 04 00 00 00 83 e8 01 48 98 4c 8b 24 c5 10 00 00 00 e9 3b
[ 338.598547] RSP: 0018:ffff911c97803c00 EFLAGS: 00010246
[ 338.604268] RAX: 0000000000000002 RBX: ffff911c4469ee00 RCX: 0000000000000000
[ 338.612090] RDX: 0000000000000000 RSI: 0000000000000130 RDI: ffff911b87c20400
[ 338.619874] RBP: 0000000000000000 R08: ffff911b87c20498 R09: 000000000000000a
[ 338.627610] R10: 0000000000000001 R11: 0000000000000004 R12: 0000000000000000
[ 338.635402] R13: ffff911c89590000 R14: ffff911c91730000 R15: 0000000000000000
[ 338.643234] FS: 0000000000000000(0000) GS:ffff911c97800000(0000) knlGS:0000000000000000
[ 338.652047] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 338.658299] CR2: 0000000000000004 CR3: 00000001ec20a000 CR4: 00000000000006f0
[ 338.666382] Call Trace:
[ 338.669051] <IRQ>
[ 338.671254] esp_input_done+0x12/0x20 [esp4]
[ 338.675922] chcr_handle_resp+0x3b5/0x790 [chcr]
[ 338.680949] cpl_fw6_pld_handler+0x37/0x60 [chcr]
[ 338.686080] chcr_uld_rx_handler+0x22/0x50 [chcr]
[ 338.691233] uldrx_handler+0x8c/0xc0 [cxgb4]
[ 338.695923] process_responses+0x2f0/0x5d0 [cxgb4]
[ 338.701177] ? bitmap_find_next_zero_area_off+0x3a/0x90
[ 338.706882] ? matrix_alloc_area.constprop.7+0x60/0x90
[ 338.712517] ? apic_update_irq_cfg+0x82/0xf0
[ 338.717177] napi_rx_handler+0x14/0xe0 [cxgb4]
[ 338.722015] net_rx_action+0x2aa/0x3e0
[ 338.726136] __do_softirq+0xcb/0x280
[ 338.730054] irq_exit+0xde/0xf0
[ 338.733504] do_IRQ+0x54/0xd0
[ 338.736745] common_interrupt+0xf/0xf

Fixes: 104880a6b470 ("crypto: authencesn - Convert to new AEAD...")
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8f9c4693 17-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: authenc - fix parsing key with misaligned rta_len

Keys for "authenc" AEADs are formatted as an rtattr containing a 4-byte
'enckeylen', followed by an authentication key and an encryption key.
crypto_authenc_extractkeys() parses the key to find the inner keys.

However, it fails to consider the case where the rtattr's payload is
longer than 4 bytes but not 4-byte aligned, and where the key ends
before the next 4-byte aligned boundary. In this case, 'keylen -=
RTA_ALIGN(rta->rta_len);' underflows to a value near UINT_MAX. This
causes a buffer overread and crash during crypto_ahash_setkey().

Fix it by restricting the rtattr payload to the expected size.

Reproducer using AF_ALG:

#include <linux/if_alg.h>
#include <linux/rtnetlink.h>
#include <sys/socket.h>

int main()
{
int fd;
struct sockaddr_alg addr = {
.salg_type = "aead",
.salg_name = "authenc(hmac(sha256),cbc(aes))",
};
struct {
struct rtattr attr;
__be32 enckeylen;
char keys[1];
} __attribute__((packed)) key = {
.attr.rta_len = sizeof(key),
.attr.rta_type = 1 /* CRYPTO_AUTHENC_KEYA_PARAM */,
};

fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
bind(fd, (void *)&addr, sizeof(addr));
setsockopt(fd, SOL_ALG, ALG_SET_KEY, &key, sizeof(key));
}

It caused:

BUG: unable to handle kernel paging request at ffff88007ffdc000
PGD 2e01067 P4D 2e01067 PUD 2e04067 PMD 2e05067 PTE 0
Oops: 0000 [#1] SMP
CPU: 0 PID: 883 Comm: authenc Not tainted 4.20.0-rc1-00108-g00c9fe37a7f27 #13
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
RIP: 0010:sha256_ni_transform+0xb3/0x330 arch/x86/crypto/sha256_ni_asm.S:155
[...]
Call Trace:
sha256_ni_finup+0x10/0x20 arch/x86/crypto/sha256_ssse3_glue.c:321
crypto_shash_finup+0x1a/0x30 crypto/shash.c:178
shash_digest_unaligned+0x45/0x60 crypto/shash.c:186
crypto_shash_digest+0x24/0x40 crypto/shash.c:202
hmac_setkey+0x135/0x1e0 crypto/hmac.c:66
crypto_shash_setkey+0x2b/0xb0 crypto/shash.c:66
shash_async_setkey+0x10/0x20 crypto/shash.c:223
crypto_ahash_setkey+0x2d/0xa0 crypto/ahash.c:202
crypto_authenc_setkey+0x68/0x100 crypto/authenc.c:96
crypto_aead_setkey+0x2a/0xc0 crypto/aead.c:62
aead_setkey+0xc/0x10 crypto/algif_aead.c:526
alg_setkey crypto/af_alg.c:223 [inline]
alg_setsockopt+0xfe/0x130 crypto/af_alg.c:256
__sys_setsockopt+0x6d/0xd0 net/socket.c:1902
__do_sys_setsockopt net/socket.c:1913 [inline]
__se_sys_setsockopt net/socket.c:1910 [inline]
__x64_sys_setsockopt+0x1f/0x30 net/socket.c:1910
do_syscall_64+0x4a/0x180 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe

Fixes: e236d4a89a2f ("[CRYPTO] authenc: Move enckeylen into key itself")
Cc: <stable@vger.kernel.org> # v2.6.25+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

769e4709 29-Dec-2018 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'kconfig-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kconfig updates from Masahiro Yamada:

- support -y option for merge_config.sh to avoid downgrading =y to =m

- remove S_OTHER symbol type, and touch include/config/*.h files correctly

- fix file name and line number in lexer warnings

- fix memory leak when EOF is encountered in quotation

- resolve all shift/reduce conflicts of the parser

- warn no new line at end of file

- make 'source' statement more strict to take only string literal

- rewrite the lexer and remove the keyword lookup table

- convert to SPDX License Identifier

- compile C files independently instead of including them from zconf.y

- fix various warnings of gconfig

- misc cleanups

* tag 'kconfig-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (39 commits)
kconfig: surround dbg_sym_flags with #ifdef DEBUG to fix gconf warning
kconfig: split images.c out of qconf.cc/gconf.c to fix gconf warnings
kconfig: add static qualifiers to fix gconf warnings
kconfig: split the lexer out of zconf.y
kconfig: split some C files out of zconf.y
kconfig: convert to SPDX License Identifier
kconfig: remove keyword lookup table entirely
kconfig: update current_pos in the second lexer
kconfig: switch to ASSIGN_VAL state in the second lexer
kconfig: stop associating kconf_id with yylval
kconfig: refactor end token rules
kconfig: stop supporting '.' and '/' in unquoted words
treewide: surround Kconfig file paths with double quotes
microblaze: surround string default in Kconfig with double quotes
kconfig: use T_WORD instead of T_VARIABLE for variables
kconfig: use specific tokens instead of T_ASSIGN for assignments
kconfig: refactor scanning and parsing "option" properties
kconfig: use distinct tokens for type and default properties
kconfig: remove redundant token defines
kconfig: rename depends_list to comment_option_list
...


b71acb0e 27-Dec-2018 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Add 1472-byte test to tcrypt for IPsec
- Reintroduced crypto stats interface with numerous changes
- Support incremental algorithm dumps

Algorithms:
- Add xchacha12/20
- Add nhpoly1305
- Add adiantum
- Add streebog hash
- Mark cts(cbc(aes)) as FIPS allowed

Drivers:
- Improve performance of arm64/chacha20
- Improve performance of x86/chacha20
- Add NEON-accelerated nhpoly1305
- Add SSE2 accelerated nhpoly1305
- Add AVX2 accelerated nhpoly1305
- Add support for 192/256-bit keys in gcmaes AVX
- Add SG support in gcmaes AVX
- ESN for inline IPsec tx in chcr
- Add support for CryptoCell 703 in ccree
- Add support for CryptoCell 713 in ccree
- Add SM4 support in ccree
- Add SM3 support in ccree
- Add support for chacha20 in caam/qi2
- Add support for chacha20 + poly1305 in caam/jr
- Add support for chacha20 + poly1305 in caam/qi2
- Add AEAD cipher support in cavium/nitrox"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (130 commits)
crypto: skcipher - remove remnants of internal IV generators
crypto: cavium/nitrox - Fix build with !CONFIG_DEBUG_FS
crypto: salsa20-generic - don't unnecessarily use atomic walk
crypto: skcipher - add might_sleep() to skcipher_walk_virt()
crypto: x86/chacha - avoid sleeping under kernel_fpu_begin()
crypto: cavium/nitrox - Added AEAD cipher support
crypto: mxc-scc - fix build warnings on ARM64
crypto: api - document missing stats member
crypto: user - remove unused dump functions
crypto: chelsio - Fix wrong error counter increments
crypto: chelsio - Reset counters on cxgb4 Detach
crypto: chelsio - Handle PCI shutdown event
crypto: chelsio - cleanup:send addr as value in function argument
crypto: chelsio - Use same value for both channel in single WR
crypto: chelsio - Swap location of AAD and IV sent in WR
crypto: chelsio - remove set but not used variable 'kctx_len'
crypto: ux500 - Use proper enum in hash_set_dma_transfer
crypto: ux500 - Use proper enum in cryp_set_dma_transfer
crypto: aesni - Add scatter/gather avx stubs, and use them in C
crypto: aesni - Introduce partial block macro
...


792bf4d8 26-Dec-2018 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Ingo Molnar:
"The biggest RCU changes in this cycle were:

- Convert RCU's BUG_ON() and similar calls to WARN_ON() and similar.

- Replace calls of RCU-bh and RCU-sched update-side functions to
their vanilla RCU counterparts. This series is a step towards
complete removal of the RCU-bh and RCU-sched update-side functions.

( Note that some of these conversions are going upstream via their
respective maintainers. )

- Documentation updates, including a number of flavor-consolidation
updates from Joel Fernandes.

- Miscellaneous fixes.

- Automate generation of the initrd filesystem used for rcutorture
testing.

- Convert spin_is_locked() assertions to instead use lockdep.

( Note that some of these conversions are going upstream via their
respective maintainers. )

- SRCU updates, especially including a fix from Dennis Krein for a
bag-on-head-class bug.

- RCU torture-test updates"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (112 commits)
rcutorture: Don't do busted forward-progress testing
rcutorture: Use 100ms buckets for forward-progress callback histograms
rcutorture: Recover from OOM during forward-progress tests
rcutorture: Print forward-progress test age upon failure
rcutorture: Print time since GP end upon forward-progress failure
rcutorture: Print histogram of CB invocation at OOM time
rcutorture: Print GP age upon forward-progress failure
rcu: Print per-CPU callback counts for forward-progress failures
rcu: Account for nocb-CPU callback counts in RCU CPU stall warnings
rcutorture: Dump grace-period diagnostics upon forward-progress OOM
rcutorture: Prepare for asynchronous access to rcu_fwd_startat
torture: Remove unnecessary "ret" variables
rcutorture: Affinity forward-progress test to avoid housekeeping CPUs
rcutorture: Break up too-long rcu_torture_fwd_prog() function
rcutorture: Remove cbflood facility
torture: Bring any extra CPUs online during kernel startup
rcutorture: Add call_rcu() flooding forward-progress tests
rcutorture/formal: Replace synchronize_sched() with synchronize_rcu()
tools/kernel.h: Replace synchronize_sched() with synchronize_rcu()
net/decnet: Replace rcu_barrier_bh() with rcu_barrier()
...


c79b411e 16-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: skcipher - remove remnants of internal IV generators

Remove dead code related to internal IV generators, which are no longer
used since they've been replaced with the "seqiv" and "echainiv"
templates. The removed code includes:

- The "givcipher" (GIVCIPHER) algorithm type. No algorithms are
registered with this type anymore, so it's unneeded.

- The "const char *geniv" member of aead_alg, ablkcipher_alg, and
blkcipher_alg. A few algorithms still set this, but it isn't used
anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
Just hardcode "<default>" or "<none>" in those cases.

- The 'skcipher_givcrypt_request' structure, which is never used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

101b53d9 15-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: salsa20-generic - don't unnecessarily use atomic walk

salsa20-generic doesn't use SIMD instructions or otherwise disable
preemption, so passing atomic=true to skcipher_walk_virt() is
unnecessary.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

bb648291 15-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: skcipher - add might_sleep() to skcipher_walk_virt()

skcipher_walk_virt() can still sleep even with atomic=true, since that
only affects the later calls to skcipher_walk_done(). But,
skcipher_walk_virt() only has to allocate memory for some input data
layouts, so incorrectly calling it with preemption disabled can go
undetected. Use might_sleep() so that it's detected reliably.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0c99c2a0 13-Dec-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - remove unused dump functions

This patch removes unused dump functions for crypto_user_stats.
They are leftovers from the copy/paste of crypto_user_base to
crypto_user_stat that I forgot to remove.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8636a1f9 11-Dec-2018 Masahiro Yamada <yamada.masahiro@socionext.com>

treewide: surround Kconfig file paths with double quotes

The Kconfig lexer supports special characters such as '.' and '/' in
the parameter context. In my understanding, the reason is just to
support bare file paths in the source statement.

I do not see a good reason to complicate Kconfig for the room of
ambiguity.

The majority of code already surrounds file paths with double quotes,
and it makes sense since file paths are constant string literals.

Make it treewide consistent now.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Ingo Molnar <mingo@kernel.org>

00c9fe37 10-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: adiantum - fix leaking reference to hash algorithm

crypto_alg_mod_lookup() takes a reference to the hash algorithm but
crypto_init_shash_spawn() doesn't take ownership of it, hence the
reference needs to be dropped in adiantum_create().

Fixes: 059c2a4d8e16 ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0ac6b8fb 06-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: user - support incremental algorithm dumps

CRYPTO_MSG_GETALG in NLM_F_DUMP mode sometimes doesn't return all
registered crypto algorithms, because it doesn't support incremental
dumps. crypto_dump_report() only permits itself to be called once, yet
the netlink subsystem allocates at most ~64 KiB for the skb being dumped
to. Thus only the first recvmsg() returns data, and it may only include
a subset of the crypto algorithms even if the user buffer passed to
recvmsg() is large enough to hold all of them.

Fix this by using one of the arguments in the netlink_callback structure
to keep track of the current position in the algorithm list. Then
userspace can do multiple recvmsg() on the socket after sending the dump
request. This is the way netlink dumps work elsewhere in the kernel;
it's unclear why this was different (probably just an oversight).

Also fix an integer overflow when calculating the dump buffer size hint.

Fixes: a38f7907b926 ("crypto: Add userspace configuration API")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
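
With this change a userspace consumer can simply keep reading until it sees
NLMSG_DONE.  A rough sketch, assuming 'fd' is a NETLINK_CRYPTO socket on
which a CRYPTO_MSG_GETALG request with NLM_F_DUMP has already been sent:

    #include <linux/netlink.h>
    #include <sys/socket.h>

    static void read_alg_dump(int fd)
    {
        char buf[16384];

        for (;;) {
            int len = recv(fd, buf, sizeof(buf), 0);
            struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

            if (len <= 0)
                return;
            for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                if (nlh->nlmsg_type == NLMSG_DONE)
                    return;
                /* each message here carries one crypto_user_alg report */
            }
        }
    }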

c6018e1a 06-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: adiantum - adjust some comments to match latest paper

The 2018-11-28 revision of the Adiantum paper has revised some notation:

- 'M' was replaced with 'L' (meaning "Left", for the left-hand part of
the message) in the definition of Adiantum hashing, to avoid confusion
with the full message
- ε-almost-∆-universal is now abbreviated as ε-∆U instead of εA∆U
- "block" is now used only to mean block cipher and Poly1305 blocks

Also, Adiantum hashing was moved from the appendix to the main paper.

To avoid confusion, update relevant comments in the code to match.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

282c1485 06-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: xchacha20 - fix comments for test vectors

The kernel's ChaCha20 uses the RFC7539 convention of the nonce being 12
bytes rather than 8, so actually I only appended 12 random bytes (not
16) to its test vectors to form 24-byte nonces for the XChaCha20 test
vectors. The other 4 bytes were just from zero-padding the stream
position to 8 bytes. Fix the comments above the test vectors.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5569e8c0 06-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: xchacha - add test vector from XChaCha20 draft RFC

There is a draft specification for XChaCha20 being worked on. Add the
XChaCha20 test vector from the appendix so that we can be extra sure the
kernel's implementation is compatible.

I also recomputed the ciphertext with XChaCha12 and added it there too,
to keep the tests for XChaCha20 and XChaCha12 in sync.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

7a507d62 04-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: x86/chacha - add XChaCha12 support

Now that the x86_64 SIMD implementations of ChaCha20 and XChaCha20 have
been refactored to support varying the number of rounds, add support for
XChaCha12. This is identical to XChaCha20 except for the number of
rounds, which is 12 instead of 20. This can be used by Adiantum.

Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4af78261 04-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: x86/chacha20 - add XChaCha20 support

Add an XChaCha20 implementation that is hooked up to the x86_64 SIMD
implementations of ChaCha20. This can be used by Adiantum.

An SSSE3 implementation of single-block HChaCha20 is also added so that
XChaCha20 can use it rather than the generic implementation. This
required refactoring the ChaCha permutation into its own function.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0f961f9f 04-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: x86/nhpoly1305 - add AVX2 accelerated NHPoly1305

Add a 64-bit AVX2 implementation of NHPoly1305, an ε-almost-∆-universal
hash function used in the Adiantum encryption mode. For now, only the
NH portion is actually AVX2-accelerated; the Poly1305 part is less
performance-critical so is just implemented in C.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

012c8238 04-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: x86/nhpoly1305 - add SSE2 accelerated NHPoly1305

Add a 64-bit SSE2 implementation of NHPoly1305, an ε-almost-∆-universal
hash function used in the Adiantum encryption mode. For now, only the
NH portion is actually SSE2-accelerated; the Poly1305 part is less
performance-critical so is just implemented in C.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b299362e 04-Dec-2018 Eric Biggers <ebiggers@google.com>

crypto: adiantum - propagate CRYPTO_ALG_ASYNC flag to instance

If the stream cipher implementation is asynchronous, then the Adiantum
instance must be flagged as asynchronous as well. Otherwise someone
asking for a synchronous algorithm can get an asynchronous algorithm.

There are no asynchronous xchacha12 or xchacha20 implementations yet
which makes this largely a theoretical issue, but it should be fixed.

Fixes: 059c2a4d8e16 ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ee5bbc9f 04-Dec-2018 Ard Biesheuvel <ardb@kernel.org>

crypto: tcrypt - add block size of 1472 to skcipher template

In order to have better coverage of algorithms operating on block
sizes that are in the ballpark of a VPN packet, add 1472 to the
block_sizes array.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1f6669b9 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - Add crypto_stats_init

This patch adds the crypto_stats_init() function.
This will make it possible to remove some ifdefs from __crypto_register_alg().

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

44f13133 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - rename err_cnt parameter

Now that all crypto stats live in their own structures, there is no
longer any need to keep the algorithm name in the err_cnt member.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

17c18f9e 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - Split stats in multiple structures

As was done for the userspace structures, this patch splits the stats
into multiple structures, one for each algorithm class.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5fff8172 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - remove intermediate variable

The v64 intermediate variable is unnecessary, and removing it makes
the code more readable.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b0af91c1 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - Fix invalid stat reporting

Some error counts were read using the wrong member name.
This has not caused any reporting problems so far, since all error counts
share the same union.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f7d76e05 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - fix use_after_free of struct xxx_request

All crypto_stats functions use the struct xxx_request for feeding stats,
but in some cases this structure may already have been freed.

To fix this, the needed parameters (len and alg) are now stored before
the request is executed.
Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
Reported-by: syzbot <syzbot+6939a606a5305e9e9799@syzkaller.appspotmail.com>

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
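
The shape of the fix, sketched for one cipher operation (illustrative only;
the stats helper name below is a hypothetical stand-in, not the exact API
introduced by the patch):

    /* Capture what the stats code needs before submitting the request,
     * since the request may complete and be freed asynchronously. */
    struct crypto_alg *alg = req->base.tfm->__crt_alg;
    unsigned int len = req->cryptlen;
    int ret;

    ret = crypto_skcipher_encrypt(req);           /* req may already be gone here */
    crypto_stats_record_encrypt(len, ret, alg);   /* hypothetical stats helper */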

7f0a9d5c 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - split user space crypto stat structures

It is cleaner to have each stat in their own structures.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

6e8e72cd 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - convert all stats from u32 to u64

All the 32-bit fields need to be 64-bit: in some cases, UINT32_MAX crypto
operations can be performed in a matter of seconds, so 32-bit counters
would overflow quickly.

Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a6a31385 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - CRYPTO_STATS should depend on CRYPTO_USER

CRYPTO_STATS uses CRYPTO_USER infrastructure, so it should depend on it.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2ced2607 29-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - made crypto_user_stat optional

Even if CRYPTO_STATS is set to n, some parts of the crypto stats code
are still compiled.
With this patch, none of crypto_user_stat is compiled in that case.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

946dca8f 06-Dec-2018 Herbert Xu <herbert@gondor.apana.org.au>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Merge crypto tree to pick up crypto stats API revert.


e61efff4 06-Dec-2018 Herbert Xu <herbert@gondor.apana.org.au>

crypto: user - Disable statistics interface

Since this user-space API is still undergoing significant changes,
this patch disables it for the current merge window.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4bbfd746 03-Dec-2018 Ingo Molnar <mingo@kernel.org>

Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull RCU changes from Paul E. McKenney:

- Convert RCU's BUG_ON() and similar calls to WARN_ON() and similar.

- Replace calls of RCU-bh and RCU-sched update-side functions
to their vanilla RCU counterparts. This series is a step
towards complete removal of the RCU-bh and RCU-sched update-side
functions.

( Note that some of these conversions are going upstream via their
respective maintainers. )

- Documentation updates, including a number of flavor-consolidation
updates from Joel Fernandes.

- Miscellaneous fixes.

- Automate generation of the initrd filesystem used for
rcutorture testing.

- Convert spin_is_locked() assertions to instead use lockdep.

( Note that some of these conversions are going upstream via their
respective maintainers. )

- SRCU updates, especially including a fix from Dennis Krein
for a bag-on-head-class bug.

- RCU torture-test updates.

Signed-off-by: Ingo Molnar <mingo@kernel.org>


e5bde04c 22-Nov-2018 Pan Bian <bianpan2016@163.com>

crypto: do not free algorithm before using

In multiple functions, algorithm fields are read after the algorithm's
reference has been dropped through crypto_mod_put. At that point the
algorithm's memory may already have been freed, resulting in
use-after-free bugs. This patch delays the put operation until the
algorithm is no longer used.

Fixes: 79c65d179a40 ("crypto: cbc - Convert to skcipher")
Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Fixes: 043a44001b9e ("crypto: pcbc - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a0076e17 05-Nov-2018 Paul E. McKenney <paulmck@kernel.org>

crypto/pcrypt: Replace synchronize_rcu_bh() with synchronize_rcu()

Now that synchronize_rcu() waits for bh-disable regions of code as
well as RCU read-side critical sections, the synchronize_rcu_bh() in
pcrypt_cpumask_change_notify() can be replaced by synchronize_rcu().
This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: <linux-crypto@vger.kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>

059c2a4d 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: adiantum - add Adiantum support

Add support for the Adiantum encryption mode. Adiantum was designed by
Paul Crowley and is specified by our paper:

Adiantum: length-preserving encryption for entry-level processors
(https://eprint.iacr.org/2018/720.pdf)

See our paper for full details; this patch only provides an overview.

Adiantum is a tweakable, length-preserving encryption mode designed for
fast and secure disk encryption, especially on CPUs without dedicated
crypto instructions. Adiantum encrypts each sector using the XChaCha12
stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash
function, and an invocation of the AES-256 block cipher on a single
16-byte block. On CPUs without AES instructions, Adiantum is much
faster than AES-XTS; for example, on ARM Cortex-A7, on 4096-byte sectors
Adiantum encryption is about 4 times faster than AES-256-XTS encryption,
and decryption about 5 times faster.

Adiantum is a specialization of the more general HBSH construction. Our
earlier proposal, HPolyC, was also a HBSH specialization, but it used a
different εA∆U hash function, one based on Poly1305 only. Adiantum's
εA∆U hash function, which is based primarily on the "NH" hash function
like that used in UMAC (RFC4418), is about twice as fast as HPolyC's;
consequently, Adiantum is about 20% faster than HPolyC.

This speed comes with no loss of security: Adiantum is provably just as
secure as HPolyC, in fact slightly *more* secure. Like HPolyC,
Adiantum's security is reducible to that of XChaCha12 and AES-256,
subject to a security bound. XChaCha12 itself has a security reduction
to ChaCha12. Therefore, one need not "trust" Adiantum; one need only
trust ChaCha12 and AES-256. Note that the εA∆U hash function is only
used for its proven combinatorial properties, so it cannot be "broken".

Adiantum is also a true wide-block encryption mode, so flipping any
plaintext bit in the sector scrambles the entire ciphertext, and vice
versa. No other such mode is available in the kernel currently; doing
the same with XTS scrambles only 16 bytes. Adiantum also supports
arbitrary-length tweaks and naturally supports any length input >= 16
bytes without needing "ciphertext stealing".

For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in
order to make encryption feasible on the widest range of devices.
Although the 20-round variant is quite popular, the best known attacks
on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial
security margin; in fact, larger than AES-256's. 12-round Salsa20 is
also the eSTREAM recommendation. For the block cipher, Adiantum uses
AES-256, despite it having a lower security margin than XChaCha12 and
needing table lookups, due to AES's extensive adoption and analysis
making it the obvious first choice. Nevertheless, for flexibility this
patch also permits the "adiantum" template to be instantiated with
XChaCha20 and/or with an alternate block cipher.

We need Adiantum support in the kernel for use in dm-crypt and fscrypt,
where currently the only other suitable options are block cipher modes
such as AES-XTS. A big problem with this is that many low-end mobile
devices (e.g. Android Go phones sold primarily in developing countries,
as well as some smartwatches) still have CPUs that lack AES
instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too
slow to be viable on these devices. We did find that some "lightweight"
block ciphers are fast enough, but these suffer from problems such as
not having much cryptanalysis or being too controversial.

The ChaCha stream cipher has excellent performance but is insecure to
use directly for disk encryption, since each sector's IV is reused each
time it is overwritten. Even restricting the threat model to offline
attacks only isn't enough, since modern flash storage devices don't
guarantee that "overwrites" are really overwrites, due to wear-leveling.
Adiantum avoids this problem by constructing a
"tweakable super-pseudorandom permutation"; this is the strongest
possible security model for length-preserving encryption.

Of course, storing random nonces along with the ciphertext would be the
ideal solution. But doing that with existing hardware and filesystems
runs into major practical problems; in most cases it would require data
journaling (like dm-integrity) which severely degrades performance.
Thus, for now length-preserving encryption is still needed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
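
A minimal in-kernel usage sketch (error handling trimmed), assuming the
default instantiation "adiantum(xchacha12,aes)", a 32-byte key, and a
4096-byte sector buffer 'sector_buf' supplied by the caller:

    struct crypto_skcipher *tfm;
    struct skcipher_request *req;
    struct scatterlist sg;
    u8 key[32], iv[32] = { 0 };     /* the IV carries the per-sector tweak */
    DECLARE_CRYPTO_WAIT(wait);

    tfm = crypto_alloc_skcipher("adiantum(xchacha12,aes)", 0, 0);
    crypto_skcipher_setkey(tfm, key, sizeof(key));

    req = skcipher_request_alloc(tfm, GFP_KERNEL);
    sg_init_one(&sg, sector_buf, 4096);
    skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
    skcipher_request_set_crypt(req, &sg, &sg, 4096, iv);
    crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

    skcipher_request_free(req);
    crypto_free_skcipher(tfm);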

26609a21 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: nhpoly1305 - add NHPoly1305 support

Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash
function used in the Adiantum encryption mode.

CONFIG_NHPOLY1305 is not selectable by itself since there won't be any
real reason to enable it without also enabling Adiantum support.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1b6fd3d5 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: poly1305 - add Poly1305 core API

Expose a low-level Poly1305 API which implements the
ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC
and supports block-aligned inputs only.

This is needed for Adiantum hashing, which builds an εA∆U hash function
from NH and a polynomial evaluation in GF(2^{130}-5); this polynomial
evaluation is identical to the one the Poly1305 MAC does. However, the
crypto_shash Poly1305 API isn't very appropriate for this because its
calling convention assumes it is used as a MAC, with a 32-byte "one-time
key" provided for every digest.

But by design, in Adiantum hashing the performance of the polynomial
evaluation isn't nearly as critical as NH. So it suffices to just have
some C helper functions. Thus, this patch adds such functions.

Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

878afc35 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: poly1305 - use structures for key and accumulator

In preparation for exposing a low-level Poly1305 API which implements
the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305
MAC and supports block-aligned inputs only, create structures
poly1305_key and poly1305_state which hold the limbs of the Poly1305
"r" key and accumulator, respectively.

These structures could actually have the same type (e.g. poly1305_val),
but different types are preferable, to prevent misuse.

Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

aa762409 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: chacha - add XChaCha12 support

Now that the generic implementation of ChaCha20 has been refactored to
allow varying the number of rounds, add support for XChaCha12, which is
the XSalsa construction applied to ChaCha12. ChaCha12 is one of the
three ciphers specified by the original ChaCha paper
(https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of
Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than
ChaCha20 but has a lower, but still large, security margin.

We need XChaCha12 support so that it can be used in the Adiantum
encryption mode, which enables disk/file encryption on low-end mobile
devices where AES-XTS is too slow as the CPUs lack AES instructions.

We'd prefer XChaCha20 (the more popular variant), but it's too slow on
some of our target devices, so at least in some cases we do need the
XChaCha12-based version. In more detail, the problem is that Adiantum
is still much slower than we're happy with, and encryption still has a
quite noticeable effect on the feel of low-end devices. Users and
vendors push back hard against encryption that degrades the user
experience, which always risks encryption being disabled entirely. So
we need to choose the fastest option that gives us a solid margin of
security, and here that's XChaCha12. The best known attack on ChaCha
breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's
security margin is still better than AES-256's. Much has been learned
about cryptanalysis of ARX ciphers since Salsa20 was originally designed
in 2005, and it now seems we can be comfortable with a smaller number of
rounds. The eSTREAM project also suggests the 12-round version of
Salsa20 as providing the best balance among the different variants:
combining very good performance with a "comfortable margin of security".

Note that it would be trivial to add vanilla ChaCha12 in addition to
XChaCha12. However, it's unneeded for now and therefore is omitted.

As discussed in the patch that introduced XChaCha20 support, I
considered splitting the code into separate chacha-common, chacha20,
xchacha20, and xchacha12 modules, so that these algorithms could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

1ca1b917 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: chacha20-generic - refactor to allow varying number of rounds

In preparation for adding XChaCha12 support, rename/refactor
chacha20-generic to support different numbers of rounds. The
justification for needing XChaCha12 support is explained in more detail
in the patch "crypto: chacha - add XChaCha12 support".

The only difference between ChaCha{8,12,20} are the number of rounds
itself; all other parts of the algorithm are the same. Therefore,
remove the "20" from all definitions, structures, functions, files, etc.
that will be shared by all ChaCha versions.

Also make ->setkey() store the round count in the chacha_ctx (previously
chacha20_ctx). The generic code then passes the round count through to
chacha_block(). There will be a ->setkey() function for each explicitly
allowed round count; the encrypt/decrypt functions will be the same. I
decided not to do it the opposite way (same ->setkey() function for all
round counts, with different encrypt/decrypt functions) because that
would have required more boilerplate code in architecture-specific
implementations of ChaCha and XChaCha.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
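
The per-variant ->setkey() approach described above, sketched (simplified
from the refactored code):

    struct chacha_ctx {
        u32 key[8];
        int nrounds;
    };

    static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
                             unsigned int keysize, int nrounds)
    {
        struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
        int i;

        if (keysize != CHACHA_KEY_SIZE)
            return -EINVAL;
        for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
            ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));
        ctx->nrounds = nrounds;
        return 0;
    }

    static int chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
                               unsigned int keysize)
    {
        return chacha_setkey(tfm, key, keysize, 20);
    }

    static int chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
                               unsigned int keysize)
    {
        return chacha_setkey(tfm, key, keysize, 12);
    }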

de61d7ae 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: chacha20-generic - add XChaCha20 support

Add support for the XChaCha20 stream cipher. XChaCha20 is the
application of the XSalsa20 construction
(https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than
to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or
96 bits, depending on convention) to 192 bits, while provably retaining
ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the
key and first 128 nonce bits to a 256-bit subkey. Then, it does the
ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.

We need XChaCha support in order to add support for the Adiantum
encryption mode. Note that to meet our performance requirements, we
actually plan to primarily use the variant XChaCha12. But we believe
it's wise to first add XChaCha20 as a baseline with a higher security
margin, in case there are any situations where it can be used.
Supporting both variants is straightforward.

Since XChaCha20's subkey differs for each request, XChaCha20 can't be a
template that wraps ChaCha20; that would require re-keying the
underlying ChaCha20 for every request, which wouldn't be thread-safe.
Instead, we make XChaCha20 its own top-level algorithm which calls the
ChaCha20 streaming implementation internally.

Similar to the existing ChaCha20 implementation, we define the IV to be
the nonce and stream position concatenated together. This allows users
to seek to any position in the stream.

I considered splitting the code into separate chacha20-common, chacha20,
and xchacha20 modules, so that chacha20 and xchacha20 could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity of separate modules.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
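
An outline of the flow described above (hypothetical helper names; the
exact IV/nonce layout used by the kernel is simplified here):

    void xchacha20_crypt(const u8 key[32], const u8 nonce[24],
                         u8 *dst, const u8 *src, unsigned int len)
    {
        u8 subkey[32];
        u8 chacha_iv[16] = { 0 };   /* stream position + 96-bit nonce */

        /* HChaCha20: run the ChaCha permutation over key || nonce[0..15]
         * and keep 256 bits of the final state as the subkey. */
        hchacha20(key, nonce, subkey);                    /* hypothetical */

        /* The remaining 64 nonce bits become part of an ordinary ChaCha20
         * nonce; the stream position starts at zero. */
        memcpy(chacha_iv + 8, nonce + 16, 8);

        chacha20_crypt(subkey, chacha_iv, dst, src, len); /* hypothetical */
    }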

5e04542a 16-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: chacha20-generic - don't unnecessarily use atomic walk

chacha20-generic doesn't use SIMD instructions or otherwise disable
preemption, so passing atomic=true to skcipher_walk_virt() is
unnecessary.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

d4165590 14-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: remove useless initializations of cra_list

Some algorithms initialize their .cra_list prior to registration.
But this is unnecessary since crypto_register_alg() will overwrite
.cra_list when adding the algorithm to the 'crypto_alg_list'.
Apparently the useless assignment has just been copy+pasted around.

So, remove the useless assignments.

Exception: paes_s390.c uses cra_list to check whether the algorithm is
registered or not, so I left that as-is for now.

This patch shouldn't change any actual behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3da2c1df 11-Nov-2018 Vitaly Chikunov <vt@altlinux.org>

crypto: ecc - regularize scalar for scalar multiplication

ecc_point_mult is supposed to be used with a regularized scalar;
otherwise, it is possible to deduce the position of the scalar's top bit
via a timing attack. This is important when the scalar is a private key.

ecc_point_mult already uses a regular algorithm (i.e. one whose operation
flow is independent of the input scalar), but the regularization step is
not implemented.

Arrange scalar to always have fixed top bit by adding a multiple of the
curve order (n).

References:
The constant time regularization step is based on micro-ecc by Kenneth
MacKay and also referenced in the literature (Bernstein, D. J., & Lange,
T. (2017). Montgomery curves and the Montgomery ladder. (Cryptology
ePrint Archive; Vol. 2017/293). s.l.: IACR. Chapter 4.6.2.)

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Cc: kernel-hardening@lists.openwall.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
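
The regularization step itself is small; roughly (names follow the kernel's
vli helpers, but the snippet is illustrative rather than the exact hunk):

    u64 sk[2][ECC_MAX_DIGITS];
    u64 carry;

    carry = vli_add(sk[0], scalar, curve->n, ndigits);   /* k + n  */
    vli_add(sk[1], sk[0], curve->n, ndigits);            /* k + 2n */
    scalar = sk[!carry];       /* whichever sum has the fixed top bit */
    num_bits = sizeof(u64) * ndigits * 8 + 1;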

193188e5 08-Nov-2018 Cristian Stoica <cristian.stoica@nxp.com>

crypto: chacha20poly1305 - export CHACHAPOLY_IV_SIZE

Move CHACHAPOLY_IV_SIZE to header file, so it can be reused.

Signed-off-by: Cristian Stoica <cristian.stoica@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

25a0b9d4 06-Nov-2018 Vitaly Chikunov <vt@altlinux.org>

crypto: streebog - add Streebog test vectors

Add testmgr and tcrypt tests and vectors for Streebog hash function
from RFC 6986 and GOST R 34.11-2012, for HMAC-Streebog vectors are
from RFC 7836 and R 50.1.113-2016.

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dfdda82e 06-Nov-2018 Vitaly Chikunov <vt@altlinux.org>

crypto: streebog - register Streebog in hash info for IMA

Register Streebog hash function in Hash Info arrays to let IMA use
it for its purposes.

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fe18957e 06-Nov-2018 Vitaly Chikunov <vt@altlinux.org>

crypto: streebog - add Streebog hash function

Add GOST/IETF Streebog hash function (GOST R 34.11-2012, RFC 6986)
generic hash transformation.

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
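
A minimal in-kernel usage sketch, assuming the transform registers as
"streebog256" (the 512-bit variant as "streebog512") and that 'data'/'len'
stand in for the caller's buffer; error handling is trimmed:

    struct crypto_shash *tfm = crypto_alloc_shash("streebog256", 0, 0);
    SHASH_DESC_ON_STACK(desc, tfm);
    u8 digest[32];

    desc->tfm = tfm;          /* kernels of this era also set desc->flags */
    crypto_shash_digest(desc, data, len, digest);
    crypto_free_shash(tfm);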

ecd6d5c9 04-Nov-2018 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: cts - document NIST standard status

cts(cbc(aes)) as used in the kernel has been added to NIST
standard as CBC-CS3. Document it as such.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Suggested-by: Stephan Mueller <smueller@chronox.de>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2eb4942b 05-Nov-2018 Vitaly Chikunov <vt@altlinux.org>

crypto: ecc - check for invalid values in the key verification test

The currently used scalar multiplication algorithm (Matthieu Rivain,
2011) has invalid values for scalar == 1 and n-1 (and, for the
regularized version, n-2), which were previously not checked. Verify
that these values are not used as private keys.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

196ad604 04-Nov-2018 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: testmgr - mark cts(cbc(aes)) as FIPS allowed

As per Sp800-38A addendum from Oct 2010[1], cts(cbc(aes)) is
allowed as a FIPS mode algorithm. Mark it as such.

[1] https://csrc.nist.gov/publications/detail/sp/800-38a/addendum/final

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

37db69e0 03-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: user - clean up report structure copying

There have been a pretty ridiculous number of issues with initializing
the report structures that are copied to userspace by NETLINK_CRYPTO.
Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
expansion") replaced some strncpy()s with strlcpy()s, thereby
introducing information leaks. Later two other people tried to replace
other strncpy()s with strlcpy() too, which would have introduced even
more information leaks:

- https://lore.kernel.org/patchwork/patch/954991/
- https://patchwork.kernel.org/patch/10434351/

Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
statistics") also uses the buggy strlcpy() approach and therefore leaks
uninitialized memory to userspace. A fix was proposed, but it was
originally incomplete.

Seeing as how apparently no one can get this right with the current
approach, change all the reporting functions to:

- Start by memsetting the report structure to 0. This guarantees it's
always initialized, regardless of what happens later.
- Initialize all strings using strscpy(). This is safe after the
memset, ensures null termination of long strings, avoids unnecessary
work, and avoids the -Wstringop-truncation warnings from gcc.
- Use sizeof(var) instead of sizeof(type). This is more robust against
copy+paste errors.

For simplicity, also reuse the -EMSGSIZE return value from nla_put().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
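
The resulting pattern, sketched for one reporting function (close to,
though not necessarily identical to, the patched code):

    static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
    {
        struct crypto_report_cipher rcipher;

        memset(&rcipher, 0, sizeof(rcipher));    /* no uninitialized bytes */

        strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
        rcipher.blocksize = alg->cra_blocksize;
        rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
        rcipher.max_keysize = alg->cra_cipher.cia_max_keysize;

        return nla_put(skb, CRYPTOCFGA_REPORT_CIPHER,
                       sizeof(rcipher), &rcipher);
    }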

ed848b65 03-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: user - remove redundant reporting functions

The acomp, akcipher, and kpp algorithm types already have .report
methods defined, so there's no need to duplicate this functionality in
crypto_user itself; the duplicate functions are actually never executed.
Remove the unused code.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b1e3874c 27-Oct-2018 Colin Ian King <colin.king@canonical.com>

pcrypt: use format specifier in kobject_add

Passing string 'name' as the format specifier is potentially hazardous
because name could (although very unlikely to) have a format specifier
embedded in it causing issues when parsing the non-existent arguments
to these. Follow best practice by using the "%s" format string for
the string 'name'.

Cleans up clang warning:
crypto/pcrypt.c:397:40: warning: format string is not a string literal
(potentially insecure) [-Wformat-security]

Fixes: a3fb1e330dd2 ("pcrypt: Added sysfs interface to pcrypt")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
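
Illustrating the point (argument details simplified):

    kobject_add(&pinst->kobj, NULL, name);         /* 'name' parsed as a format string */
    kobject_add(&pinst->kobj, NULL, "%s", name);   /* 'name' treated as plain data */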

7da66670 19-Oct-2018 Dmitry Baryshkov <dbaryshkov@gmail.com>

crypto: testmgr - add AES-CFB tests

Add AES128/192/256-CFB testvectors from NIST SP800-38A.

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fa460073 19-Oct-2018 Dmitry Baryshkov <dbaryshkov@gmail.com>

crypto: cfb - fix decryption

crypto_cfb_decrypt_segment() incorrectly XOR'ed generated keystream with
IV, rather than with data stream, resulting in incorrect decryption.
Test vectors will be added in the next patch.

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
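
For one full block, CFB decryption should look roughly like this
(simplified from the crypto_cfb_decrypt_segment()-style code):

    crypto_cipher_encrypt_one(tfm, stream, iv);   /* keystream = E_k(iv)                */
    crypto_xor_cpy(dst, stream, src, bsize);      /* plaintext = keystream ^ ciphertext */
    memcpy(iv, src, bsize);                       /* next IV = this ciphertext block    */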

913a3aa0 17-Oct-2018 Eric Biggers <ebiggers@google.com>

crypto: arm/aes - add some hardening against cache-timing attacks

Make the ARM scalar AES implementation closer to constant-time by
disabling interrupts and prefetching the tables into L1 cache. This is
feasible because due to ARM's "free" rotations, the main tables are only
1024 bytes instead of the usual 4096 used by most AES implementations.

On ARM Cortex-A7, the speed loss is only about 5%. The resulting code
is still over twice as fast as aes_ti.c. Responsiveness is potentially
a concern, but interrupts are only disabled for a single AES block.

Note that even after these changes, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software. But it's valuable to make such attacks more difficult.

Much of this patch is based on patches suggested by Ard Biesheuvel.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

0a6a40c2 17-Oct-2018 Eric Biggers <ebiggers@google.com>

crypto: aes_ti - disable interrupts while accessing S-box

In the "aes-fixed-time" AES implementation, disable interrupts while
accessing the S-box, in order to make cache-timing attacks more
difficult. Previously it was possible for the CPU to be interrupted
while the S-box was loaded into L1 cache, potentially evicting the
cachelines and causing later table lookups to be time-variant.

In tests I did on x86 and ARM, this doesn't affect performance
significantly. Responsiveness is potentially a concern, but interrupts
are only disabled for a single AES block.

Note that even after this change, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software. But it's valuable to make such attacks more difficult.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
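
The shape of the hardening in both this and the previous entry, roughly:

    unsigned long flags;

    local_irq_save(flags);
    /* prefetch the S-box / tables, then do the table-driven rounds for a
     * single 16-byte block while they stay resident in L1 */
    local_irq_restore(flags);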

9f4debe3 03-Nov-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - Zeroize whole structure given to user space

To prevent uninitialized data from being given to user space (and thus
leaking potentially useful data), the crypto_stat structure must be
correctly initialized.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
[EB: also fix it in crypto_reportstat_one()]
[EB: use sizeof(var) rather than sizeof(type)]
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

f43f3995 03-Nov-2018 Eric Biggers <ebiggers@google.com>

crypto: user - fix leaking uninitialized memory to userspace

All bytes of the NETLINK_CRYPTO report structures must be initialized,
since they are copied to userspace. The change from strncpy() to
strlcpy() broke this. As a minimal fix, change it back.

Fixes: 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

508a1c4d 08-Nov-2018 Ard Biesheuvel <ardb@kernel.org>

crypto: simd - correctly take reqsize of wrapped skcipher into account

The simd wrapper's skcipher request context structure consists
of a single subrequest whose size is taken from the subordinate
skcipher. However, in simd_skcipher_init(), the reqsize that is
retrieved is not from the subordinate skcipher but from the
cryptd request structure, whose size is completely unrelated to
the actual wrapped skcipher.

Reported-by: Qian Cai <cai@gmx.us>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Qian Cai <cai@gmx.us>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

64ae16df 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Add support for the sign operation [ver #2]

The sign operation can operate in a non-hashed mode by running the RSA
sign operation directly on the input. This assumes that the input is
less than key_size_in_bytes - 11. Since the TPM performs its own PKCS1
padding, it isn't possible to support 'raw' mode, only 'pkcs1'.

Alternatively, a hashed version is also possible. In this variant the
input is hashed (by userspace) via the selected hash function first.
Then this implementation takes care of converting the hash to ASN.1
format and the sign operation is performed on the result. This is
similar to the implementation inside crypto/rsa-pkcs1pad.c.

ASN1 templates were copied from crypto/rsa-pkcs1pad.c. There seems to
be no easy way to expose that functionality, but likely the templates
should be shared somehow.

The sign operation is implemented via TPM_Sign operation on the TPM.
It is assumed that the TPM wrapped key provided uses
TPM_SS_RSASSAPKCS1v15_DER signature scheme. This allows the TPM_Sign
operation to work on data up to key_len_in_bytes - 11 bytes long.

In theory, we could also use TPM_Unbind instead of TPM_Sign, but we would
have to manually pkcs1 pad the digest first.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

e73d170f 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Implement tpm_sign [ver #2]

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

e08e6891 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Implement signature verification [ver #2]

This patch implements the verify_signature operation. The public key
portion extracted from the TPM key blob is used. The operation is
performed entirely in software using the crypto API.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

a335974a 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Implement the decrypt operation [ver #2]

This patch implements the pkey_decrypt operation using the private key
blob. The blob is first loaded into the TPM via tpm_loadkey2. Once the
handle is obtained, tpm_unbind operation is used to decrypt the data on
the TPM and the result is returned. The key loaded by tpm_loadkey2 is
then evicted via tpm_flushspecific operation.

This patch assumes that the SRK authorization is the well-known value of
20 zero bytes, and that the same holds for the key authorization of the
provided key.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

f884fe5a 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Implement tpm_unbind [ver #2]

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Reviewed-by: James Morris <james.morris@microsoft.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

0c36264a 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Add loadkey2 and flushspecific [ver #2]

This commit adds TPM_LoadKey2 and TPM_FlushSpecific operations.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: James Morris <james.morris@microsoft.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

e1ea9f86 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: trusted: Expose common functionality [ver #2]

This patch exposes some common functionality needed to send TPM commands.
Several functions from keys/trusted.c are exposed for use by the new tpm
key subtype and a module dependency is introduced.

In the future, common functionality between the trusted key type and the
asym_tpm subtype should be factored out into a common utility library.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

ad4b1eb5 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Implement encryption operation [ver #2]

This patch implements the pkey_encrypt operation. The public key
portion extracted from the TPM key blob is used. The operation is
performed entirely in software using the crypto API.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

dff5a61a 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: Implement pkey_query [ver #2]

This commit implements the pkey_query operation. This is accomplished
by utilizing the public key portion to obtain max encryption size
information for the operations that utilize the public key (encrypt,
verify). The private key size extracted from the TPM_Key data structure
is used to fill the information where the private key is used (decrypt,
sign).

The kernel uses a DER/BER format for public keys and does not support
setting the key via the raw binary form. To get around this a simple
DER/BER formatter is implemented which stores the DER/BER formatted key
and exponent in a temporary buffer for use by the crypto API.

The only exponent supported currently is 65537. This holds true for
other Linux TPM tools such as 'create_tpm_key' and
trousers-openssl_tpm_engine.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

d5e72745 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: Add parser for TPM-based keys [ver #2]

For TPM based keys, the only standard seems to be described here:
http://david.woodhou.se/draft-woodhouse-cert-best-practice.html#rfc.section.4.4

Quote from the relevant section:
"Rather, a common form of storage for "wrapped" keys is to encode the
binary TCPA_KEY structure in a single ASN.1 OCTET-STRING, and store the
result in PEM format with the tag "-----BEGIN TSS KEY BLOB-----". "

This patch implements the above behavior. It is assumed that the PEM
encoding is stripped out by userspace and only the raw DER/BER format is
provided. This is similar to how PKCS7, PKCS8 and X.509 keys are
handled.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

f8c54e1a 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: extract key size & public key [ver #2]

The parsed BER/DER blob obtained from user space contains a TPM_Key
structure. This structure has some information about the key as well as
the public key portion.

This patch extracts this information for future use.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

903be6bb 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

KEYS: asym_tpm: add skeleton for asym_tpm [ver #2]

This patch adds the basic skeleton for the asym_tpm asymmetric key
subtype.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

b3a8c8a5 09-Oct-2018 Denis Kenzior <denkenz@gmail.com>

crypto: rsa-pkcs1pad: Allow hash to be optional [ver #2]

The original pkcs1pad implementation allowed padding/unpadding of raw
RSA output. However, this was removed by
commit c0d20d22e0ad ("crypto: rsa-pkcs1pad - Require hash to be present")

This patch restores that ability, as it is needed by the asymmetric key
implementation.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>

3c58b236 09-Oct-2018 David Howells <dhowells@redhat.com>

KEYS: Implement PKCS#8 RSA Private Key parser [ver #2]

Implement PKCS#8 RSA Private Key format [RFC 5208] parser for the
asymmetric key type. For the moment, this will only support unencrypted
DER blobs. PEM and decryption can be added later.

PKCS#8 keys can be loaded like this:

openssl pkcs8 -in private_key.pem -topk8 -nocrypt -outform DER | \
keyctl padd asymmetric foo @s

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>

c08fed73 09-Oct-2018 David Howells <dhowells@redhat.com>

KEYS: Implement encrypt, decrypt and sign for software asymmetric key [ver #2]

Implement the encrypt, decrypt and sign operations for the software
asymmetric key subtype. This mostly involves offloading the call to the
crypto layer.

Note that the decrypt and sign operations require a private key to be
supplied. Encrypt (and also verify) will work with either a public or a
private key. A public key can be supplied with an X.509 certificate and a
private key can be supplied using a PKCS#8 blob:

# j=`openssl pkcs8 -in ~/pkcs7/firmwarekey2.priv -topk8 -nocrypt -outform DER | keyctl padd asymmetric foo @s`
# keyctl pkey_query $j - enc=pkcs1
key_size=4096
max_data_size=512
max_sig_size=512
max_enc_size=512
max_dec_size=512
encrypt=y
decrypt=y
sign=y
verify=y
# keyctl pkey_encrypt $j 0 data enc=pkcs1 >/tmp/enc
# keyctl pkey_decrypt $j 0 /tmp/enc enc=pkcs1 >/tmp/dec
# cmp data /tmp/dec
# keyctl pkey_sign $j 0 data enc=pkcs1 hash=sha1 >/tmp/sig
# keyctl pkey_verify $j 0 data /tmp/sig enc=pkcs1 hash=sha1
#

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>

f7c4e06e 09-Oct-2018 David Howells <dhowells@redhat.com>

KEYS: Allow the public_key struct to hold a private key [ver #2]

Put a flag in the public_key struct to indicate if the structure is holding
a private key. The private key must be held ASN.1 encoded in the format
specified in RFC 3447 A.1.2. This is the form required by crypto/rsa.c.

The software encryption subtype's verification and query functions then
need to select the appropriate crypto function to set the key.
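
A sketch of the selection this implies (the flag name is illustrative;
the akcipher setkey helpers are existing crypto API calls):

/* Load the blob as a private or public key depending on the new flag
 * (flag name illustrative): */
if (pkey->key_is_private)
        ret = crypto_akcipher_set_priv_key(tfm, pkey->key, pkey->keylen);
else
        ret = crypto_akcipher_set_pub_key(tfm, pkey->key, pkey->keylen);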

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>

82f94f24 09-Oct-2018 David Howells <dhowells@redhat.com>

KEYS: Provide software public key query function [ver #2]

Provide a query function for the software public key implementation. This
permits information about such a key to be obtained using
query_asymmetric_key() or KEYCTL_PKEY_QUERY.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>

03988490 09-Oct-2018 David Howells <dhowells@redhat.com>

KEYS: Make the X.509 and PKCS7 parsers supply the sig encoding type [ver #2]

Make the X.509 and PKCS7 parsers fill in the signature encoding type field
recently added to the public_key_signature struct.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>

5a307718 09-Oct-2018 David Howells <dhowells@redhat.com>

KEYS: Provide missing asymmetric key subops for new key type ops [ver #2]

Provide the missing asymmetric key subops for the new key type ops. These
include query, encrypt, decrypt and create signature; verify signature
already exists. Accessor functions are also provided for these:

int query_asymmetric_key(const struct key *key,
                         struct kernel_pkey_query *info);

int encrypt_blob(struct kernel_pkey_params *params,
                 const void *data, void *enc);
int decrypt_blob(struct kernel_pkey_params *params,
                 const void *enc, void *data);
int create_signature(struct kernel_pkey_params *params,
                     const void *data, void *enc);

The public_key_signature struct gains an encoding field to carry the
encoding for verify_signature().
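
For example, a caller can query a key's capabilities like this (a
sketch; the kernel_pkey_query field name is assumed from the
KEYCTL_PKEY_QUERY interface):

struct kernel_pkey_query info;
int ret;

ret = query_asymmetric_key(key, &info);
if (ret == 0)
        pr_info("key size: %u bits\n", info.key_size);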

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>

62606c22 25-Oct-2018 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Remove VLA usage
- Add cryptostat user-space interface
- Add notifier for new crypto algorithms

Algorithms:
- Add OFB mode
- Remove speck

Drivers:
- Remove x86/sha*-mb as they are buggy
- Remove pcbc(aes) from x86/aesni
- Improve performance of arm/ghash-ce by up to 85%
- Implement CTS-CBC in arm64/aes-blk, faster by up to 50%
- Remove PMULL based arm64/crc32 driver
- Use PMULL in arm64/crct10dif
- Add aes-ctr support in s5p-sss
- Add caam/qi2 driver

Others:
- Pick better transform if one becomes available in crc-t10dif"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (124 commits)
crypto: chelsio - Update ntx queue received from cxgb4
crypto: ccree - avoid implicit enum conversion
crypto: caam - add SPDX license identifier to all files
crypto: caam/qi - simplify CGR allocation, freeing
crypto: mxs-dcp - make symbols 'sha1_null_hash' and 'sha256_null_hash' static
crypto: arm64/aes-blk - ensure XTS mask is always loaded
crypto: testmgr - fix sizeof() on COMP_BUF_SIZE
crypto: chtls - remove set but not used variable 'csk'
crypto: axis - fix platform_no_drv_owner.cocci warnings
crypto: x86/aes-ni - fix build error following fpu template removal
crypto: arm64/aes - fix handling sub-block CTS-CBC inputs
crypto: caam/qi2 - avoid double export
crypto: mxs-dcp - Fix AES issues
crypto: mxs-dcp - Fix SHA null hashes and output length
crypto: mxs-dcp - Implement sha import/export
crypto: aegis/generic - fix for big endian systems
crypto: morus/generic - fix for big endian systems
crypto: lrw - fix rebase error after out of bounds fix
crypto: cavium/nitrox - use pci_alloc_irq_vectors() while enabling MSI-X.
crypto: cavium/nitrox - NITROX command queue changes.
...


89ab066d 23-Oct-2018 Karsten Graul <kgraul@linux.ibm.com>

Revert "net: simplify sock_poll_wait"

This reverts commit dd979b4df817e9976f18fb6f9d134d6bc4a3c317.

This broke tcp_poll for SMC fallback: An AF_SMC socket establishes an
internal TCP socket for the initial handshake with the remote peer.
Whenever the SMC connection cannot be established, this TCP socket is
used as a fallback. All socket operations on the SMC socket are then
forwarded to the TCP socket. In case of poll, the file->private_data
pointer references the SMC socket because the TCP socket has no file
assigned. This causes tcp_poll to wait on the wrong socket.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

22a8118d 07-Oct-2018 Michael Schupikov <michael@schupikov.de>

crypto: testmgr - fix sizeof() on COMP_BUF_SIZE

After allocation, output and decomp_output both point to memory chunks of
size COMP_BUF_SIZE. However, only the first sizeof(int) bytes are zeroed
out, because sizeof(COMP_BUF_SIZE) yields the size of the integer constant
rather than the size of the allocated memory.

The whole allocated buffer is meant to be zeroed out, so pass COMP_BUF_SIZE
to memset() directly.
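
In other words (a sketch of the before/after, using the buffer name from
the test code):

memset(output, 0, sizeof(COMP_BUF_SIZE)); /* before: clears only sizeof(int) bytes */
memset(output, 0, COMP_BUF_SIZE);         /* after: clears the whole buffer */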

Fixes: 336073840a872 ("crypto: testmgr - Allow different compression results")

Signed-off-by: Michael Schupikov <michael@schupikov.de>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4a34e3c2 01-Oct-2018 Ard Biesheuvel <ardb@kernel.org>

crypto: aegis/generic - fix for big endian systems

Use the correct __le32 annotation and accessors to perform the
single round of AES encryption performed inside the AEGIS transform.
Otherwise, tcrypt reports:

alg: aead: Test 1 failed on encryption for aegis128-generic
00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e
alg: aead: Test 1 failed on encryption for aegis128l-generic
00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28
alg: aead: Test 1 failed on encryption for aegis256-generic
00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

5a8dedfa 01-Oct-2018 Ard Biesheuvel <ardb@kernel.org>

crypto: morus/generic - fix for big endian systems

Omit the endian swabbing when folding the lengths of the assoc and
crypt input buffers into the state to finalize the tag. This is not
necessary given that the memory representation of the state is in
machine native endianness already.

This fixes an error reported by tcrypt running on a big endian system:

alg: aead: Test 2 failed on encryption for morus640-generic
00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b
00000010: 21
alg: aead: Test 2 failed on encryption for morus1280-generic
00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee
00000010: 5f

Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fd27b571 30-Sep-2018 Ard Biesheuvel <ardb@kernel.org>

crypto: lrw - fix rebase error after out of bounds fix

Due to an unfortunate interaction between commit fbe1a850b3b1
("crypto: lrw - Fix out-of bounds access on counter overflow") and
commit c778f96bf347 ("crypto: lrw - Optimize tweak computation"),
we ended up with a version of next_index() that always returns 127.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

944585a6 24-Sep-2018 Ard Biesheuvel <ardb@kernel.org>

crypto: x86/aes-ni - remove special handling of AES in PCBC mode

For historical reasons, the AES-NI based implementation of the PCBC
chaining mode uses a special FPU chaining mode wrapper template to
amortize the FPU start/stop overhead over multiple blocks.

When this FPU wrapper was introduced, it supported widely used
chaining modes such as XTS and CTR (as well as LRW), but currently,
PCBC is the only remaining user.

Since there are no known users of pcbc(aes) in the kernel, let's remove
this special driver, and rely on the generic pcbc driver to encapsulate
the AES-NI core cipher.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dfb89ab3 20-Sep-2018 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: tcrypt - add OFB functional tests

We already have OFB test vectors and tcrypt OFB speed tests.
Add OFB functional tests to tcrypt as well.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

e497c518 20-Sep-2018 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: ofb - add output feedback mode

Add a generic version of output feedback mode. We already have support for
several hardware-based transforms of this mode and the needed test
vectors, but we somehow missed adding a generic software one. Fix this now.
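
For reference, OFB turns the block cipher into a keystream generator by
repeatedly encrypting the previous cipher output; a stand-alone sketch
(not the kernel template code; the block cipher is passed in by the
caller):

/* O_0 = IV, O_i = E_K(O_{i-1}), C_i = P_i XOR O_i; decryption is the
 * same operation. */
static void ofb_crypt(void (*encrypt_block)(const void *key, u8 *block),
                      const void *key, u8 *iv, const u8 *src, u8 *dst,
                      unsigned int nblocks, unsigned int bsize)
{
        unsigned int i, j;

        for (i = 0; i < nblocks; i++) {
                encrypt_block(key, iv);  /* iv now holds the next keystream block */
                for (j = 0; j < bsize; j++)
                        dst[i * bsize + j] = src[i * bsize + j] ^ iv[j];
        }
}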

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

95ba5973 20-Sep-2018 Gilad Ben-Yossef <gilad@benyossef.com>

crypto: testmgr - update sm4 test vectors

Add additional test vectors from "The SM4 Blockcipher Algorithm And Its
Modes Of Operations" draft-ribose-cfrg-sm4-10 and register cipher speed
tests for sm4.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

4d407b04 19-Sep-2018 Horia Geantă <horia.geanta@nxp.com>

crypto: tcrypt - remove remnants of pcomp-based zlib

Commit 110492183c4b ("crypto: compress - remove unused pcomp interface")
removed pcomp interface but missed cleaning up tcrypt.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

cac5818c 19-Sep-2018 Corentin Labbe <clabbe@baylibre.com>

crypto: user - Implement a generic crypto statistics

This patch implements a generic way to get statistics about all crypto
usage.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

36b3875a 18-Sep-2018 Kees Cook <keescook@chromium.org>

crypto: cryptd - Remove VLA usage of skcipher

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

8d605398 18-Sep-2018 Kees Cook <keescook@chromium.org>

crypto: null - Remove VLA usage of skcipher

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

b350bee5 18-Sep-2018 Kees Cook <keescook@chromium.org>

crypto: skcipher - Introduce crypto_sync_skcipher

In preparation for removing VLAs caused by on-stack skcipher requests
via SKCIPHER_REQUEST_ON_STACK(), this introduces the infrastructure for
the "sync skcipher" tfm. It handles the on-stack uses of skcipher, which
are always non-ASYNC and have a known, limited request size.

The crypto API additions:

struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
crypto_alloc_sync_skcipher()
crypto_free_sync_skcipher()
crypto_sync_skcipher_setkey()
crypto_sync_skcipher_get_flags()
crypto_sync_skcipher_set_flags()
crypto_sync_skcipher_clear_flags()
crypto_sync_skcipher_blocksize()
crypto_sync_skcipher_ivsize()
crypto_sync_skcipher_reqtfm()
skcipher_request_set_sync_tfm()
SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
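
A rough sketch of how these fit together for an on-stack request
(assuming a "cbc(aes)" transform; the request setters and the encrypt
call are the existing skcipher API, the key/scatterlist/iv variables are
placeholders, and error handling is mostly elided):

struct crypto_sync_skcipher *tfm;
int err;

tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
if (IS_ERR(tfm))
        return PTR_ERR(tfm);
err = crypto_sync_skcipher_setkey(tfm, key, keylen);

{
        SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);   /* fixed size, no VLA */

        skcipher_request_set_sync_tfm(req, tfm);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, src_sg, dst_sg, len, iv);
        err = crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
}
crypto_free_sync_skcipher(tfm);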

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

3944f139 17-Sep-2018 Dan Aloni <dan@kernelim.com>

crypto: fix a memory leak in rsa-kcs1pad's encryption mode

The encryption mode of pkcs1pad never uses out_sg and out_buf, so
there's no need to allocate the buffer, which presently is not even
being freed.

CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: linux-crypto@vger.kernel.org
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

ac3c8f36 13-Sep-2018 Ondrej Mosnacek <omosnace@redhat.com>

crypto: lrw - Do not use auxiliary buffer

This patch simplifies the LRW template to recompute the LRW tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc().

As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

The results show that the new code has about the same performance as the
old code. For 512-byte message it seems to be even slightly faster, but
that might be just noise.

Before:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 200 203
lrw(aes) 320 64 202 204
lrw(aes) 384 64 204 205
lrw(aes) 256 512 415 415
lrw(aes) 320 512 432 440
lrw(aes) 384 512 449 451
lrw(aes) 256 4096 1838 1995
lrw(aes) 320 4096 2123 1980
lrw(aes) 384 4096 2100 2119
lrw(aes) 256 16384 7183 6954
lrw(aes) 320 16384 7844 7631
lrw(aes) 384 16384 8256 8126
lrw(aes) 256 32768 14772 14484
lrw(aes) 320 32768 15281 15431
lrw(aes) 384 32768 16469 16293

After:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 197 196
lrw(aes) 320 64 200 197
lrw(aes) 384 64 203 199
lrw(aes) 256 512 385 380
lrw(aes) 320 512 401 395
lrw(aes) 384 512 415 415
lrw(aes) 256 4096 1869 1846
lrw(aes) 320 4096 2080 1981
lrw(aes) 384 4096 2160 2109
lrw(aes) 256 16384 7077 7127
lrw(aes) 320 16384 7807 7766
lrw(aes) 384 16384 8108 8357
lrw(aes) 256 32768 14111 14454
lrw(aes) 320 32768 15268 15082
lrw(aes) 384 32768 16581 16250

[1] https://lkml.org/lkml/2018/8/23/1315

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

c778f96b 13-Sep-2018 Ondrej Mosnacek <omosnace@redhat.com>

crypto: lrw - Optimize tweak computation

This patch rewrites the tweak computation to a slightly simpler method
that performs fewer bswaps. Based on performance measurements, the new
code seems to provide slightly better performance than the old one.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

Before:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 204 286
lrw(aes) 320 64 227 203
lrw(aes) 384 64 208 204
lrw(aes) 256 512 441 439
lrw(aes) 320 512 456 455
lrw(aes) 384 512 469 483
lrw(aes) 256 4096 2136 2190
lrw(aes) 320 4096 2161 2213
lrw(aes) 384 4096 2295 2369
lrw(aes) 256 16384 7692 7868
lrw(aes) 320 16384 8230 8691
lrw(aes) 384 16384 8971 8813
lrw(aes) 256 32768 15336 15560
lrw(aes) 320 32768 16410 16346
lrw(aes) 384 32768 18023 17465

After:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 200 203
lrw(aes) 320 64 202 204
lrw(aes) 384 64 204 205
lrw(aes) 256 512 415 415
lrw(aes) 320 512 432 440
lrw(aes) 384 512 449 451
lrw(aes) 256 4096 1838 1995
lrw(aes) 320 4096 2123 1980
lrw(aes) 384 4096 2100 2119
lrw(aes) 256 16384 7183 6954
lrw(aes) 320 16384 7844 7631
lrw(aes) 384 16384 8256 8126
lrw(aes) 256 32768 14772 14484
lrw(aes) 320 32768 15281 15431
lrw(aes) 384 32768 16469 16293

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

dc6d6d5a 13-Sep-2018 Ondrej Mosnacek <omosnace@redhat.com>

crypto: testmgr - Add test for LRW counter wrap-around

This patch adds a test vector for lrw(aes) that triggers wrap-around of
the counter, which is a tricky corner case.

Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

fbe1a850 13-Sep-2018 Ondrej Mosnacek <omosnace@redhat.com>

crypto: lrw - Fix out-of bounds access on counter overflow

When the LRW block counter overflows, the current implementation returns
128 as the index to the precomputed multiplication table, which has 128
entries. This patch fixes it to return the correct value (127).
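
The idea of the fix as a stand-alone sketch (not the kernel's
next_index(), but the same logic):

/* Index for one increment of the 128-bit block counter: the position of
 * the lowest 0 bit that flips to 1, or 127 on a full wrap from all ones
 * to all zeros (previously 128, one past the end of the table). */
static int lrw_next_index(u32 counter[4])
{
        int i, res = 0;

        for (i = 0; i < 4; i++) {
                if (counter[i] + 1 != 0)
                        return res + __builtin_ctz(~counter[i]++);
                counter[i] = 0;
                res += 32;
        }
        return 127;     /* clamp to the last table entry */
}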

Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

331351f8 12-Sep-2018 Horia Geantă <horia.geanta@nxp.com>

crypto: tcrypt - fix ghash-generic speed test

ghash is a keyed hash algorithm, thus setkey needs to be called.
Otherwise the following error occurs:
$ modprobe tcrypt mode=318 sec=1
testing speed of async ghash-generic (ghash-generic)
tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
tcrypt: hashing failed ret=-126
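
A minimal illustration of the missing step (variable names hypothetical;
ghash takes a 16-byte key):

ret = crypto_ahash_setkey(tfm, key, 16);    /* must be done before hashing */
if (ret)
        return ret;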

Cc: <stable@vger.kernel.org> # 4.6+
Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash")
Tested-by: Franck Lenormand <franck.lenormand@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

a5e9f557 11-Sep-2018 Eric Biggers <ebiggers@google.com>

crypto: chacha20 - Fix chacha20_block() keystream alignment (again)

In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment. So, while my commit didn't break anything, it didn't fully
solve the alignment problems.

Revert my solution and just update chacha20_block() to use
put_unaligned_le32(), so the output buffer need not be aligned.
This is simpler, and on many CPUs it's the same speed.
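
Roughly, the output loop becomes something like this (a sketch of the
approach with illustrative variable names, not necessarily the exact
code):

/* Write each 32-bit keystream word with an unaligned little-endian
 * store, so the u8 output buffer needs no particular alignment: */
for (i = 0; i < 16; i++)
        put_unaligned_le32(x[i] + state[i], &stream[i * sizeof(u32)]);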

But, I kept the 'tmp' buffers in extract_crng_user() and
_get_random_bytes() 4-byte aligned, since that alignment is actually
needed for _crng_backtrack_protect() too.

Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

78105c7e 11-Sep-2018 Ondrej Mosnacek