Right now `secp256k1_ec_pubkey_decompress` takes an in/out pointer to
a public key and replaces the input key with its decompressed variant.
This forces users who store compressed keys in small (<65 byte) fixed-size
buffers (for example, the Rust bindings do this) to explicitly
and wastefully copy their key to a larger buffer.
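For illustration, a minimal caller-side sketch, assuming the current
in/out signature looks roughly like the prototype below (the exact
signature is an assumption here, not a quote of the header):

    #include <string.h>

    /* Assumed shape of the current in/out API, for illustration only. */
    int secp256k1_ec_pubkey_decompress(unsigned char *pubkey, int *pubkeylen);

    void example(const unsigned char stored[33]) {
        /* The key lives in a 33-byte buffer, but the in/out API forces a
         * copy into a 65-byte scratch buffer, because the decompressed
         * result overwrites the input in place. */
        unsigned char scratch[65];
        int len = 33;
        memcpy(scratch, stored, 33);
        if (secp256k1_ec_pubkey_decompress(scratch, &len)) {
            /* scratch now holds the uncompressed key, len == 65 */
        }
    }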
[API BREAK]
This computes (n-b)G + bG with a random value b, in place of nG in
ecmult_gen() for signing.
This is intended to reduce exposure to potential power/EMI sidechannels
during signing and pubkey generation by blinding the secret value with
another value which is hopefully unknown to the attacker.
It may not be very helpful if the attacker is able to observe the setup
or if even the scalar addition has an unacceptable leak, but it has low
overhead in any case and the security should be purely additive on top
of the existing defenses against sidechannels.
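As a toy illustration of the identity being relied on (integers mod a
small prime stand in for scalars and points; none of this is the
library's internal API):

    #include <stdint.h>
    #include <stdio.h>

    #define P 2147483647u /* toy group order standing in for the curve order */

    int main(void) {
        uint64_t g = 16807;     /* stand-in for the generator G */
        uint64_t n = 123456789; /* the secret scalar            */
        uint64_t b = 987654321; /* the random blinding value    */

        uint64_t direct  = (n * g) % P;                         /* nG          */
        uint64_t blinded = (((n + P - b) % P) * g + b * g) % P; /* (n-b)G + bG */

        /* Both results are nG, but the blinded form never multiplies by n
         * directly. */
        printf("%llu %llu\n", (unsigned long long)direct,
                              (unsigned long long)blinded);
        return 0;
    }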
Goto, multiple returns, continue, and/or multiple breaks in a
loop are often used to build complex or non-local control
flow in software.
(They're all basically the same thing, and anyone axiomatically
opposing goto and not the rest is probably cargo-culting from
the title of Dijkstra's essay without thinking hard about it.)
Personally, I think the current use of these constructs in the
code base is fine: nowhere are we using them to create control
flow that couldn't easily be described in plain English, that is
hard to read or reason about, or that looks like a trap for
future developers.
Some, however, prefer a more rules-based approach to software
quality. In particular, MISRA forbids all of these constructs,
and for good, experience-based reasons. Rules also have the
benefit of being machine-checkable and of surviving individual
developers.
(To be fair, MISRA also has a process for accommodating code that
breaks the rules for good reason.)
I think that in general we should also try to satisfy the
rules-based measures of software quality, except where there is an
objective reason not to: a measurable performance difference,
logic that turns to spaghetti, etc.
Changing out all the multiple returns in secp256k1.c appears to
be basically neutral: Some parts become slightly less clear,
some parts slightly more.
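The mechanical shape of that change is roughly the following; the
functions below are invented for illustration and do not come from
secp256k1.c:

    #include <stddef.h>

    /* Hypothetical helper so the sketch is self-contained. */
    int do_check(const unsigned char *in) { return in[0] == 0x02; }

    /* Multiple returns: each precondition failure exits immediately. */
    int check_multi(const unsigned char *in, int inlen) {
        if (in == NULL) {
            return 0;
        }
        if (inlen != 32) {
            return 0;
        }
        return do_check(in);
    }

    /* Single return: the result is accumulated and returned once. */
    int check_single(const unsigned char *in, int inlen) {
        int ret = 0;
        if (in != NULL && inlen == 32) {
            ret = do_check(in);
        }
        return ret;
    }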
GCC (and clang) supports extensions to annotate functions so that their
results must be used and so that certain arguments must not be null.
If a caller can be statically shown to violate these requirements,
they get a warning, which helps them write correct code.
I deployed this in libopus a couple years ago with good success, and
the implementation here is basically copied straight from that.
One consideration is that the non-null annotation teaches the optimizer,
which will actually compile out runtime non-nullness checks as dead code.
Since this is usually not what's wanted, the non-null annotations are
disabled when compiling the library itself.
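A minimal sketch of what such macros can look like; the macro names,
the SECP256K1_BUILD guard, and the example function are assumptions
used for illustration:

    /* Warn when a caller ignores the return value. */
    #if defined(__GNUC__) && (__GNUC__ >= 3)
    # define SECP256K1_WARN_UNUSED_RESULT __attribute__ ((__warn_unused_result__))
    #else
    # define SECP256K1_WARN_UNUSED_RESULT
    #endif

    /* Only expose the non-null annotation to callers: inside the library
     * (assumed to define SECP256K1_BUILD) it would let the optimizer
     * remove the runtime NULL checks as dead code. */
    #if !defined(SECP256K1_BUILD) && defined(__GNUC__) && (__GNUC__ >= 4)
    # define SECP256K1_ARG_NONNULL(_x) __attribute__ ((__nonnull__(_x)))
    #else
    # define SECP256K1_ARG_NONNULL(_x)
    #endif

    /* Hypothetical API function using both annotations. */
    SECP256K1_WARN_UNUSED_RESULT int example_verify(
        const unsigned char *key32
    ) SECP256K1_ARG_NONNULL(1);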
The commit also removes some dead inclusions of assert.h and introduces
compatibility macros for restrict and inline in preparation for some
portability improvements.
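The restrict/inline shims would look something like this sketch (macro
names and compiler checks are illustrative):

    #if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)
    /* C99 or later: the keywords are available as-is. */
    # define SECP256K1_RESTRICT restrict
    # define SECP256K1_INLINE   inline
    #elif defined(__GNUC__)
    /* GCC in C89 mode still provides them under reserved spellings. */
    # define SECP256K1_RESTRICT __restrict__
    # define SECP256K1_INLINE   __inline__
    #else
    /* Unknown compiler: drop the qualifiers; the code stays correct,
     * just potentially a bit slower. */
    # define SECP256K1_RESTRICT
    # define SECP256K1_INLINE
    #endif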
This makes a basic effort and has not been audited.
Doesn't appear to have a measurable performance impact on the benchmarks.
It also adds a secp256k1_num_free call to secp256k1_ecdsa_pubkey_create.
The documentation implies that this check is happening, so make it so.
Without this check, passing an invalid nonce will trigger an internal assertion.
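A standalone sketch of the kind of check meant here, validating that a
32-byte nonce is non-zero and below the group order; the helper name and
the byte-level approach are illustrative, the real code works on the
scalar type:

    #include <stddef.h>

    /* secp256k1 group order, big-endian. */
    static const unsigned char order[32] = {
        0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
        0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFE,
        0xBA,0xAE,0xDC,0xE6,0xAF,0x48,0xA0,0x3B,
        0xBF,0xD2,0x5E,0x8C,0xD0,0x36,0x41,0x41
    };

    /* Returns 1 if the nonce is usable (non-zero and < order), else 0,
     * so the caller can fail gracefully instead of hitting an assertion. */
    int nonce_is_valid(const unsigned char nonce32[32]) {
        size_t i;
        int nonzero = 0;
        for (i = 0; i < 32; i++) {
            if (nonce32[i] != 0) nonzero = 1;
        }
        if (!nonzero) return 0;
        for (i = 0; i < 32; i++) { /* big-endian compare against the order */
            if (nonce32[i] < order[i]) return 1;
            if (nonce32[i] > order[i]) return 0;
        }
        return 0; /* equal to the order: invalid */
    }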