32db15450a gui: make '-min' minimize wallet loading dialog (furszy)
Pull request description:
Simple fix for #748.
When '-min' is enabled, no loading dialog should
be presented on screen during startup.
ACKs for top commit:
hebasto:
ACK 32db15450a, tested on Debian 11 + XFCE.
Tree-SHA512: d08060b044938c67e8309db77b49ca645850fc21fdd7d78d5368d336fb9f602dcc66ea398a7505b00bf7d43afa07108347c7260480319fad3ec84cb41332f780
Today, this code only has one spot where it needs well-formed pubkeys,
but future PRs will want to reuse this code.
Add a function which creates a well-formed byte array that can be turned
into a pubkey. It is not required that the pubkey is valid, just that it
can be recognized as a compressed or uncompressed pubkey.
Note: while the main intent of this commit is to wrap the existing
logic into a function, it also switches to `PickValueFromArray` so that
we only choose one of 0x04, 0x06, or 0x07. The previous code,
`ConsumeIntegralInRange`, could also have picked 0x05, which is not
defined in the context of compressed vs uncompressed keys.
See https://bitcoin.stackexchange.com/questions/57855/c-secp256k1-what-do-prefixes-0x06-and-0x07-in-an-uncompressed-public-key-signif
for more details.
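For illustration, a minimal sketch of such a helper (the helper name and the exact FuzzedDataProvider calls shown here are illustrative assumptions, not necessarily the in-tree ones):

```cpp
// Hypothetical sketch: produce bytes that parse as a compressed or
// uncompressed pubkey, without requiring a valid curve point.
#include <test/fuzz/FuzzedDataProvider.h>  // vendored LLVM FuzzedDataProvider
#include <cstdint>
#include <vector>

std::vector<uint8_t> ConsumeWellFormedPubKeyBytes(FuzzedDataProvider& provider)
{
    std::vector<uint8_t> bytes;
    if (provider.ConsumeBool()) {
        // Compressed: 33 bytes, prefix 0x02 or 0x03.
        bytes.push_back(provider.PickValueInArray<uint8_t>({0x02, 0x03}));
        const auto rest{provider.ConsumeBytes<uint8_t>(32)};
        bytes.insert(bytes.end(), rest.begin(), rest.end());
    } else {
        // Uncompressed or hybrid: 65 bytes, prefix 0x04, 0x06 or 0x07
        // (0x05 is deliberately excluded, as it is not defined).
        bytes.push_back(provider.PickValueInArray<uint8_t>({0x04, 0x06, 0x07}));
        const auto rest{provider.ConsumeBytes<uint8_t>(64)};
        bytes.insert(bytes.end(), rest.begin(), rest.end());
    }
    return bytes;
}
```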
This comment isn't in the right place, as detection of a tx in
recent_rejects would cause the function to exit much earlier.
Move the comment to the right place and tweak the first sentence for
accuracy.
- Whenever a tx is erased. Allows somebody to see which transactions
have been erased due to expiry/overflow, not just how many.
- Whenever a tx is added to a peer's workset.
- AcceptToMemoryPool when a tx is accepted, mirroring the one logged for
a tx received from a peer. This allows someone to see all of the
transactions that are accepted to mempool just by looking for ATMP logs.
- MEMPOOLREJ when a tx is rejected, mirroring the one logged for
a tx received from a peer. This allows someone to see all of the
transaction rejections by looking at MEMPOOLREJ logs.
13eb8aa572 doc: Release notes for testnet defaulting to -acceptnonstdtxn=0 (Anthony Towns)
e1dc15d690 config: default acceptnonstdtxn=0 on all chains (Anthony Towns)
Pull request description:
Changes `-acceptnonstdtxn` to default to 0 on testnet, matching the other chains. Allowing non-standard txs on testnet by default contributed to the difficulties RSK described in #26348: "We see that there are two script paths and, to reduce the script size, a single CHECKMULTISIG is used for the two paths, separating the signer count from the CHECKMULTISIG opcode. This script worked on testnet, because it lacks the standard checks performed in Mainnet."
ACKs for top commit:
MarcoFalke:
lgtm ACK 13eb8aa572
sipa:
utACK 13eb8aa572
instagibbs:
utACK 13eb8aa572
theStack:
Code-review ACK 13eb8aa572
Tree-SHA512: eff7a3f9fc9b94003a730beb96e6f3399bc8b8e93fde4b15f20a11eda61d9a3e076f4423989f98b794b32681abecbc3756a54cd0d37b136e2fb2ffbb47ee7774
The moved part can be reviewed with the git options
--ignore-all-space --color-moved=dimmed-zebra --color-moved-ws=ignore-all-space
(Modified by Marco Falke)
Co-authored-by: Pieter Wuille <pieter@wuille.net>
Previously, acceptnonstdtxn defaulted to 0 on all
chains except testnet. Change this to be consistent across all
chains, and remove the parameter from chainparams entirely.
`Sock::Get()` was used only in `sock.{cpp,h}`. Remove it and access
`Sock::m_socket` directly.
Unit tests that used `Get()` to test for equality still verify that the
behavior is correct by using the added `operator==()`.
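A simplified sketch of the idea (the real `Sock` class and the exact `operator==()` overload live in src/util/sock.h and may differ):

```cpp
// Simplified sketch: equality against a raw socket value replaces Get()-based checks.
using SOCKET = int;  // simplification; see compat.h for the real typedef

class SockSketch
{
public:
    explicit SockSketch(SOCKET s) : m_socket{s} {}
    bool operator==(SOCKET s) const { return m_socket == s; }

private:
    SOCKET m_socket;  // accessed directly inside the class, no Get() needed
};

// In a unit test, instead of BOOST_CHECK_EQUAL(sock.Get(), fd):
//   BOOST_CHECK(sock == fd);
```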
When estimating the maximum size of an input, we were assuming the
number of elements on the witness stack could be encoded in a single
byte. This is a valid approximation for all the descriptors we support
(including P2WSH Miniscript ones), but may not hold anymore once we
support Miniscript within Taproot descriptors (since the max standard
witness stack size of 100 gets lifted).
It's low-hanging fruit to account for it correctly, so just do it now.
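For reference, a self-contained sketch of the accounting, mirroring the CompactSize rules implemented by `GetSizeOfCompactSize()` in serialize.h:

```cpp
// Size, in bytes, of the CompactSize encoding of the witness stack element
// count; previously this was assumed to always be 1.
#include <cstddef>
#include <cstdint>

size_t WitnessStackCountSerializedSize(uint64_t stack_elements)
{
    if (stack_elements < 253) return 1;          // single byte
    if (stack_elements <= 0xFFFF) return 3;      // 0xFD marker + uint16_t
    if (stack_elements <= 0xFFFFFFFF) return 5;  // 0xFE marker + uint32_t
    return 9;                                    // 0xFF marker + uint64_t
}
```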
Instead of using the dummysigner to compute a placeholder satisfaction,
infer a descriptor on the scriptPubKey of the coin being spent and use
the estimation of the satisfaction size given by the descriptor
directly.
Note this (almost, see next paragraph) exactly preserves the previous
behaviour. For instance, CalculateMaximumSignedInputSize previously
assumed the input would be spent in a transaction that spends at least one
SegWit coin, since it always accounted for the serialization of the
number of witness elements.
In this commit we use a placeholder for the size of the serialization of
the witness stack size (1 byte). Since the logic in this commit is
already tricky enough to review, and this is only a very tiny
approximation not observable through the existing tests, it is addressed
in the next commit.
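A sketch of the new estimation path (the descriptor accessor is assumed here to be `MaxSatisfactionWeight()`; treat the exact name and signature as an assumption and refer to script/descriptor.h):

```cpp
#include <script/descriptor.h>
#include <script/signingprovider.h>

#include <memory>
#include <optional>

std::optional<int64_t> MaxInputSatisfactionWeight(const CScript& script_pubkey,
                                                  const SigningProvider& provider,
                                                  bool use_max_sig)
{
    // Infer a descriptor for the coin being spent from the available solving data.
    const std::unique_ptr<Descriptor> desc{InferDescriptor(script_pubkey, provider)};
    if (!desc) return std::nullopt;
    // Ask the descriptor for its worst-case satisfaction weight instead of
    // dry-running the signing logic with a dummy signer.
    return desc->MaxSatisfactionWeight(use_max_sig);
}
```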
It is sometimes useful to interface with multiple signing providers at
once. For instance when inferring a descriptor with solving information
being provided from multiple sources (see next commit).
Instead of inefficiently copying the information from one provider into
the other, introduce a new signing provider that takes a list of
pointers to existing providers.
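A simplified illustration of such a provider (the ownership model and the set of forwarded methods are illustrative; the in-tree class may differ):

```cpp
#include <script/signingprovider.h>

#include <memory>
#include <utility>
#include <vector>

// Consults a list of underlying providers in order, without copying their contents.
class MultiProviderSketch final : public SigningProvider
{
    std::vector<std::unique_ptr<SigningProvider>> m_providers;

public:
    void AddProvider(std::unique_ptr<SigningProvider> provider)
    {
        m_providers.push_back(std::move(provider));
    }

    bool GetPubKey(const CKeyID& keyid, CPubKey& pubkey) const override
    {
        for (const auto& provider : m_providers) {
            if (provider->GetPubKey(keyid, pubkey)) return true;
        }
        return false;
    }
    // GetKey(), GetCScript(), GetKeyOrigin(), ... would be forwarded the same way.
};
```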
In the wallet code, we are currently estimating the size of a signed
input by doing a dry run of the signing logic. This is unnecessary as
all outputs we are able to sign for can be represented by a descriptor,
and we can derive the size of a satisfaction ("signature") from the
descriptor itself directly.
In addition, this approach does not scale: getting the size of a
satisfaction through a dry run of the signing logic is only possible for
the most basic scripts.
This commit introduces the computation of the size of satisfaction per
descriptor. It's a bit intricate for 2 main reasons:
- We want to preserve the behaviour of the current dry-run logic used by
the wallet that sometimes assumes ECDSA signatures will be low-r,
sometimes not (when we don't create them).
- We need to account for the witness discount. A single descriptor may
sometimes benefit from it, sometimes not (for instance `pk()` if used as
top-level versus if used inside `wsh()`).
Similarly to how we compute the maximum stack size.
Also note that it would be quite expensive to recompute it recursively
by accounting for different ECDSA signature sizes, so we just assume
high-R everywhere. It's only a trivial difference anyway.
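To make the witness-discount point concrete, a small self-contained illustration (the constant matches WITNESS_SCALE_FACTOR in consensus/consensus.h):

```cpp
#include <cstdint>

constexpr int64_t WITNESS_SCALE_FACTOR{4};

// The same satisfaction bytes cost 4 weight units each in the scriptSig
// (e.g. top-level `pk()`), but only 1 weight unit each in the witness
// (e.g. `pk()` inside `wsh()`).
int64_t SatisfactionWeight(int64_t sat_bytes, bool in_witness)
{
    return in_witness ? sat_bytes : sat_bytes * WITNESS_SCALE_FACTOR;
}

// Weight is converted to virtual bytes by rounding up.
int64_t WeightToVsize(int64_t weight)
{
    return (weight + WITNESS_SCALE_FACTOR - 1) / WITNESS_SCALE_FACTOR;
}
```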
as string_view is trivially copyable, and in these use cases we
only perform read operations on the passed object.
These utility methods are called by quite a few RPCs and tests, as well as by each other.
$ git grep "ParseHashV\|ParseHashO\|ParseHexV\|ParseHexO" | wc -l
61
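A generic illustration of the parameter-type change (the commented signature is an assumption about the affected helpers, not a quote of the actual declarations):

```cpp
#include <string>
#include <string_view>

// Before: void Helper(const UniValue& v, const std::string& name);
// After:  void Helper(const UniValue& v, std::string_view name);

void LogName(std::string_view name)
{
    // Only reads `name`; passing string_view by value copies a pointer and a
    // length, never the character data.
}

void Caller()
{
    const std::string s{"txid"};
    LogName(s);       // no allocation or copy of the contents
    LogName("vout");  // works for string literals too
}
```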
b3a93b409e test: add functional test for deadlock situation (Martin Zumsande)
3557aa4d0a test: add basic tests for sendmsgtopeer to rpc_net.py (Martin Zumsande)
a9a1d69391 rpc: add test-only sendmsgtopeer rpc (Martin Zumsande)
Pull request description:
This adds a `sendmsgtopeer` rpc (for testing only) that allows a node to send a message (provided in hex) to a peer.
While we would usually use a `p2p` object instead of a node for this in the test framework, that isn't possible in situations where this message needs to trigger an actual interaction of multiple nodes.
Use this rpc to add test coverage for the bug fixed in #27981 (that just got merged):
The test lets two nodes (almost) simultaneously send a single large (4MB) p2p message to each other, which would have caused a deadlock previously (making this test fail), but succeeds now.
As can be seen from the discussion in #27981, it was not easy to reproduce this bug without `sendmsgtopeer`. I would imagine that `sendmsgtopeer` could also be helpful in various other test scenarios.
ACKs for top commit:
ajtowns:
ACK b3a93b409e
sipa:
ACK b3a93b409e
achow101:
ACK b3a93b409e
Tree-SHA512: 6e22e72402f3c4dd70cddb9e96ea988444720f7a164031df159fbdd48056c8ac77ac53def045d9208a3ca07437c7c8e34f8b4ebc7066c0a84d81cd53f2f4fa5f
c8e066461b doc: Improve documentation of rpcallowip rpchelp (willcl-ark)
Pull request description:
Closes #21070
v21.0 introduced a behaviour change, noted in #21070, where using the config value `rpcallowip=::0` no longer also permitted IPv4 addresses.
The rpc_bind.py functional test already covers this new behaviour by checking that the list of bind addresses exactly matches what is expected, so this commit only updates the documentation.
ACKs for top commit:
achow101:
ACK c8e066461b
pinheadmz:
ACK c8e066461b
jonatack:
ACK c8e066461b
Tree-SHA512: 332060cf0df0427c6637a9fd1e0783ce0b0940abdb41b0df13f03bfbdc28af067cec8f0b1bbc4e47b3d54fa1b2f110418442b05b39d5e7c7e0b96744ddd7c003
bf26f978ff fuzz: coinselection, fix `m_cost_of_change` (brunoerg)
6d9b26d56a fuzz: coinselection, BnB should never produce change (brunoerg)
b2eb558407 fuzz: coinselection, compare `GetSelectedValue` with target (brunoerg)
0df0438c60 fuzz: coinselection, improve `ComputeAndSetWaste` (brunoerg)
1e351e5db1 fuzz: coinselection, add coverage for `Merge` (brunoerg)
f0244a8614 fuzz: coinselection, add coverage for `GetShuffledInputVector`/`GetInputSet` (brunoerg)
808618b8a2 fuzz: coinselection, add coverage for `AddInputs` (brunoerg)
90c4e6a241 fuzz: coinselection, add coverage for `EligibleForSpending` (brunoerg)
2a031cb2c2 fuzz: coinselection, add `CreateCoins` (brunoerg)
Pull request description:
This PR:
- Moves coin creation to its own function called `CreateCoins`.
- Adds coverage for `EligibleForSpending`.
- Adds coverage for `AddInputs`: get the result of each algorithm (SRD, knapsack and BnB), call `CreateCoins` and add the created coins to it.
- Adds coverage for `GetShuffledInputVector` and `GetInputSet` using the result of each algorithm (SRD, knapsack and BnB).
- Adds coverage for `Merge`: call SRD with the new UTXOs and, if successful, try to merge with the previous SRD result.
ACKs for top commit:
murchandamus:
reACK with some minimal fuzzing bf26f978ff
achow101:
ACK bf26f978ff
furszy:
re-ACK bf26f97
Tree-SHA512: bdd2b0a39de37be0a9b21a7c51260b6b8abe538cc0ea74312eb658b90a121a1ae07306c09fb0e75e93b531ce9ea2402feb041b0d852902d07739257f792e64ab
8a3b6f3387 refactor: make Transport::ReceivedBytes just return success/fail (Pieter Wuille)
bb4aab90fd net: move message conversion to wire bytes from PushMessage to SocketSendData (Pieter Wuille)
a1a1060fd6 net: measure send buffer fullness based on memory usage (Pieter Wuille)
009ff8d650 fuzz: add bidirectional fragmented transport test (Pieter Wuille)
fb2c5edb79 net: make V1Transport implicitly use current chainparams (Pieter Wuille)
0de48fe858 net: abstract sending side of transport serialization further (Pieter Wuille)
649a83c7f7 refactor: rename Transport class receive functions (Pieter Wuille)
27f9ba23ef net: add V1Transport lock protecting receive state (Pieter Wuille)
93594e42c3 refactor: merge transport serializer and deserializer into Transport class (Pieter Wuille)
Pull request description:
This PR furthers the P2P message serialization/deserialization abstraction introduced in #16202 and #16562, in preparation for introducing the BIP324 v2 transport (making this part of #27634). However, nothing in this PR is BIP324-specific, and it contains a number of independently useful improvements.
The overall idea is to have a single object in every `CNode` (called `m_transport`) that is responsible for converting sent messages to wire bytes, and for converting received wire bytes back to messages, while having as little as possible knowledge about this conversion process in higher-level net code. To accomplish that, there is an abstract `Transport` class with (currently) a single `V1Transport` implementation.
Structurally, the above is accomplished by:
* Merging the `TransportDeserializer` and `TransportSerializer` classes into a single `Transport` class, which encompasses both the sending and receiving side. For `V1Transport` these two sides are entirely separate, but this assumption doesn't hold for the BIP324 transport where e.g. the sending encryption key depends on the DH key negotiation data received from the other side. Merging the two means a future `V2Transport` can handle all this interaction without callers needing to be aware.
* Removing the assumption that each message is sent using a computed header followed by (unmodified) data bytes. To achieve that, the sending side of `Transport` mirrors what the receiver side does: callers can set a message to be sent, then ask what bytes must be sent out, and then transition to the next message.
* Adding internal locks to protect the sending and receiving state of the `V1Transport` implementation. I believe these aren't strictly needed (opinions welcome) as there is no real way to use `Transport` objects in a multi-threaded fashion without some form of external synchronization (e.g. "get next bytes to send" isn't meaningful to call from multiple threads at the same time without a mechanism to control the order they'll actually get sent). Still, I feel it's cleaner to make the object responsible for its own consistency (as we definitely do not want the entire object to be under a single external GUARDED_BY, as that'd prevent simultaneous sending and receiving).
* Moving the conversion of messages to bytes on the sending side from `PushMessage` to `SocketSendData`, which is needed to deal with the fact that a transport may not immediately be able to send messages.
This PR is not a refactor, though some commits are. Among the semantic changes are:
* Changing the send buffer pushback mechanism to trigger based on the memory usage of the buffer rather than the amount of bytes to be sent. This is both closer to the desired behavior, and makes the buffering independent from transport details (which is why it's included here).
* When optimistic send is not applicable, the V1 message checksum calculation now runs in the net thread rather than the message handling thread. I believe that's generally an improvement, as the message handling thread is far more computationally bottlenecked already.
* The checksum calculation now runs under the `CNode::cs_vSend` lock, which does mean no two checksum calculations for messages sent to the same node can run in parallel, even if running in separate threads. Despite that limitation, having the checksum for non-optimistic sends moved in the net thread is still an improvement, I believe.
* Statistics for per-message-type sent bytes are now updated when the bytes are actually handed to the OS rather than in `PushMessage`. This is because the actual serialized sizes aren't known until they've gone through the transport object.
A fuzz test of the entire `V1Transport` is included. More elaborate rationale for each of the changes can be found in the commit messages.
ACKs for top commit:
theStack:
re-ACK 8a3b6f3387
vasild:
ACK 8a3b6f3387
dergoegge:
Code review ACK 8a3b6f3387
Tree-SHA512: 26e9a6df47f1dd3e3f3edb4874edf365728e5a8bbc9d0d4d71fb6000cb2dfde5574902c47ffcf825af6743922f2ff9d31a5a38942a196f4ca6669122e15e42e4
Peeking at the underlying socket file descriptor of `Sock` and checking
if it is `INVALID_SOCKET` is bad encapsulation and stands in the way of
testing/mocking/fuzzing.
Instead use an empty unique_ptr to denote that there is no valid socket.
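A minimal sketch of the resulting call-site pattern:

```cpp
#include <memory>

struct Sock;  // wrapper around an OS socket (see src/util/sock.h)

void UseSocket(std::unique_ptr<Sock>& sock)
{
    if (!sock) {
        // No valid socket: nothing to peek at, no INVALID_SOCKET comparison needed.
        return;
    }
    // ... use *sock ...
}
```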
This furthers transport abstraction by removing the assumption that a message
can always immediately be converted to wire bytes. This assumption does not hold
for the v2 transport proposed by BIP324, as no messages can be sent before the
handshake completes.
This is done by only keeping (complete) CSerializedNetMsg objects in vSendMsg,
rather than the resulting bytes (for header and payload) that need to be sent.
In SocketSendData, these objects are handed to the transport as permitted by it,
and the bytes the transport tells us to send are sent out. This also removes the
nSendOffset member variable in CNode, as keeping track of how much has been sent
is now a responsibility of the transport.
This is not a pure refactor, and has the following effects even for the current
v1 transport:
* Checksum calculation now happens in SocketSendData rather than PushMessage.
For non-optimistic-send messages, that means this computation now happens in
the network thread rather than the message handler thread (generally a good
thing, as the message handler thread is more of a computational bottleneck).
* Checksum calculation now happens while holding the cs_vSend lock. This is
technically unnecessary for the v1 transport, as messages are encoded
independently of one another, but is untenable for the v2 transport anyway.
* Statistics updates about per-message sent bytes now happen when those bytes
are actually handed to the OS, rather than at PushMessage time.
This more accurately captures the intent of limiting send buffer size, as
many small messages can have a larger overhead that is not counted with the
current approach.
It also means removing the dependency on the header size (which will become
a function of the transport choice) from the send buffer calculations.
This adds a simulation test, with two V1Transport objects, which send messages
to each other, with sending and receiving fragmented into multiple pieces that
may be interleaved. It primarily verifies that the sending and receiving sides
are compatible with each other, plus a few sanity checks.
The rest of net.cpp already uses Params() to determine chainparams in many
places (and even V1Transport itself does so in some places).
Since the only chainparams dependency is through the message start characters,
just store those directly in the transport.
This makes the sending side of P2P transports mirror the receiver side: the caller provides
a message (consisting of type and payload) to be sent, and then asks what bytes must be
sent. Once the message has been fully sent, a new message can be provided.
This removes the assumption that P2P serialization of messages follows a strict structure
of header (a function of type and payload), followed by (unmodified) payload, and instead
lets transports decide the structure themselves.
It also removes the assumption that a message must always be sent at once, or that no
bytes are sent on the wire when there is no message. This opens the door for
supporting traffic shaping mechanisms in the future.
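An illustrative-only sketch of the mirrored send-side interface described above (the method names follow this description; the in-tree Transport API may differ in signatures and return types):

```cpp
#include <net.h>   // CSerializedNetMsg
#include <span.h>  // Span

#include <cstddef>
#include <cstdint>

class SendSideTransportSketch
{
public:
    virtual ~SendSideTransportSketch() = default;

    // Queue a message for sending. Returns false if the transport cannot take
    // a new message yet (e.g. the previous one has not been fully sent).
    virtual bool SetMessageToSend(CSerializedNetMsg& msg) noexcept = 0;

    // The next bytes that should go on the wire; may be empty, and may cover
    // only part of the current message.
    virtual Span<const uint8_t> GetBytesToSend() const noexcept = 0;

    // Report that the first bytes_sent bytes returned by GetBytesToSend()
    // were handed to the OS, so the transport can advance its state.
    virtual void MarkBytesSent(size_t bytes_sent) noexcept = 0;
};
```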
Now that the Transport class deals with both the sending and receiving side
of things, make the receive side have function names that clearly indicate
they're about receiving.
* Transport::Read() -> Transport::ReceivedBytes()
* Transport::Complete() -> Transport::ReceivedMessageComplete()
* Transport::GetMessage() -> Transport::GetReceivedMessage()
* Transport::SetVersion() -> Transport::SetReceiveVersion()
Further, also update the comments on these functions to (among others) remove
the "deserialization" terminology. That term is better reserved for just the
serialization/deserialization between objects and bytes (see serialize.h), and
not the conversion from/to wire bytes as performed by the Transport.
Rather than relying on the caller to prevent concurrent calls to the
various receive-side functions of Transport, introduce a private m_cs_recv
inside the implementation to protect the receive state.
Of course, this does not remove the need for callers to synchronize calls
entirely, as it is a stateful object, and e.g. the order in which Receive(),
Complete(), and GetMessage() are called matters. It seems impossible to use
a Transport object meaningfully from multiple threads without some
form of external synchronization, but it still feels safer to make the
transport object itself responsible for protecting its internal state.
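A simplified sketch of the locking pattern (member and method names are illustrative; see the in-tree V1Transport for the real layout):

```cpp
#include <streams.h>  // CDataStream
#include <sync.h>     // Mutex, LOCK, GUARDED_BY, EXCLUSIVE_LOCKS_REQUIRED
#include <version.h>  // INIT_PROTO_VERSION

class RecvLockedTransportSketch
{
    mutable Mutex m_cs_recv;  // protects all receive state below
    CDataStream m_recv_payload GUARDED_BY(m_cs_recv){SER_NETWORK, INIT_PROTO_VERSION};
    bool m_header_complete GUARDED_BY(m_cs_recv){false};
    size_t m_expected_payload_size GUARDED_BY(m_cs_recv){0};

public:
    bool ReceivedMessageComplete() const EXCLUSIVE_LOCKS_REQUIRED(!m_cs_recv)
    {
        // Consistency of the receive state is the object's own concern now.
        LOCK(m_cs_recv);
        return m_header_complete && m_recv_payload.size() >= m_expected_payload_size;
    }
};
```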
This allows state that is shared between both directions to be encapsulated
into a single object. Specifically the v2 transport protocol introduced by
BIP324 has sending state (the encryption keys) that depends on received
messages (the DH key exchange). Having a single object for both means it can
hide logic from callers related to that key exchange and other interactions.
27b168b81f Update help text for spend and rawtransaction rpcs (Michael Tidwell)
Pull request description:
The "data" field without outputs was marked as "required" in the help docs when using bitcoin-cli. This field when left off worked as an intended optional OP_RETURN. closes #27828.
Motivation: It is hard to understand that "data" is actually optional for commands like `createpsbt` and `walletcreatefundedpsbt`.
ACKs for top commit:
achow101:
ACK 27b168b81f
Sjors:
tACK 27b168b81f
Tree-SHA512: 235e7ed4af69880880c04015b3f7de72c8f31ae035485c4c64c483e282948f3ea3f1eef16f15e260a1aaf21114150713516ba6a99967ccad9ecd91ff67cb0450
1b09cc5959 Make post-p2sh consensus rules mandatory for tx relay (Anthony Towns)
69c31bc748 doc, policy: Clarify comment on STANDARD_SCRIPT_VERIFY_FLAGS (Anthony Towns)
Pull request description:
The `MANDATORY_SCRIPT_VERIFY_FLAGS` constant was introduced in #3843 to distinguish between block consensus rules and relay standardness rules. However it was not actually used in the consensus code path: instead it only differentiates between the failure being reported as `TX_CONSENSUS` and `mandatory-script-verify-flag-failed` vs `TX_NOT_STANDARD` and `non-mandatory-script-verify-flag`.
This updates the list of mandatory flags to include the post-p2sh soft forks that are enforced as consensus rules via `GetBlockScriptFlags()`. The effect of this change is that validation.cpp will report `TX_CONSENSUS` failures for txs that fail dersig/csv/cltv/nulldummy/witness/taproot checks, instead of `TX_NOT_STANDARD`, which in turn adds `Misbehaving(100)` via `MaybePunishNodeForTx` in `net_processing`.
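For reference, a sketch of what the mandatory set looks like after this change, assuming it simply mirrors the consensus flags applied via `GetBlockScriptFlags()` (the authoritative definition lives in src/policy/policy.h):

```cpp
#include <script/interpreter.h>

static constexpr unsigned int MANDATORY_SCRIPT_VERIFY_FLAGS_SKETCH{
    SCRIPT_VERIFY_P2SH |
    SCRIPT_VERIFY_DERSIG |               // BIP66
    SCRIPT_VERIFY_NULLDUMMY |            // BIP147
    SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY |  // BIP65
    SCRIPT_VERIFY_CHECKSEQUENCEVERIFY |  // BIP112
    SCRIPT_VERIFY_WITNESS |              // BIP141/143
    SCRIPT_VERIFY_TAPROOT};              // BIP341/342
```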
ACKs for top commit:
Sjors:
Code review ACK 1b09cc5959
darosior:
ACK 1b09cc5959
achow101:
ACK 1b09cc5959
theStack:
Concept and code-review ACK 1b09cc5959
Tree-SHA512: d3e5868e8cece478f2e934956ba0c231d8bb9c2daefd0df1f817774e292049902cfc1d0cd76dbd2e7722627a93eab2d7046ff678199aac70a2b01642e69349f1
Valid results should have a target below the sum of
the selected inputs' amounts. Also, increase the
minimum value for the target to make it more realistic.
Instead of using `cost_of_change` for `min_viable_change`
and `change_cost`, and 0 for `change_fee`, use values from
`coin_params`. The previous values don't have any effect
that is relevant in that context.