Exposing address manager table entries in a hidden RPC allows introspecting
addrman tables in tests and during development.
For the JSON response object, the following FORMAT1 is chosen:
{
  "table": {
    "<bucket>/<position>": { "address": "..", "port": .., ... },
    "<bucket>/<position>": { "address": "..", "port": .., ... },
    "<bucket>/<position>": { "address": "..", "port": .., ... },
    ...
  }
}
An alternative would be FORMAT2:
{
  "table": {
    "bucket": {
      "position": { "address": "..", "port": .., ... },
      "position": { "address": "..", "port": .., ... },
      ..
    },
    "bucket": {
      "position": { "address": "..", "port": .., ... },
      ..
    }
  }
}
FORMAT1 and FORMAT2 have different encodings for the location of the
address in the address manager. While FORMAT2 might be easier to process
for downstream tools, it also mimics internal addrman mappings, which
might change at some point. Users not interested in the address location
can ignore the location key; they don't have to adapt to a new RPC
response format when the internal addrman layout changes. Additionally,
FORMAT1 is slightly easier to iterate in downstream tools. The
RPC response-building implementation complexity is also lower with FORMAT1,
as we can more easily build a "<bucket>/<position>" key than multiple
"bucket" objects with multiple "position" objects (FORMAT2).
782701ce7d test: Test loading wallets with conflicts without a chain (Andrew Chow)
4660fc82a1 wallet: Check last block and conflict height are valid in MarkConflicted (Andrew Chow)
Pull request description:
`MarkConflicted` assumes that `m_last_block_processed_height` is always valid. However, it may not be valid when a chain is not attached, as happens in the wallet tool and during migration. In such situations, when the conflicting height is also negative (which occurs on loading when no chain is available), the calculation of the number of conflict confirmations results in a non-negative value that passes the existing check for valid values. This will subsequently hit an assertion in `GetTxDepthInMainChain`.
Furthermore, `MarkConflicted` is only called when loading a transaction whose parent has a stored state of `TxStateConflicted` and was loaded before the child transaction. This depends on the loading order, which for both sqlite and bdb depends on the txids.
We can avoid this by explicitly checking that both `m_last_block_processed_height` and `conflicting_height` are non-negative. Both `tool_wallet.py` and `wallet_migration.py` are updated to create wallets with a state that triggers the assertion.
Fixes #28510
ACKs for top commit:
ryanofsky:
Code review ACK 782701ce7d. Nice catch, and clever test (grinding the txid)
furszy:
ACK 782701ce
Tree-SHA512: 1344e0279ec5413a43a2819d101fb571fbf4821de2d13958a0fdffc99f57082ef3243ec454c8343f97dc02ed1fce8c8b0fd89388420ab2e55618af42ad5630a9
fac29a0ab1 Remove SER_GETHASH, hard-code client version in CKeyPool serialize (MarcoFalke)
fa72f09d6f Remove CHashWriter type (MarcoFalke)
fa4a9c0f43 Remove unused GetType() from OverrideStream, CVectorWriter, SpanReader (MarcoFalke)
Pull request description:
Removes a bunch of redundant, dead or duplicate code.
Uses and finishes the idea from https://github.com/bitcoin/bitcoin/pull/28428 by theuni
ACKs for top commit:
ajtowns:
ACK fac29a0ab1
kevkevinpal:
added one nit but otherwise ACK [fac29a0](fac29a0ab1)
Tree-SHA512: cc805e2f38e73869a6691fdb5da09fa48524506b87fc93f05d32c336ad3033425a2d7608e317decd3141fde3f084403b8de280396c0c39132336fe0f7510af9e
6ef405ddb1 key: don't allocate secure mem for null (invalid) key (Pieter Wuille)
d9841a7ac6 Add make_secure_unique helper (Anthony Towns)
Pull request description:
Bitcoin Core has `secure_allocator`, which allocates inside special "secure" (non-swappable) memory pages, which may be limited in availability. Currently, every `CKey` object uses 32 such secure bytes, even when the `CKey` object contains the (invalid) value zero.
Change this to not use memory when the `CKey` is invalid. This is particularly relevant for `BIP324Cipher` which briefly holds a `CKey`, but after receiving the remote's public key and initializing the encryption ciphers, the key is wiped. In case secure memory usage is in high demand, it'd be silly to waste it on P2P encryption keys instead of wallet keys.
ACKs for top commit:
ajtowns:
ACK 6ef405ddb1
john-moffett:
ACK 6ef405ddb1
Tree-SHA512: 987f4376ed825daf034ea4d7c4b4952fe664b25b48f1c09fbcfa6257a40b06c4da7c2caaafa35c346c86bdf298ae21f16c68ea4b1039836990d1a205de2034fd
Many edge cases exist when parents in a child-with-parents package can
spend each other. However, this pattern should also be uncommon in
normal use cases.
This ensures that we avoid any unexpected conditions inherent in
transferring non-empty mempools across chainstates.
Note that this should never happen in practice given that snapshot
activation will not occur outside of IBD, based upon the height checks
in `loadtxoutset`.
When using an assumedvalid (snapshot) chainstate along with a background
chainstate, we are syncing two very different regions of the chain
simultaneously. If we use the same blockfile space for both of these
syncs, blocks at wildly different heights will be stored alongside
one another, making pruning ineffective.
This change implements a separate blockfile cursor for the assumedvalid
chainstate when one is in use.
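A rough sketch of the idea; the names below are assumptions for illustration, not the actual BlockManager members:
```cpp
// Keep a distinct "current block file" per chainstate role so that low-height
// background blocks and high-height snapshot blocks never share a file, keeping
// each file a prunable, roughly contiguous height range.
#include <map>

enum class ChainstateRole { BACKGROUND, ASSUMEDVALID };

struct BlockfileCursor {
    int file_num{0};     // blk?????.dat file currently being appended to
    int undo_height{0};  // highest height whose undo data has been written
};

std::map<ChainstateRole, BlockfileCursor> g_cursors{
    {ChainstateRole::BACKGROUND, {}},
    {ChainstateRole::ASSUMEDVALID, {}},
};

// A block written for a given chainstate only ever advances its own cursor.
BlockfileCursor& CursorFor(ChainstateRole role) { return g_cursors.at(role); }

int main()
{
    CursorFor(ChainstateRole::ASSUMEDVALID).file_num = 1; // snapshot blocks go to their own files
}
```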
Use the expected AssumeutxoData in order to bootstrap nChainTx values
for assumedvalid blockindex entries in the snapshot chainstate. This
is necessary because nChainTx is normally built up from nTx values,
which are populated using block data that the snapshot chainstate
does not yet have.
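Illustrative arithmetic only (the values and names are made up): the snapshot base takes its cumulative transaction count straight from the chainparams AssumeutxoData, and blocks after it follow the normal rule nChainTx = parent nChainTx + nTx.
```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const uint64_t base_nchaintx{110'000};                   // assumed AssumeutxoData value
    const std::vector<unsigned> ntx_after_base{25, 31, 18};  // blocks synced after the snapshot

    uint64_t nchaintx{base_nchaintx};
    for (const unsigned ntx : ntx_after_base) {
        nchaintx += ntx; // normal accumulation resumes once block data is available
        std::printf("nChainTx=%llu\n", static_cast<unsigned long long>(nchaintx));
    }
}
```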
In future commits, loading the block index while making use of a
snapshot is contingent on the snapshot being recognized by chainparams.
Ensure all existing unittests that use snapshots use a recognized
snapshot (at height 110).
Co-authored-by: Ryan Ofsky <ryan@ofsky.org>
Introduces ChainstateManager::GetPruneRange().
The prune budget is split evenly among the chainstates; however, the
budget may be exceeded if the resulting shares fall beneath
`MIN_DISK_SPACE_FOR_BLOCK_FILES`.
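A minimal sketch of that split, with illustrative constants (the floor value here is assumed, not quoted from the code):
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t MIN_DISK_SPACE_FOR_BLOCK_FILES{550ull * 1024 * 1024}; // assumed floor
    const uint64_t prune_target{700ull * 1024 * 1024};                   // e.g. -prune=700
    const uint64_t num_chainstates{2};                                   // background + snapshot

    // Each chainstate gets an equal share, floored at the minimum, so the
    // combined usage can exceed the configured target.
    const uint64_t share{std::max(prune_target / num_chainstates, MIN_DISK_SPACE_FOR_BLOCK_FILES)};
    std::printf("per-chainstate prune budget: %llu MiB\n",
                static_cast<unsigned long long>(share / (1024 * 1024)));
}
```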
When using an assumedvalid chainstate, only process validationinterface
callbacks from the background chainstate within indexes. This ensures
that all indexes are built in-order.
Later, we can possibly designate indexes which can be built out of order
and continue their operation during snapshot use.
Once the background sync has completed, restart the indexes so that
they continue to index the now-validated snapshot chainstate.
This allows us to reference assumeutxo configuration by blockhash as
well as height; this is helpful in future changes when we want to
reference assumeutxo configurations before the block index is loaded.
Add new PeerManagerImpl::TryDownloadingHistoricalBlocks method and use it to
request background chain blocks in addition to blocks normally requested by
FindNextBlocksToDownload.
Co-authored-by: Ryan Ofsky <ryan@ofsky.org>
Co-authored-by: James O'Beirne <james.obeirne@gmail.com>
d8041d4e04 blockstorage: Return on fatal undo file flush error (TheCharlatan)
f0207e0030 blockstorage: Return on fatal block file flush error (TheCharlatan)
5671c15f45 blockstorage: Mark FindBlockPos as nodiscard (TheCharlatan)
Pull request description:
The goal of this PR is to establish that fatal blockstorage flush errors should be treated as errors at their call site.
Prior to this patch `FlushBlockFile` may have failed without returning in `Chainstate::FlushStateToDisk`, leading to a potential write from `WriteBlockIndexDB` that may refer to a block that is not fully flushed to disk yet. By returning if either `FlushUndoFile` or `FlushBlockFile` fails, we ensure that no further write operations take place that may lead to an inconsistent database when crashing. Add `[[nodiscard]]` annotations to them such that they are not ignored in the future.
Functions that call either `FlushUndoFile` or `FlushBlockFile` need to handle these extra abort cases properly. Since `Chainstate::FlushStateToDisk` already produces an abort error in case of `WriteBlockIndexDB` failing, no extra logic for functions calling `Chainstate::FlushStateToDisk` is required.
Besides `Chainstate::FlushStateToDisk`, `FlushBlockFile` is also called by `FindBlockPos`, while `FlushUndoFile` is only called by `FlushBlockFile` and `WriteUndoDataForBlock`. For both these cases, the flush error is not further bubbled up. Instead, the error is logged and a comment is provided why bubbling up an error would be less desirable in these cases.
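A simplified sketch of the pattern (not the actual code): the flush helpers return a `[[nodiscard]]` bool and the caller returns early on failure, so the block index is never written against a partial flush.
```cpp
[[nodiscard]] bool FlushBlockFile() { /* fflush/fsync of the current blk file */ return true; }
[[nodiscard]] bool WriteBlockIndexDB() { return true; }

bool FlushStateToDisk()
{
    if (!FlushBlockFile()) {
        // The real code aborts the node here; the key point is returning
        // before WriteBlockIndexDB can reference unflushed block data.
        return false;
    }
    return WriteBlockIndexDB();
}

int main() { return FlushStateToDisk() ? 0 : 1; }
```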
---
This pull request is part of a larger effort towards improving the shutdown / abort / fatal error handling in validation code. It is a first step towards implementing proper fatal error return type enforcement similar as proposed by theuni in this pull request [comment](https://github.com/bitcoin/bitcoin/pull/27711#issuecomment-1563561502). For ease of review of these critical changes, a first step would be checking that `AbortNode` leads to early and error-conveying returns at its call site. Further work for enforcing returns when `AbortNode` is called is done in https://github.com/bitcoin/bitcoin/pull/27862.
ACKs for top commit:
stickies-v:
re-ACK d8041d4
ryanofsky:
Code review ACK d8041d4e04
Tree-SHA512: 47ade9b873b15e567c8f60ca538d5a0daf32163e1031be3212a3a45eb492b866664b225f2787c9e40f3e0c089140157d8fd1039abc00c7bdfeec1b52ecd7e219
e3720bca39 net: Simplify v2 recv logic by decoupling AAD from state machine (Tim Ruffing)
b0f5175c04 net: Drop v2 garbage authentication packet (Tim Ruffing)
Pull request description:
Note that this is a breaking change, see also https://github.com/bitcoin/bips/pull/1498
The benefit is a simpler implementation:
- The protocol state machine does not need separate states for garbage authentication and version phases.
- The special case of "ignoring the ignore bit" is removed.
- The freedom to choose the contents of the garbage authentication packet is removed. This simplifies testing.
ACKs for top commit:
naumenkogs:
ACK e3720bca39
sipa:
ACK e3720bca39. Re-ran the v2 transport fuzzer overnight.
ajtowns:
ACK e3720bca39 - simpler and more flexible, nice
achow101:
ACK e3720bca39
Sjors:
utACK e3720bca39
theStack:
Code-review ACK e3720bca39
Tree-SHA512: 16600ed868c8a346828de075c4072e37cf86440751d08ab099fe8b58ff71d8b371a90397d6a4247096796db68794275e7e0403f218859567d04838e0411dadd6
262ab8ef78 Add package evaluation fuzzer (Greg Sanders)
Pull request description:
This fuzzer target caught the issue in https://github.com/bitcoin/bitcoin/pull/28251 within 5 minutes on the master branch, and an additional issue, which I've applied a preliminary patch to cover.
Fuzzer target does the following:
1) Picks mempool configs, including max package size, count, mempool size, etc
2) Generates 1 to 26 transactions with arbitrary coins/fees, the first N-1 spending only confirmed outpoints
3) The Nth transaction, if N>1, sweeps all unconfirmed outpoints in the mempool
4) If N==1, it may submit it through the single-tx submission path, to allow for more interesting topologies
5) Otherwise submits through package submission interface
6) Repeat 1-5 a few hundred times per mempool instance
In other words, it ends up building chains of txns in the mempool using parents-and-children packages, which is currently the topology supported on master.
The test itself is a direct rip of tx_pool.cpp, with a number of assertions removed because they were failing for unknown reasons, likely due to differences in the notifications emitted by single-tx submission versus package submission; the test uses these notifications to track the addition/removal of transactions. I'll continue working on re-adding these assertions for further invariant testing.
ACKs for top commit:
murchandamus:
ACK 262ab8ef78
glozow:
reACK 262ab8ef78
dergoegge:
tACK 262ab8ef78
Tree-SHA512: 190784777d0f2361b051b3271db8f79b7927e3cab88596d2c30e556da721510bd17f6cc96f6bb03403bbf0589ad3f799fa54e63c1b2bd92a2084485b5e3e96a5
b3db8c9d5c rpc: bumpfee, improve doc for 'reduce_output' arg (furszy)
Pull request description:
Fixes #28180. Resulted from discussions with S3RK, achow101, and Murch.
The current argument name and description are dangerous, as they don't
describe the case where the user selects the recipient output as the
change address. That output could end up being increased by the inputs
minus outputs remainder, which, when `bumpfee` adds new inputs
to the transaction, leads the process to send more coins to the
recipient. This is not what the user would expect from a
'reduce_output' param name.
ACKs for top commit:
S3RK:
ACK b3db8c9d5c
achow101:
ACK b3db8c9d5c
murchandamus:
ACK b3db8c9d5c
Tree-SHA512: 91f607e2f5849041d7c099afdddae11af8bed5b1ac90c9d22921267f272e21b44e107d6968e037f05f958a61fe29e94e5fb44b224fb3606f197f83ec4ba3b1e7
Instead of storing the key material as an std::vector (with secure allocator),
use a secure_unique_ptr to a 32-byte array, and use nullptr for invalid keys.
This means a smaller CKey type, and no secure/dynamic memory usage for invalid
keys.
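A minimal sketch of the idea, using a plain-heap stand-in for the secure allocator (illustrative only, not the PR's implementation): a unique_ptr whose deleter wipes and frees its target, where a null pointer means "invalid key, no secure memory in use". The real allocator draws from locked, non-swappable pages and uses a wipe that cannot be optimized away.
```cpp
#include <array>
#include <cstring>
#include <memory>
#include <new>
#include <utility>

template <typename T>
struct secure_deleter {
    void operator()(T* p) const noexcept
    {
        p->~T();
        std::memset(static_cast<void*>(p), 0, sizeof(T)); // wipe before releasing
        ::operator delete(p);
    }
};

template <typename T>
using secure_unique_ptr = std::unique_ptr<T, secure_deleter<T>>;

template <typename T, typename... Args>
secure_unique_ptr<T> make_secure_unique(Args&&... args)
{
    void* mem = ::operator new(sizeof(T)); // real code: secure (locked) pages
    return secure_unique_ptr<T>(new (mem) T(std::forward<Args>(args)...));
}

int main()
{
    using KeyData = std::array<unsigned char, 32>;
    secure_unique_ptr<KeyData> keydata;       // invalid key: nullptr, no allocation
    keydata = make_secure_unique<KeyData>();  // memory used only once the key is set
    keydata.reset();                          // wiped and freed when invalidated
}
```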
See also https://github.com/bitcoin/bips/pull/1498
The benefit is a simpler implementation:
- The protocol state machine does not need separate states for garbage
authentication and version phases.
- The special case of "ignoring the ignore bit" is removed.
- The freedom to choose the contents of the garbage authentication
packet is removed. This simplifies testing.
MarkConflicted calculates conflict confirmations incorrectly when both
the last block processed height and the conflicting height are negative
(i.e. uninitialized). If either is negative, we should not be marking
conflicts and should exit early.
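A minimal sketch of the added guard (simplified; not the wallet code itself):
```cpp
// Without an attached chain either height can be -1, so bail out before any
// conflict-confirmation arithmetic is done on uninitialized values.
void MarkConflicted(int last_block_processed_height, int conflicting_height)
{
    if (last_block_processed_height < 0 || conflicting_height < 0) {
        return; // no chain context: nothing can be marked conflicted
    }
    // ... compute conflict confirmations and update the affected transactions
}

int main()
{
    MarkConflicted(/*last_block_processed_height=*/-1, /*conflicting_height=*/-1); // no-op
}
```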
The current argument name and description are dangerous, as they don't
describe the case where the user selects the recipient output as the
change address. That output could end up being increased by the inputs
minus outputs remainder, which, when bumpfee adds new inputs
to the transaction, leads the process to send more coins to the
recipient. This is not what the user would expect from a
'reduce_output' param name.
Co-authored-by: Murch <murch@murch.one>
fa56c421be Return CAutoFile from BlockManager::Open*File() (MarcoFalke)
9999b89cd3 Make BufferedFile to be a CAutoFile wrapper (MarcoFalke)
fa389d902f refactor: Drop unused fclose() from BufferedFile (MarcoFalke)
Pull request description:
This is required for https://github.com/bitcoin/bitcoin/pull/28052, but makes sense on its own, because offloading logic to `CAutoFile` instead of re-implementing it allows removing code and complexity.
ACKs for top commit:
TheCharlatan:
Re-ACK fa56c421be
willcl-ark:
tACK fa56c421be
Tree-SHA512: fe4638f3a6bd3f9d968cfb9ae3259c9d6cd278fe2912cbc90289851311c8c781099db4c160e775960975c4739098d9af801a8d2d12603f371f8edfe134d8f85a
4313c77400 make DisconnectedBlockTransactions responsible for its own memory management (glozow)
cf5f1faa03 MOVEONLY: DisconnectedBlockTransactions to its own file (glozow)
2765d6f343 rewrite DisconnectedBlockTransactions as a list + map (glozow)
79ce9f0aa4 add std::list to memusage (glozow)
59a35a7398 [bench] DisconnectedBlockTransactions (glozow)
925bb723ca [refactor] batch-add transactions to DisconnectedBlockTransactions (glozow)
Pull request description:
Motivation
- I think it's preferable to use stdlib data structures instead of depending on boost if we can achieve the same thing.
- Also see #28335 for further context/motivation. This PR simplifies that one.
Things done in this PR:
- Add a bench for `DisconnectedBlockTransactions` where we reorg and the new chain has {100%, 90%, 10%} of the same transactions. AFAIU in practice, it's usually close to 100%.
- Rewrite `DisconnectedBlockTransactions` as a `std::list` + `unordered_map` instead of a boost multi_index container (see the sketch after this list).
- On my machine, the bench suggests the performance is very similar.
- Move `DisconnectedBlockTransactions` from txmempool.h to its own kernel/disconnected_transactions.h. This struct isn't used by txmempool and doesn't have much to do with txmempool. My guess is that it's been living there for convenience since the boost includes are there.
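Rough shape of the rewrite, for illustration only (not the actual class): a `std::list` keeps insertion order and an `unordered_map` gives O(1) lookup/removal by txid.
```cpp
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

struct Tx {
    std::string txid;
    // ... transaction data
};

class DisconnectedTxs
{
    std::list<Tx> m_queue;  // oldest-first insertion order
    std::unordered_map<std::string, std::list<Tx>::iterator> m_by_txid;

public:
    void Add(Tx tx)
    {
        m_queue.push_back(std::move(tx));
        const auto it = std::prev(m_queue.end());
        m_by_txid.emplace(it->txid, it);
    }
    void Remove(const std::string& txid)
    {
        const auto it = m_by_txid.find(txid);
        if (it == m_by_txid.end()) return;
        m_queue.erase(it->second); // other list iterators stay valid
        m_by_txid.erase(it);
    }
};

int main()
{
    DisconnectedTxs txs;
    txs.Add({"aa"});
    txs.Add({"bb"});
    txs.Remove("aa");
}
```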
ACKs for top commit:
ismaelsadeeq:
Tested ACK 4313c77400
stickies-v:
ACK 4313c77400
TheCharlatan:
ACK 4313c77400
Tree-SHA512: 273c80866bf3acd39b2a039dc082b7719d2d82e0940e1eb6c402f1c0992e997256722b85c7e310c9811238a770cfbdeb122ea4babbc23835d17128f214a1ef9e
a99e9e655a doc: add release note (ismaelsadeeq)
2b4edf889a test: check `descriptorprocesspsbt` return hex encoded tx (ismaelsadeeq)
c405207a18 rpc: `descriptorprocesspsbt` return hex encoded tx (ismaelsadeeq)
Pull request description:
Coming from [#28414 comment](https://github.com/bitcoin/bitcoin/pull/28414#pullrequestreview-1618684391). The same thing also applies to `descriptorprocesspsbt`.
Before this PR `descriptorprocesspsbt` returns a boolean `complete`, which indicates that the psbt is final; users then have to call `finalizepsbt` to get the hex-encoded network transaction.
In this PR, if the psbt is complete, the return object also has the hex-encoded network transaction ready for broadcast with `sendrawtransaction`.
This saves users from calling `finalizepsbt` if the psbt is already complete.
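For illustration only (field values abbreviated and made up; the RPC help is authoritative), a fully signed psbt now returns roughly:
```json
{
  "psbt": "cHNidP8B...",
  "complete": true,
  "hex": "02000000000101..."
}
```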
ACKs for top commit:
achow101:
ACK a99e9e655a
pinheadmz:
ACK a99e9e655a
ishaanam:
ACK a99e9e655a
Tree-SHA512: c3f1b1391d4df05216c463127cd593f8703840430a99febb54890bc66fadabf9d9530860605f347ec54c1694019173247a0e7a9eb879d3cbb420f9e8d9839b75