mirror of https://github.com/bitcoin/bitcoin.git synced 2025-02-01 09:35:52 -05:00

Compare commits


22 commits

Author SHA1 Message Date
Martin Zumsande
dad51bd862
Merge 5e1ff82251 into 85f96b01b7 2025-01-31 21:49:42 +01:00
Ava Chow
85f96b01b7
Merge bitcoin/bitcoin#30909: wallet, assumeutxo: Don't Assume m_chain_tx_count, Improve wallet RPC errors
9d2d9f7ce2 rpc: Include assumeutxo as a failure reason of rescanblockchain (Fabian Jahr)
595edee169 test, assumeutxo: import descriptors during background sync (Alfonso Roman Zubeldia)
d73ae603d4 rpc: Improve importdescriptor RPC error messages (Fabian Jahr)
27f99b6d63 validation: Don't assume m_chain_tx_count in GuessVerificationProgress (Fabian Jahr)
42d5d53363 interfaces: Add helper function for wallet on pruning (Fabian Jahr)

Pull request description:

  A test that is added as part of #30455 uncovered this issue: The `GuessVerificationProgress` function is used during descriptor import and relies on `m_chain_tx_count`. In #29370 an [`Assume` was added](0fd915ee6b) expecting `m_chain_tx_count` to be set. However, as the test uncovered, `GuessVerificationProgress` is called with background sync blocks that have `m_chain_tx_count = 0` when they have not been downloaded and processed yet.

  The simple fix is to remove the `Assume`. Users should not be thrown off by the `Internal bug detected` error. The behavior of `importdescriptor` is kept consistent with the behavior for blocks missing due to pruning.

  The test by alfonsoromanz is cherry-picked here to show that the [CI errors](https://cirrus-ci.com/task/5110045812195328?logs=ci#L2535) should be fixed by this change.

  This PR also improves the error messages returned by the `importdescriptors` and `rescanblockchain` RPCs. The error message now changes depending on the node's situation, i.e. whether pruning is happening or an assumeutxo background sync is active.
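
  To illustrate how an RPC consumer might react to the new error strings, here is a minimal hedged sketch (not part of this PR; the connection URL, credentials and wallet name are placeholders):

  ```python
  # Sketch: distinguish the new rescanblockchain failure reasons client-side.
  # AuthServiceProxy ships with Bitcoin Core's functional test framework.
  from test_framework.authproxy import AuthServiceProxy, JSONRPCException

  node = AuthServiceProxy("http://rpcuser:rpcpass@127.0.0.1:18443/wallet/mywallet")
  try:
      node.rescanblockchain(0)
  except JSONRPCException as e:
      msg = e.error["message"]
      if "assumeutxo background sync" in msg:
          # Blocks not on disk yet; poll background sync progress and retry later.
          print("Background sync still running:", node.getchainstates())
      elif "pruned data" in msg:
          print("Pruned up to height:", node.getblockchaininfo().get("pruneheight"))
      else:
          raise
  ```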

ACKs for top commit:
  achow101:
    ACK 9d2d9f7ce2
  mzumsande:
    Code Review ACK 9d2d9f7ce2
  furszy:
    Code review ACK 9d2d9f7ce2

Tree-SHA512: b841a9b371e5eb8eb3bfebca35645ff2fdded7a3e5e06308d46a33a51ca42cc4c258028c9958fbbb6cda9bb990e07ab8d8504dd9ec6705ef78afe0435912b365
2025-01-31 15:45:14 -05:00
Ava Chow
601a6a6917
Merge bitcoin/bitcoin#30965: kernel: Move block tree db open to block manager
0cdddeb224 kernel: Move block tree db open to BlockManager constructor (TheCharlatan)
7fbb1bc44b kernel: Move block tree db open to block manager (TheCharlatan)
57ba59c0cd refactor: Remove redundant reindex check (TheCharlatan)

Pull request description:

  Before this change the block tree db was needlessly re-opened during startup when loading a completed snapshot. Improve this by letting the block manager open it on construction. This also simplifies the test code a bit.

  The change was initially motivated to make it easier for users of the kernel library to instantiate a BlockManager that may be used to read data from disk without loading the block index into a cache.

ACKs for top commit:
  maflcko:
    re-ACK 0cdddeb224 🏪
  achow101:
    ACK 0cdddeb224
  mzumsande:
    re-ACK 0cdddeb224

Tree-SHA512: fe3d557a725367e549e6a0659f64259cfef6aaa565ec867d9a177be0143ff18a2c4a20dd57e35e15f97cf870df476d88c05b03b6a7d9e8d51c568d9eda8947ef
2025-01-31 15:28:06 -05:00
Ava Chow
eaf4b928e7
Merge bitcoin/bitcoin#31746: test: Added coverage to the waitfornewblock rpc
93747d934b test: Added coverage to the waitfornewblock rpc (kevkevinpal)

Pull request description:

  Added a test for the Negative timeout error if the rpc is given a negative value for its timeout arg

  This adds coverage to the `waitfornewblock` rpc

  You can confirm that there was no coverage for this error by running
  `grep -nri "Negative timeout" ./test/`

  which returns nothing. You can also verify this by manually checking where we call `waitfornewblock` in the functional tests.
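
  For reference, the gist of the added check (mirroring the new functional test assertion):

  ```python
  # The RPC should fail with error code -1 and message "Negative timeout"
  # when called with a negative timeout argument.
  from test_framework.util import assert_raises_rpc_error

  def check_negative_timeout(node):
      assert_raises_rpc_error(-1, "Negative timeout", node.waitfornewblock, -1)
  ```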

ACKs for top commit:
  Sjors:
    tACK 93747d934b
  achow101:
    ACK 93747d934b
  brunoerg:
    code review ACK 93747d934b
  tdb3:
    ACK 93747d934b

Tree-SHA512: 45cf34312412d3691a39f003bcd54791ea16542aa3f5a2674d7499c9cc4039550b2cbd32cc3d4c5fe100d65cb05690594b10a0c42dfab63bcca3dac121bb195b
2025-01-31 14:45:50 -05:00
Ava Chow
992f37f2e1
Merge bitcoin/bitcoin#31600: rpc: have getblocktemplate mintime account for timewarp
e1676b08f7 doc: release notes (Sjors Provoost)
0082f6acc1 rpc: have mintime account for timewarp rule (Sjors Provoost)
79d45b10f1 rpc: clarify BIP94 behavior for curtime (Sjors Provoost)
0713548137 refactor: add GetMinimumTime() helper (Sjors Provoost)

Pull request description:

  #30681 fixed the `curtime` field of `getblocktemplate` to take the timewarp rule into account. However I forgot to do the same for the `mintime` field, which was hardcoded to use `pindexPrev->GetMedianTimePast()+1`.

  This PR adds a helper `GetMinimumTime()` and uses it for the `mintime` field.

  #31376 changed the `curtime` field to always account for the timewarp rule. This PR maintains that behavior.

  Note that `mintime` now always applies BIP94, including on mainnet. This makes future softfork activation safer.

  It could be backported to v28.
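
  As a usage illustration (a hedged sketch, not part of this PR; the helper name is hypothetical), pool software can simply clamp its clock to the template's `mintime`, or use `curtime` directly:

  ```python
  import time

  def choose_block_ntime(tmpl: dict) -> int:
      """Pick nTime for a block built from a getblocktemplate result.

      mintime already accounts for the BIP94 timewarp rule after this PR,
      so clamping the local clock to it keeps the header timestamp valid.
      """
      return max(int(time.time()), tmpl["mintime"])

  # Equivalently, tmpl["curtime"] can be used as-is; relying on the local
  # clock alone is what the release note warns against.
  ```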

ACKs for top commit:
  fjahr:
    tACK e1676b08f7
  achow101:
    ACK e1676b08f7
  darosior:
    utACK e1676b08f7 on the code changes
  tdb3:
    brief code review re ACK e1676b08f7
  TheCharlatan:
    ACK e1676b08f7

Tree-SHA512: 0e322d8cc3b8ff770849bce211edcb5b6f55d04e5e0dee0657805049663d758f27423b047ee6363bd8f6c6fead13f974760f48b3321ea86f514f446e1b23231c
2025-01-31 14:39:36 -05:00
Sjors Provoost
e1676b08f7
doc: release notes 2025-01-29 09:39:32 +01:00
Sjors Provoost
0082f6acc1
rpc: have mintime account for timewarp rule
Previously in getblocktemplate only curtime took the timewarp rule into account.

Mining pool software could use either, though in general it should use curtime.
2025-01-29 09:39:32 +01:00
Sjors Provoost
79d45b10f1
rpc: clarify BIP94 behavior for curtime 2025-01-29 09:39:32 +01:00
Sjors Provoost
0713548137
refactor: add GetMinimumTime() helper
Before bip94 there was an assumption that the minimum permitted
timestamp is GetMedianTimePast() + 1.

This commit splits a helper function out of UpdateTime() to
obtain the minimum time in a way that takes the
timewarp attack rule into account.
2025-01-29 09:39:32 +01:00
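In Python terms, the rule the helper above implements is roughly the following (a hedged sketch; `MAX_TIMEWARP` is the consensus constant, assumed here to be 600 seconds):

```python
MAX_TIMEWARP = 600  # seconds; assumption: mirrors the consensus constant

def get_minimum_time(prev_mtp: int, prev_block_time: int, height: int,
                     difficulty_adjustment_interval: int = 2016) -> int:
    """Minimum permitted timestamp for the block at `height`."""
    min_time = prev_mtp + 1
    # BIP94 timewarp rule: at a retarget boundary the new block may not be
    # more than MAX_TIMEWARP seconds earlier than the previous block.
    if height % difficulty_adjustment_interval == 0:
        min_time = max(min_time, prev_block_time - MAX_TIMEWARP)
    return min_time
```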
kevkevinpal
93747d934b
test: Added coverage to the waitfornewblock rpc
Added a test for the Negative timeout error if the rpc is given a
negative value for its timeout arg
2025-01-28 10:14:01 -05:00
TheCharlatan
0cdddeb224
kernel: Move block tree db open to BlockManager constructor
Make the block db open RAII style by calling it in the BlockManager
constructor.

Before this change the block tree db was needlessly re-opened during
startup when loading a completed snapshot. Improve this by letting the
block manager open it on construction. This also simplifies the test
code a bit.

The change was initially motivated to make it easier for users of the
kernel library to instantiate a BlockManager that may be used to read
data from disk without loading the block index into a cache.
2025-01-20 21:27:50 +01:00
TheCharlatan
7fbb1bc44b
kernel: Move block tree db open to block manager
This commit is done in preparation for the next commit. Here, the block
tree options are moved to the blockmanager options and the block tree is
instantiated through a helper method of the BlockManager, which is
removed again in the next commit.

Co-authored-by: MarcoFalke <*~=`'#}+{/-|&$^_@721217.xyz>
2025-01-20 21:19:39 +01:00
TheCharlatan
57ba59c0cd
refactor: Remove redundant reindex check
The check for whether the block tree db has been wiped before calling
NeedsRedownload() is confusing. The boolean is set in case of a reindex.
It was originally introduced to guard NeedsRedownload in case of a
reindex in #21009. However NeedsRedownload already returns early if the
chain's tip is not loaded. Since that is the case during a reindex, the
pre-check is redundant.
2025-01-16 16:35:38 +01:00
Fabian Jahr
9d2d9f7ce2
rpc: Include assumeutxo as a failure reason of rescanblockchain 2025-01-05 17:28:34 +01:00
Alfonso Roman Zubeldia
595edee169
test, assumeutxo: import descriptors during background sync 2025-01-05 17:28:34 +01:00
Fabian Jahr
d73ae603d4
rpc: Improve importdescriptor RPC error messages
Particularly add more details in the case of pruning or assumeutxo.
2025-01-05 17:28:34 +01:00
Fabian Jahr
27f99b6d63
validation: Don't assume m_chain_tx_count in GuessVerificationProgress
In the context of a descriptor import during assumeutxo background sync, the progress cannot be estimated because m_chain_tx_count is set to 0.
2025-01-05 17:28:34 +01:00
Fabian Jahr
42d5d53363
interfaces: Add helper function for wallet on pruning 2025-01-05 17:28:19 +01:00
Martin Zumsande
5e1ff82251 test: Add functional test for stalling at tip
When at the tip, the near-tip stalling still applies, but
it will interact with compact block requests. Add tests for this.
2024-10-11 14:09:06 -04:00
Martin Zumsande
9019c08e4e p2p: Add additional peers to download from when close to the tip
If we are 1024 or less blocks away from the tip and haven't requested or received
a block from any peer for 30 seconds, add another peer to download the critical
block from. Add up to two additional peers this way.

Also adds test for the new behavior (co-authored by Greg Sanders)

Co-authored-by: Greg Sanders <gsanders87@gmail.com>
2024-10-11 14:09:06 -04:00
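Restated as a hedged Python sketch (illustration only; the actual change is the C++ diff to net_processing further down), the rule for requesting the block from an extra peer is:

```python
NEARTIP_WINDOW = 1024       # blocks left to download
NEARTIP_TIMEOUT = 30        # seconds (BLOCK_NEARTIP_TIMEOUT_MAX)
MAX_INFLIGHT_FOR_BLOCK = 3  # original request plus up to two extra peers

def should_request_from_extra_peer(now, last_request, last_receive, blocks_left,
                                   already_asked_this_peer, requests_in_flight):
    """Decide whether to ask one more peer for a block we are still waiting on."""
    return (blocks_left <= NEARTIP_WINDOW
            and last_request > 0
            and now > last_request + NEARTIP_TIMEOUT
            and now > last_receive + NEARTIP_TIMEOUT
            and not already_asked_this_peer
            and requests_in_flight < MAX_INFLIGHT_FOR_BLOCK)
```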
Martin Zumsande
c3d98815cc test: add sub-test for near-tip stalling 2024-10-11 14:08:59 -04:00
Martin Zumsande
75668d079c test: make p2p_ibd_stalling.py more modular
This is in preparation to add more subtests.
Also adjust waiting condition to replace the
total_bytes_recv_for_blocks magical number hack.
2024-10-11 14:01:05 -04:00
26 changed files with 374 additions and 117 deletions

View file

@@ -0,0 +1,11 @@
Updated RPCs
---
- the `getblocktemplate` RPC `curtime` (BIP22) and `mintime` (BIP23) fields now
account for the timewarp fix proposed in BIP94 on all networks. This ensures
that, in the event a timewarp fix softfork activates on mainnet, un-upgraded
miners will not accidentally violate the timewarp rule. (#31376, #31600)
As a reminder, it's important that any software which uses the `getblocktemplate`
RPC takes these values into account (either `curtime` or `mintime` is fine).
Relying only on a clock can lead to invalid blocks under some circumstances,
especially once a timewarp fix is deployed.

View file

@@ -106,6 +106,7 @@ int main(int argc, char* argv[])
     };
     auto notifications = std::make_unique<KernelNotifications>();
+    kernel::CacheSizes cache_sizes{DEFAULT_KERNEL_CACHE};
     // SETUP: Chainstate
     auto chainparams = CChainParams::Main();
@@ -119,11 +120,14 @@ int main(int argc, char* argv[])
         .chainparams = chainman_opts.chainparams,
         .blocks_dir = abs_datadir / "blocks",
         .notifications = chainman_opts.notifications,
+        .block_tree_db_params = DBParams{
+            .path = abs_datadir / "blocks" / "index",
+            .cache_bytes = cache_sizes.block_tree_db,
+        },
     };
     util::SignalInterrupt interrupt;
     ChainstateManager chainman{interrupt, chainman_opts, blockman_opts};
-    kernel::CacheSizes cache_sizes{DEFAULT_KERNEL_CACHE};
     node::ChainstateLoadOptions options;
     auto [status, error] = node::LoadChainstate(chainman, cache_sizes, options);
     if (status != node::ChainstateLoadStatus::SUCCESS) {

View file

@@ -1057,6 +1057,10 @@ bool AppInitParameterInteraction(const ArgsManager& args)
         .chainparams = chainman_opts_dummy.chainparams,
         .blocks_dir = args.GetBlocksDirPath(),
         .notifications = chainman_opts_dummy.notifications,
+        .block_tree_db_params = DBParams{
+            .path = args.GetDataDirNet() / "blocks" / "index",
+            .cache_bytes = 0,
+        },
     };
     auto blockman_result{ApplyArgsManOptions(args, blockman_opts_dummy)};
     if (!blockman_result) {
@@ -1203,18 +1207,33 @@ static ChainstateLoadResult InitAndLoadChainstate(
         .signals = node.validation_signals.get(),
     };
     Assert(ApplyArgsManOptions(args, chainman_opts)); // no error can happen, already checked in AppInitParameterInteraction
     BlockManager::Options blockman_opts{
         .chainparams = chainman_opts.chainparams,
         .blocks_dir = args.GetBlocksDirPath(),
         .notifications = chainman_opts.notifications,
+        .block_tree_db_params = DBParams{
+            .path = args.GetDataDirNet() / "blocks" / "index",
+            .cache_bytes = cache_sizes.block_tree_db,
+            .wipe_data = do_reindex,
+        },
     };
     Assert(ApplyArgsManOptions(args, blockman_opts)); // no error can happen, already checked in AppInitParameterInteraction
+    // Creating the chainstate manager internally creates a BlockManager, opens
+    // the blocks tree db, and wipes existing block files in case of a reindex.
+    // The coinsdb is opened at a later point on LoadChainstate.
     try {
         node.chainman = std::make_unique<ChainstateManager>(*Assert(node.shutdown_signal), chainman_opts, blockman_opts);
+    } catch (dbwrapper_error& e) {
+        LogError("%s", e.what());
+        return {ChainstateLoadStatus::FAILURE, _("Error opening block database")};
     } catch (std::exception& e) {
         return {ChainstateLoadStatus::FAILURE_FATAL, Untranslated(strprintf("Failed to initialize ChainstateManager: %s", e.what()))};
     }
     ChainstateManager& chainman = *node.chainman;
+    if (chainman.m_interrupt) return {ChainstateLoadStatus::INTERRUPTED, {}};
     // This is defined and set here instead of inline in validation.h to avoid a hard
     // dependency between validation and index/base, since the latter is not in
     // libbitcoinkernel.
@@ -1237,7 +1256,6 @@ static ChainstateLoadResult InitAndLoadChainstate(
     };
     node::ChainstateLoadOptions options;
     options.mempool = Assert(node.mempool.get());
-    options.wipe_block_tree_db = do_reindex;
     options.wipe_chainstate_db = do_reindex || do_reindex_chainstate;
     options.prune = chainman.m_blockman.IsPruneMode();
     options.check_blocks = args.GetIntArg("-checkblocks", DEFAULT_CHECKBLOCKS);

View file

@@ -289,6 +289,9 @@ public:
     //! Check if any block has been pruned.
     virtual bool havePruned() = 0;
+    //! Get the current prune height.
+    virtual std::optional<int> getPruneHeight() = 0;
     //! Check if the node is ready to broadcast transactions.
     virtual bool isReadyToBroadcast() = 0;

View file

@@ -5,6 +5,7 @@
 #ifndef BITCOIN_KERNEL_BLOCKMANAGER_OPTS_H
 #define BITCOIN_KERNEL_BLOCKMANAGER_OPTS_H
+#include <dbwrapper.h>
 #include <kernel/notifications_interface.h>
 #include <util/fs.h>
@@ -27,6 +28,7 @@ struct BlockManagerOpts {
     bool fast_prune{false};
     const fs::path blocks_dir;
     Notifications& notifications;
+    DBParams block_tree_db_params;
 };
 } // namespace kernel

View file

@@ -42,7 +42,6 @@ struct ChainstateManagerOpts {
     std::optional<uint256> assumed_valid_block{};
     //! If the tip is older than this, the node is considered to be in initial block download.
     std::chrono::seconds max_tip_age{DEFAULT_MAX_TIP_AGE};
-    DBOptions block_tree_db{};
     DBOptions coins_db{};
     CoinsViewOptions coins_view{};
     Notifications& notifications;

View file

@@ -100,6 +100,8 @@ static const int MAX_BLOCKS_IN_TRANSIT_PER_PEER = 16;
 static constexpr auto BLOCK_STALLING_TIMEOUT_DEFAULT{2s};
 /** Maximum timeout for stalling block download. */
 static constexpr auto BLOCK_STALLING_TIMEOUT_MAX{64s};
+/** Timeout for stalling when close to the tip, after which we may add additional peers to download from */
+static constexpr auto BLOCK_NEARTIP_TIMEOUT_MAX{30s};
 /** Maximum depth of blocks we're willing to serve as compact blocks to peers
  * when requested. For older blocks, a regular BLOCK response will be sent. */
 static const int MAX_CMPCTBLOCK_DEPTH = 5;
@@ -746,7 +748,10 @@ private:
     std::atomic<int> m_best_height{-1};
     /** The time of the best chain tip block */
     std::atomic<std::chrono::seconds> m_best_block_time{0s};
+    /** The last time we requested a block from any peer */
+    std::atomic<std::chrono::seconds> m_last_block_requested{0s};
+    /** The last time we received a block from any peer */
+    std::atomic<std::chrono::seconds> m_last_block_received{0s};
     /** Next time to check for stale tip */
     std::chrono::seconds m_stale_tip_check_time GUARDED_BY(cs_main){0s};
@@ -1213,6 +1218,7 @@ bool PeerManagerImpl::BlockRequested(NodeId nodeid, const CBlockIndex& block, st
     if (pit) {
         *pit = &itInFlight->second.second;
     }
+    m_last_block_requested = GetTime<std::chrono::seconds>();
     return true;
 }
@@ -1461,6 +1467,30 @@ void PeerManagerImpl::FindNextBlocks(std::vector<const CBlockIndex*>& vBlocks, c
             if (waitingfor == -1) {
                 // This is the first already-in-flight block.
                 waitingfor = mapBlocksInFlight.lower_bound(pindex->GetBlockHash())->second.first;
+                // Decide whether to request this block from additional peers in parallel.
+                // This is done if we are close (<=1024 blocks) from the tip, so that the usual
+                // stalling mechanism doesn't work. To reduce excessive waste of bandwith, do this only
+                // 30 seconds (BLOCK_NEARTIP_TIMEOUT_MAX) after a block was requested or received from any peer,
+                // and only with up to 3 peers in parallel.
+                bool already_requested_from_peer{false};
+                auto range{mapBlocksInFlight.equal_range(pindex->GetBlockHash())};
+                while (range.first != range.second) {
+                    if (range.first->second.first == peer.m_id) {
+                        already_requested_from_peer = true;
+                        break;
+                    }
+                    range.first++;
+                }
+                if (nMaxHeight <= nWindowEnd && // we have 1024 or less blocks left to download
+                    m_last_block_requested.load() > 0s &&
+                    GetTime<std::chrono::microseconds>() > m_last_block_requested.load() + BLOCK_NEARTIP_TIMEOUT_MAX &&
+                    GetTime<std::chrono::microseconds>() > m_last_block_received.load() + BLOCK_NEARTIP_TIMEOUT_MAX &&
+                    !already_requested_from_peer &&
+                    mapBlocksInFlight.count(pindex->GetBlockHash()) <= 2) {
+                    LogDebug(BCLog::NET, "Possible stalling close to tip: Requesting block %s additionally from peer %d\n", pindex->GetBlockHash().ToString(), peer.m_id);
+                    vBlocks.push_back(pindex);
+                }
             }
             continue;
         }
@@ -3269,6 +3299,7 @@ void PeerManagerImpl::ProcessBlock(CNode& node, const std::shared_ptr<const CBlo
     m_chainman.ProcessNewBlock(block, force_processing, min_pow_checked, &new_block);
     if (new_block) {
         node.m_last_block_time = GetTime<std::chrono::seconds>();
+        m_last_block_received = GetTime<std::chrono::seconds>();
         // In case this block came from a different peer than we requested
         // from, we can erase the block request now anyway (as we just stored
         // this block to disk).

View file

@@ -6,6 +6,7 @@
 #include <common/args.h>
 #include <node/blockstorage.h>
+#include <node/database_args.h>
 #include <tinyformat.h>
 #include <util/result.h>
 #include <util/translation.h>
@@ -34,6 +35,8 @@ util::Result<void> ApplyArgsManOptions(const ArgsManager& args, BlockManager::Op
     if (auto value{args.GetBoolArg("-fastprune")}) opts.fast_prune = *value;
+    ReadDatabaseArgs(args, opts.block_tree_db_params.options);
     return {};
 }
 } // namespace node

View file

@@ -36,6 +36,7 @@
 #include <util/translation.h>
 #include <validation.h>
+#include <cstddef>
 #include <map>
 #include <ranges>
 #include <unordered_map>
@@ -1169,7 +1170,19 @@ BlockManager::BlockManager(const util::SignalInterrupt& interrupt, Options opts)
       m_opts{std::move(opts)},
       m_block_file_seq{FlatFileSeq{m_opts.blocks_dir, "blk", m_opts.fast_prune ? 0x4000 /* 16kB */ : BLOCKFILE_CHUNK_SIZE}},
       m_undo_file_seq{FlatFileSeq{m_opts.blocks_dir, "rev", UNDOFILE_CHUNK_SIZE}},
-      m_interrupt{interrupt} {}
+      m_interrupt{interrupt}
+{
+    m_block_tree_db = std::make_unique<BlockTreeDB>(m_opts.block_tree_db_params);
+    if (m_opts.block_tree_db_params.wipe_data) {
+        m_block_tree_db->WriteReindexing(true);
+        m_blockfiles_indexed = false;
+        // If we're reindexing in prune mode, wipe away unusable block files and all undo data files
+        if (m_prune_mode) {
+            CleanupBlockRevFiles();
+        }
+    }
+}
 class ImportingNow
 {

View file

@@ -23,10 +23,7 @@
 #include <validation.h>
 #include <algorithm>
-#include <atomic>
 #include <cassert>
-#include <limits>
-#include <memory>
 #include <vector>
 using kernel::CacheSizes;
@@ -36,34 +33,8 @@ namespace node {
 // to ChainstateManager::InitializeChainstate().
 static ChainstateLoadResult CompleteChainstateInitialization(
     ChainstateManager& chainman,
-    const CacheSizes& cache_sizes,
     const ChainstateLoadOptions& options) EXCLUSIVE_LOCKS_REQUIRED(::cs_main)
 {
-    auto& pblocktree{chainman.m_blockman.m_block_tree_db};
-    // new BlockTreeDB tries to delete the existing file, which
-    // fails if it's still open from the previous loop. Close it first:
-    pblocktree.reset();
-    try {
-        pblocktree = std::make_unique<BlockTreeDB>(DBParams{
-            .path = chainman.m_options.datadir / "blocks" / "index",
-            .cache_bytes = cache_sizes.block_tree_db,
-            .memory_only = options.block_tree_db_in_memory,
-            .wipe_data = options.wipe_block_tree_db,
-            .options = chainman.m_options.block_tree_db});
-    } catch (dbwrapper_error& err) {
-        LogError("%s\n", err.what());
-        return {ChainstateLoadStatus::FAILURE, _("Error opening block database")};
-    }
-    if (options.wipe_block_tree_db) {
-        pblocktree->WriteReindexing(true);
-        chainman.m_blockman.m_blockfiles_indexed = false;
-        //If we're reindexing in prune mode, wipe away unusable block files and all undo data files
-        if (options.prune) {
-            chainman.m_blockman.CleanupBlockRevFiles();
-        }
-    }
     if (chainman.m_interrupt) return {ChainstateLoadStatus::INTERRUPTED, {}};
     // LoadBlockIndex will load m_have_pruned if we've ever removed a
@@ -155,14 +126,12 @@ static ChainstateLoadResult CompleteChainstateInitialization(
         }
     }
-    if (!options.wipe_block_tree_db) {
     auto chainstates{chainman.GetAll()};
     if (std::any_of(chainstates.begin(), chainstates.end(),
                     [](const Chainstate* cs) EXCLUSIVE_LOCKS_REQUIRED(cs_main) { return cs->NeedsRedownload(); })) {
         return {ChainstateLoadStatus::FAILURE, strprintf(_("Witness data for blocks after height %d requires validation. Please restart with -reindex."),
                                                          chainman.GetConsensus().SegwitHeight)};
     };
-    }
     // Now that chainstates are loaded and we're able to flush to
     // disk, rebalance the coins caches to desired levels based
@@ -208,7 +177,7 @@ ChainstateLoadResult LoadChainstate(ChainstateManager& chainman, const CacheSize
         }
     }
-    auto [init_status, init_error] = CompleteChainstateInitialization(chainman, cache_sizes, options);
+    auto [init_status, init_error] = CompleteChainstateInitialization(chainman, options);
     if (init_status != ChainstateLoadStatus::SUCCESS) {
         return {init_status, init_error};
     }
@@ -244,7 +213,7 @@ ChainstateLoadResult LoadChainstate(ChainstateManager& chainman, const CacheSize
     // for the fully validated chainstate.
     chainman.ActiveChainstate().ClearBlockIndexCandidates();
-    auto [init_status, init_error] = CompleteChainstateInitialization(chainman, cache_sizes, options);
+    auto [init_status, init_error] = CompleteChainstateInitialization(chainman, options);
     if (init_status != ChainstateLoadStatus::SUCCESS) {
         return {init_status, init_error};
     }

View file

@@ -22,12 +22,7 @@ namespace node {
 struct ChainstateLoadOptions {
     CTxMemPool* mempool{nullptr};
-    bool block_tree_db_in_memory{false};
     bool coins_db_in_memory{false};
-    // Whether to wipe the block tree database when loading it. If set, this
-    // will also set a reindexing flag so any existing block data files will be
-    // scanned and added to the database.
-    bool wipe_block_tree_db{false};
     // Whether to wipe the chainstate database when loading it. If set, this
     // will cause the chainstate database to be rebuilt starting from genesis.
     bool wipe_chainstate_db{false};

View file

@@ -49,7 +49,6 @@ util::Result<void> ApplyArgsManOptions(const ArgsManager& args, ChainstateManage
     if (auto value{args.GetIntArg("-maxtipage")}) opts.max_tip_age = std::chrono::seconds{*value};
-    ReadDatabaseArgs(args, opts.block_tree_db);
     ReadDatabaseArgs(args, opts.coins_db);
     ReadCoinsViewArgs(args, opts.coins_view);

View file

@@ -46,6 +46,7 @@
 #include <policy/settings.h>
 #include <primitives/block.h>
 #include <primitives/transaction.h>
+#include <rpc/blockchain.h>
 #include <rpc/protocol.h>
 #include <rpc/server.h>
 #include <support/allocators/secure.h>
@@ -770,6 +771,11 @@ public:
         LOCK(::cs_main);
         return chainman().m_blockman.m_have_pruned;
     }
+    std::optional<int> getPruneHeight() override
+    {
+        LOCK(chainman().GetMutex());
+        return GetPruneHeight(chainman().m_blockman, chainman().ActiveChain());
+    }
     bool isReadyToBroadcast() override { return !chainman().m_blockman.LoadingBlocks() && !isInitialBlockDownload(); }
     bool isInitialBlockDownload() override
     {

View file

@@ -28,16 +28,25 @@
 #include <utility>
 namespace node {
+int64_t GetMinimumTime(const CBlockIndex* pindexPrev, const int64_t difficulty_adjustment_interval)
+{
+    int64_t min_time{pindexPrev->GetMedianTimePast() + 1};
+    // Height of block to be mined.
+    const int height{pindexPrev->nHeight + 1};
+    // Account for BIP94 timewarp rule on all networks. This makes future
+    // activation safer.
+    if (height % difficulty_adjustment_interval == 0) {
+        min_time = std::max<int64_t>(min_time, pindexPrev->GetBlockTime() - MAX_TIMEWARP);
+    }
+    return min_time;
+}
 int64_t UpdateTime(CBlockHeader* pblock, const Consensus::Params& consensusParams, const CBlockIndex* pindexPrev)
 {
     int64_t nOldTime = pblock->nTime;
-    int64_t nNewTime{std::max<int64_t>(pindexPrev->GetMedianTimePast() + 1, TicksSinceEpoch<std::chrono::seconds>(NodeClock::now()))};
-    // Height of block to be mined.
-    const int height{pindexPrev->nHeight + 1};
-    if (height % consensusParams.DifficultyAdjustmentInterval() == 0) {
-        nNewTime = std::max<int64_t>(nNewTime, pindexPrev->GetBlockTime() - MAX_TIMEWARP);
-    }
+    int64_t nNewTime{std::max<int64_t>(GetMinimumTime(pindexPrev, consensusParams.DifficultyAdjustmentInterval()),
+                                       TicksSinceEpoch<std::chrono::seconds>(NodeClock::now()))};
     if (nOldTime < nNewTime) {
         pblock->nTime = nNewTime;

View file

@@ -211,6 +211,13 @@ private:
     void SortForBlock(const CTxMemPool::setEntries& package, std::vector<CTxMemPool::txiter>& sortedEntries);
 };
+/**
+ * Get the minimum time a miner should use in the next block. This always
+ * accounts for the BIP94 timewarp rule, so does not necessarily reflect the
+ * consensus limit.
+ */
+int64_t GetMinimumTime(const CBlockIndex* pindexPrev, const int64_t difficulty_adjustment_interval);
 int64_t UpdateTime(CBlockHeader* pblock, const Consensus::Params& consensusParams, const CBlockIndex* pindexPrev);
 /** Update an old GenerateCoinbaseCommitment from CreateNewBlock after the block txs have changed */

View file

@@ -49,6 +49,7 @@
 using interfaces::BlockTemplate;
 using interfaces::Mining;
 using node::BlockAssembler;
+using node::GetMinimumTime;
 using node::NodeContext;
 using node::RegenerateCommitments;
 using node::UpdateTime;
@@ -674,7 +675,7 @@ static RPCHelpMan getblocktemplate()
                 {RPCResult::Type::NUM, "coinbasevalue", "maximum allowable input to coinbase transaction, including the generation award and transaction fees (in satoshis)"},
                 {RPCResult::Type::STR, "longpollid", "an id to include with a request to longpoll on an update to this template"},
                 {RPCResult::Type::STR, "target", "The hash target"},
-                {RPCResult::Type::NUM_TIME, "mintime", "The minimum timestamp appropriate for the next block time, expressed in " + UNIX_EPOCH_TIME},
+                {RPCResult::Type::NUM_TIME, "mintime", "The minimum timestamp appropriate for the next block time, expressed in " + UNIX_EPOCH_TIME + ". Adjusted for the proposed BIP94 timewarp rule."},
                 {RPCResult::Type::ARR, "mutable", "list of ways the block template may be changed",
                 {
                     {RPCResult::Type::STR, "value", "A way the block template may be changed, e.g. 'time', 'transactions', 'prevblock'"},
@@ -683,7 +684,7 @@ static RPCHelpMan getblocktemplate()
                 {RPCResult::Type::NUM, "sigoplimit", "limit of sigops in blocks"},
                 {RPCResult::Type::NUM, "sizelimit", "limit of block size"},
                 {RPCResult::Type::NUM, "weightlimit", /*optional=*/true, "limit of block weight"},
-                {RPCResult::Type::NUM_TIME, "curtime", "current timestamp in " + UNIX_EPOCH_TIME},
+                {RPCResult::Type::NUM_TIME, "curtime", "current timestamp in " + UNIX_EPOCH_TIME + ". Adjusted for the proposed BIP94 timewarp rule."},
                 {RPCResult::Type::STR, "bits", "compressed target of next block"},
                 {RPCResult::Type::NUM, "height", "The height of the next block"},
                 {RPCResult::Type::STR_HEX, "signet_challenge", /*optional=*/true, "Only on signet"},
@@ -977,7 +978,7 @@ static RPCHelpMan getblocktemplate()
     result.pushKV("coinbasevalue", (int64_t)block.vtx[0]->vout[0].nValue);
     result.pushKV("longpollid", tip.GetHex() + ToString(nTransactionsUpdatedLast));
     result.pushKV("target", hashTarget.GetHex());
-    result.pushKV("mintime", (int64_t)pindexPrev->GetMedianTimePast()+1);
+    result.pushKV("mintime", GetMinimumTime(pindexPrev, consensusParams.DifficultyAdjustmentInterval()));
     result.pushKV("mutable", std::move(aMutable));
     result.pushKV("noncerange", "00000000ffffffff");
     int64_t nSigOpLimit = MAX_BLOCK_SIGOPS_COST;

View file

@@ -33,6 +33,10 @@ BOOST_AUTO_TEST_CASE(blockmanager_find_block_pos)
         .chainparams = *params,
         .blocks_dir = m_args.GetBlocksDirPath(),
         .notifications = notifications,
+        .block_tree_db_params = DBParams{
+            .path = m_args.GetDataDirNet() / "blocks" / "index",
+            .cache_bytes = 0,
+        },
     };
     BlockManager blockman{*Assert(m_node.shutdown_signal), blockman_opts};
     // simulate adding a genesis block normally
@@ -140,6 +144,10 @@ BOOST_AUTO_TEST_CASE(blockmanager_flush_block_file)
         .chainparams = Params(),
         .blocks_dir = m_args.GetBlocksDirPath(),
         .notifications = notifications,
+        .block_tree_db_params = DBParams{
+            .path = m_args.GetDataDirNet() / "blocks" / "index",
+            .cache_bytes = 0,
+        },
     };
     BlockManager blockman{*Assert(m_node.shutdown_signal), blockman_opts};

View file

@@ -62,7 +62,6 @@
 #include <stdexcept>
 using namespace util::hex_literals;
-using kernel::BlockTreeDB;
 using node::ApplyArgsManOptions;
 using node::BlockAssembler;
 using node::BlockManager;
@@ -252,14 +251,14 @@ ChainTestingSetup::ChainTestingSetup(const ChainType chainType, TestOpts opts)
             .chainparams = chainman_opts.chainparams,
            .blocks_dir = m_args.GetBlocksDirPath(),
            .notifications = chainman_opts.notifications,
-        };
-        m_node.chainman = std::make_unique<ChainstateManager>(*Assert(m_node.shutdown_signal), chainman_opts, blockman_opts);
-        LOCK(m_node.chainman->GetMutex());
-        m_node.chainman->m_blockman.m_block_tree_db = std::make_unique<BlockTreeDB>(DBParams{
+            .block_tree_db_params = DBParams{
                 .path = m_args.GetDataDirNet() / "blocks" / "index",
                 .cache_bytes = m_kernel_cache_sizes.block_tree_db,
-                .memory_only = true,
-            });
+                .memory_only = opts.block_tree_db_in_memory,
+                .wipe_data = m_args.GetBoolArg("-reindex", false),
+            },
+        };
+        m_node.chainman = std::make_unique<ChainstateManager>(*Assert(m_node.shutdown_signal), chainman_opts, blockman_opts);
     };
     m_make_chainman();
 }
@@ -285,9 +284,7 @@ void ChainTestingSetup::LoadVerifyActivateChainstate()
     auto& chainman{*Assert(m_node.chainman)};
     node::ChainstateLoadOptions options;
     options.mempool = Assert(m_node.mempool.get());
-    options.block_tree_db_in_memory = m_block_tree_db_in_memory;
     options.coins_db_in_memory = m_coins_db_in_memory;
-    options.wipe_block_tree_db = m_args.GetBoolArg("-reindex", false);
     options.wipe_chainstate_db = m_args.GetBoolArg("-reindex", false) || m_args.GetBoolArg("-reindex-chainstate", false);
     options.prune = chainman.m_blockman.IsPruneMode();
     options.check_blocks = m_args.GetIntArg("-checkblocks", DEFAULT_CHECKBLOCKS);

View file

@@ -393,6 +393,11 @@ struct SnapshotTestSetup : TestChain100Setup {
             .chainparams = chainman_opts.chainparams,
             .blocks_dir = m_args.GetBlocksDirPath(),
             .notifications = chainman_opts.notifications,
+            .block_tree_db_params = DBParams{
+                .path = chainman.m_options.datadir / "blocks" / "index",
+                .cache_bytes = m_kernel_cache_sizes.block_tree_db,
+                .memory_only = m_block_tree_db_in_memory,
+            },
         };
         // For robustness, ensure the old manager is destroyed before creating a
         // new one.

View file

@@ -5623,9 +5623,8 @@ double ChainstateManager::GuessVerificationProgress(const CBlockIndex* pindex) c
         return 0.0;
     }
-    if (!Assume(pindex->m_chain_tx_count > 0)) {
-        LogWarning("Internal bug detected: block %d has unset m_chain_tx_count (%s %s). Please report this issue here: %s\n",
-                   pindex->nHeight, CLIENT_NAME, FormatFullVersion(), CLIENT_BUGREPORT);
+    if (pindex->m_chain_tx_count == 0) {
+        LogDebug(BCLog::VALIDATION, "Block %d has unset m_chain_tx_count. Unable to estimate verification progress.\n", pindex->nHeight);
         return 0.0;
     }

View file

@@ -1745,20 +1745,27 @@ RPCHelpMan importdescriptors()
                 if (scanned_time <= GetImportTimestamp(request, now) || results.at(i).exists("error")) {
                     response.push_back(results.at(i));
                 } else {
+                    std::string error_msg{strprintf("Rescan failed for descriptor with timestamp %d. There "
+                        "was an error reading a block from time %d, which is after or within %d seconds "
+                        "of key creation, and could contain transactions pertaining to the desc. As a "
+                        "result, transactions and coins using this desc may not appear in the wallet.",
+                        GetImportTimestamp(request, now), scanned_time - TIMESTAMP_WINDOW - 1, TIMESTAMP_WINDOW)};
+                    if (pwallet->chain().havePruned()) {
+                        error_msg += strprintf(" This error could be caused by pruning or data corruption "
+                            "(see bitcoind log for details) and could be dealt with by downloading and "
+                            "rescanning the relevant blocks (see -reindex option and rescanblockchain RPC).");
+                    } else if (pwallet->chain().hasAssumedValidChain()) {
+                        error_msg += strprintf(" This error is likely caused by an in-progress assumeutxo "
+                            "background sync. Check logs or getchainstates RPC for assumeutxo background "
+                            "sync progress and try again later.");
+                    } else {
+                        error_msg += strprintf(" This error could potentially caused by data corruption. If "
+                            "the issue persists you may want to reindex (see -reindex option).");
+                    }
                     UniValue result = UniValue(UniValue::VOBJ);
                     result.pushKV("success", UniValue(false));
-                    result.pushKV(
-                        "error",
-                        JSONRPCError(
-                            RPC_MISC_ERROR,
-                            strprintf("Rescan failed for descriptor with timestamp %d. There was an error reading a "
-                                      "block from time %d, which is after or within %d seconds of key creation, and "
-                                      "could contain transactions pertaining to the desc. As a result, transactions "
-                                      "and coins using this desc may not appear in the wallet. This error could be "
-                                      "caused by pruning or data corruption (see bitcoind log for details) and could "
-                                      "be dealt with by downloading and rescanning the relevant blocks (see -reindex "
-                                      "option and rescanblockchain RPC).",
-                                      GetImportTimestamp(request, now), scanned_time - TIMESTAMP_WINDOW - 1, TIMESTAMP_WINDOW)));
+                    result.pushKV("error", JSONRPCError(RPC_MISC_ERROR, error_msg));
                     response.push_back(std::move(result));
                 }
             }

View file

@@ -6,6 +6,7 @@
 #include <key_io.h>
 #include <policy/rbf.h>
 #include <rpc/util.h>
+#include <rpc/blockchain.h>
 #include <util/vector.h>
 #include <wallet/receive.h>
 #include <wallet/rpc/util.h>
@@ -909,10 +910,16 @@ RPCHelpMan rescanblockchain()
             }
         }
-        // We can't rescan beyond non-pruned blocks, stop and throw an error
+        // We can't rescan unavailable blocks, stop and throw an error
         if (!pwallet->chain().hasBlocks(pwallet->GetLastBlockHash(), start_height, stop_height)) {
+            if (pwallet->chain().havePruned() && pwallet->chain().getPruneHeight() >= start_height) {
                 throw JSONRPCError(RPC_MISC_ERROR, "Can't rescan beyond pruned data. Use RPC call getblockchaininfo to determine your pruned height.");
             }
+            if (pwallet->chain().hasAssumedValidChain()) {
+                throw JSONRPCError(RPC_MISC_ERROR, "Failed to rescan unavailable blocks likely due to an in-progress assumeutxo background sync. Check logs or getchainstates RPC for assumeutxo background sync progress and try again later.");
+            }
+            throw JSONRPCError(RPC_MISC_ERROR, "Failed to rescan unavailable blocks, potentially caused by data corruption. If the issue persists you may want to reindex (see -reindex option).");
+        }
         CHECK_NONFATAL(pwallet->chain().findAncestorByHeight(pwallet->GetLastBlockHash(), start_height, FoundBlock().hash(start_block)));
     }

View file

@@ -153,6 +153,8 @@ class MiningTest(BitcoinTestFramework):
         # The template will have an adjusted timestamp, which we then modify
         tmpl = node.getblocktemplate(NORMAL_GBT_REQUEST_PARAMS)
         assert_greater_than_or_equal(tmpl['curtime'], t + MAX_FUTURE_BLOCK_TIME - MAX_TIMEWARP)
+        # mintime and curtime should match
+        assert_equal(tmpl['mintime'], tmpl['curtime'])
         block = CBlock()
         block.nVersion = tmpl["version"]

View file

@ -13,14 +13,26 @@ from test_framework.blocktools import (
create_coinbase create_coinbase
) )
from test_framework.messages import ( from test_framework.messages import (
COutPoint,
CTransaction,
CTxIn,
CTxOut,
HeaderAndShortIDs,
MSG_BLOCK, MSG_BLOCK,
MSG_TYPE_MASK, MSG_TYPE_MASK,
msg_cmpctblock,
msg_sendcmpct,
)
from test_framework.script import (
CScript,
OP_TRUE,
) )
from test_framework.p2p import ( from test_framework.p2p import (
CBlockHeader, CBlockHeader,
msg_block, msg_block,
msg_headers, msg_headers,
P2PDataStore, P2PDataStore,
p2p_lock,
) )
from test_framework.test_framework import BitcoinTestFramework from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import ( from test_framework.util import (
@ -31,6 +43,7 @@ from test_framework.util import (
class P2PStaller(P2PDataStore): class P2PStaller(P2PDataStore):
def __init__(self, stall_block): def __init__(self, stall_block):
self.stall_block = stall_block self.stall_block = stall_block
self.stall_block_requested = False
super().__init__() super().__init__()
def on_getdata(self, message): def on_getdata(self, message):
@ -39,6 +52,8 @@ class P2PStaller(P2PDataStore):
if (inv.type & MSG_TYPE_MASK) == MSG_BLOCK: if (inv.type & MSG_TYPE_MASK) == MSG_BLOCK:
if (inv.hash != self.stall_block): if (inv.hash != self.stall_block):
self.send_message(msg_block(self.block_store[inv.hash])) self.send_message(msg_block(self.block_store[inv.hash]))
else:
self.stall_block_requested = True
def on_getheaders(self, message): def on_getheaders(self, message):
pass pass
@ -47,44 +62,50 @@ class P2PStaller(P2PDataStore):
class P2PIBDStallingTest(BitcoinTestFramework): class P2PIBDStallingTest(BitcoinTestFramework):
def set_test_params(self): def set_test_params(self):
self.setup_clean_chain = True self.setup_clean_chain = True
self.num_nodes = 1 self.num_nodes = 3
def setup_network(self):
self.setup_nodes()
# Don't connect the nodes
def prepare_blocks(self):
self.log.info("Prepare blocks without sending them to any node")
self.NUM_BLOCKS = 1025
self.block_dict = {}
self.blocks = []
def run_test(self):
NUM_BLOCKS = 1025
NUM_PEERS = 4
node = self.nodes[0] node = self.nodes[0]
tip = int(node.getbestblockhash(), 16) tip = int(node.getbestblockhash(), 16)
blocks = []
height = 1 height = 1
block_time = node.getblock(node.getbestblockhash())['time'] + 1 block_time = int(time.time())
self.log.info("Prepare blocks without sending them to the node") for _ in range(self.NUM_BLOCKS):
block_dict = {} self.blocks.append(create_block(tip, create_coinbase(height), block_time))
for _ in range(NUM_BLOCKS): self.blocks[-1].solve()
blocks.append(create_block(tip, create_coinbase(height), block_time)) tip = self.blocks[-1].sha256
blocks[-1].solve()
tip = blocks[-1].sha256
block_time += 1 block_time += 1
height += 1 height += 1
block_dict[blocks[-1].sha256] = blocks[-1] self.block_dict[self.blocks[-1].sha256] = self.blocks[-1]
stall_block = blocks[0].sha256
def ibd_stalling(self):
NUM_PEERS = 4
stall_block = self.blocks[0].sha256
node = self.nodes[0]
headers_message = msg_headers() headers_message = msg_headers()
headers_message.headers = [CBlockHeader(b) for b in blocks[:NUM_BLOCKS-1]] headers_message.headers = [CBlockHeader(b) for b in self.blocks[:self.NUM_BLOCKS-1]]
peers = [] peers = []
self.log.info("Part 1: Test stalling during IBD")
self.log.info("Check that a staller does not get disconnected if the 1024 block lookahead buffer is filled") self.log.info("Check that a staller does not get disconnected if the 1024 block lookahead buffer is filled")
self.mocktime = int(time.time()) + 1 self.mocktime = int(time.time()) + 1
node.setmocktime(self.mocktime) node.setmocktime(self.mocktime)
for id in range(NUM_PEERS): for id in range(NUM_PEERS):
peers.append(node.add_outbound_p2p_connection(P2PStaller(stall_block), p2p_idx=id, connection_type="outbound-full-relay")) peers.append(node.add_outbound_p2p_connection(P2PStaller(stall_block), p2p_idx=id, connection_type="outbound-full-relay"))
peers[-1].block_store = block_dict peers[-1].block_store = self.block_dict
peers[-1].send_message(headers_message) peers[-1].send_message(headers_message)
# Need to wait until 1023 blocks are received - the magic total bytes number is a workaround in lack of an rpc # Wait until all blocks are received (except for stall_block), so that no other blocks are in flight.
# returning the number of downloaded (but not connected) blocks. self.wait_until(lambda: sum(len(peer['inflight']) for peer in node.getpeerinfo()) == 1)
bytes_recv = 172761 if not self.options.v2transport else 169692
self.wait_until(lambda: self.total_bytes_recv_for_blocks() == bytes_recv)
self.all_sync_send_with_ping(peers) self.all_sync_send_with_ping(peers)
# If there was a peer marked for stalling, it would get disconnected # If there was a peer marked for stalling, it would get disconnected
self.mocktime += 3 self.mocktime += 3
@ -93,7 +114,7 @@ class P2PIBDStallingTest(BitcoinTestFramework):
assert_equal(node.num_test_p2p_connections(), NUM_PEERS) assert_equal(node.num_test_p2p_connections(), NUM_PEERS)
self.log.info("Check that increasing the window beyond 1024 blocks triggers stalling logic") self.log.info("Check that increasing the window beyond 1024 blocks triggers stalling logic")
headers_message.headers = [CBlockHeader(b) for b in blocks] headers_message.headers = [CBlockHeader(b) for b in self.blocks]
with node.assert_debug_log(expected_msgs=['Stall started']): with node.assert_debug_log(expected_msgs=['Stall started']):
for p in peers: for p in peers:
p.send_message(headers_message) p.send_message(headers_message)
@ -139,17 +160,123 @@ class P2PIBDStallingTest(BitcoinTestFramework):
with node.assert_debug_log(expected_msgs=['Decreased stalling timeout to 2 seconds']): with node.assert_debug_log(expected_msgs=['Decreased stalling timeout to 2 seconds']):
for p in peers: for p in peers:
if p.is_connected and (stall_block in p.getdata_requests): if p.is_connected and (stall_block in p.getdata_requests):
p.send_message(msg_block(block_dict[stall_block])) p.send_message(msg_block(self.block_dict[stall_block]))
self.log.info("Check that all outstanding blocks get connected") self.log.info("Check that all outstanding blocks get connected")
self.wait_until(lambda: node.getblockcount() == NUM_BLOCKS) self.wait_until(lambda: node.getblockcount() == self.NUM_BLOCKS)
def total_bytes_recv_for_blocks(self): def near_tip_stalling(self):
total = 0 node = self.nodes[1]
for info in self.nodes[0].getpeerinfo(): self.log.info("Part 3: Test stalling close to the tip")
if ("block" in info["bytesrecv_per_msg"].keys()): # only send <= 1024 headers, so that the window can't overshoot and the ibd stalling mechanism isn't triggered
total += info["bytesrecv_per_msg"]["block"] # make sure it works at different lengths
return total for header_length in [1, 10, 1024]:
peers = []
stall_block = self.blocks[0].sha256
headers_message = msg_headers()
headers_message.headers = [CBlockHeader(b) for b in self.blocks[:self.NUM_BLOCKS-1][:header_length]]
self.mocktime = int(time.time())
node.setmocktime(self.mocktime)
self.log.info(f"Add three stalling peers, sending {header_length} headers")
for id in range(4):
peers.append(node.add_outbound_p2p_connection(P2PStaller(stall_block), p2p_idx=id, connection_type="outbound-full-relay"))
peers[-1].block_store = self.block_dict
peers[-1].send_message(headers_message)
self.wait_until(lambda: sum(len(peer['inflight']) for peer in node.getpeerinfo()) == 1)
self.all_sync_send_with_ping(peers)
assert_equal(sum(peer.stall_block_requested for peer in peers), 1)
self.log.info("Check that after 30 seconds we request the block from a second peer")
self.mocktime += 31
node.setmocktime(self.mocktime)
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 2)
self.log.info("Check that after another 30 seconds we request the block from a third peer")
self.mocktime += 31
node.setmocktime(self.mocktime)
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 3)
self.log.info("Check that after another 30 seconds we aren't requesting it from a fourth peer yet")
self.mocktime += 31
node.setmocktime(self.mocktime)
self.all_sync_send_with_ping(peers)
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 3)
self.log.info("Check that after another 20 minutes, first three stalling peers are disconnected")
# 10 minutes BLOCK_DOWNLOAD_TIMEOUT_BASE + 2*5 minutes BLOCK_DOWNLOAD_TIMEOUT_PER_PEER
self.mocktime += 20 * 60
node.setmocktime(self.mocktime)
# all peers have been requested
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 4)
self.log.info("Check that after another 20 minutes, last stalling peer is disconnected")
# 10 minutes BLOCK_DOWNLOAD_TIMEOUT_BASE + 2*5 minutes BLOCK_DOWNLOAD_TIMEOUT_PER_PEER
self.mocktime += 20 * 60
node.setmocktime(self.mocktime)
for peer in peers:
peer.wait_for_disconnect()
self.log.info("Provide missing block and check that the sync succeeds")
peer = node.add_outbound_p2p_connection(P2PStaller(stall_block), p2p_idx=0, connection_type="outbound-full-relay")
peer.send_message(msg_block(self.block_dict[stall_block]))
self.wait_until(lambda: node.getblockcount() == self.NUM_BLOCKS - 1)
node.disconnect_p2ps()
def at_tip_stalling(self):
self.log.info("Test stalling and interaction with compact blocks when at tip")
node = self.nodes[2]
peers = []
# Create a block with a tx (would be invalid, but this doesn't matter since we will only ever send the header)
tx = CTransaction()
tx.vin.append(CTxIn(COutPoint(self.blocks[1].vtx[0].sha256, 0), scriptSig=b""))
tx.vout.append(CTxOut(49 * 100000000, CScript([OP_TRUE])))
tx.calc_sha256()
block_time = self.blocks[1].nTime + 1
block = create_block(self.blocks[1].sha256, create_coinbase(3), block_time, txlist=[tx])
block.solve()
for id in range(3):
peers.append(node.add_outbound_p2p_connection(P2PStaller(block.sha256), p2p_idx=id, connection_type="outbound-full-relay"))
# First Peer is a high-bw compact block peer
peers[0].send_and_ping(msg_sendcmpct(announce=True, version=2))
peers[0].block_store = self.block_dict
headers_message = msg_headers()
headers_message.headers = [CBlockHeader(b) for b in self.blocks[:2]]
peers[0].send_message(headers_message)
self.wait_until(lambda: node.getblockcount() == 2)
self.log.info("First peer announces via cmpctblock")
cmpct_block = HeaderAndShortIDs()
cmpct_block.initialize_from_block(block)
peers[0].send_and_ping(msg_cmpctblock(cmpct_block.to_p2p()))
with p2p_lock:
assert "getblocktxn" in peers[0].last_message
self.log.info("Also announce block from other peers by header")
headers_message = msg_headers()
headers_message.headers = [CBlockHeader(block)]
for peer in peers[1:4]:
peer.send_and_ping(headers_message)
self.log.info("Check that block is requested from two more header-announcing peers")
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 0)
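# at this point the block has only been announced; no getdata for the full block has gone out to any of the stalling peers yet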
self.mocktime = int(time.time()) + 31
node.setmocktime(self.mocktime)
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 1)
self.mocktime += 31
node.setmocktime(self.mocktime)
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 2)
self.log.info("Check that block is not requested from a third header-announcing peer")
self.mocktime += 31
node.setmocktime(self.mocktime)
self.wait_until(lambda: sum(peer.stall_block_requested for peer in peers) == 2)
def all_sync_send_with_ping(self, peers):
for p in peers:
@ -162,6 +289,12 @@ class P2PIBDStallingTest(BitcoinTestFramework):
return True
return False
def run_test(self):
self.prepare_blocks()
self.ibd_stalling()
self.near_tip_stalling()
self.at_tip_stalling()
if __name__ == '__main__':
P2PIBDStallingTest(__file__).main()


@ -549,6 +549,7 @@ class BlockchainTest(BitcoinTestFramework):
# The chain has probably already been restored by the time reconsiderblock returns,
# but poll anyway.
self.wait_until(lambda: node.waitfornewblock(timeout=100)['hash'] == current_hash)
assert_raises_rpc_error(-1, "Negative timeout", node.waitfornewblock, -1)
def _test_waitforblockheight(self):
self.log.info("Test waitforblockheight")


@ -7,11 +7,11 @@ See feature_assumeutxo.py for background.
## Possible test improvements
- TODO: test import descriptors while background sync is in progress
- TODO: test loading a wallet (backup) on a pruned node
"""
from test_framework.address import address_to_scriptpubkey
from test_framework.descriptors import descsum_create
from test_framework.test_framework import BitcoinTestFramework
from test_framework.messages import COIN
from test_framework.util import (
@ -20,6 +20,7 @@ from test_framework.util import (
ensure_for,
)
from test_framework.wallet import MiniWallet
from test_framework.wallet_util import get_generate_key
START_HEIGHT = 199
SNAPSHOT_BASE_HEIGHT = 299
@ -49,6 +50,13 @@ class AssumeutxoTest(BitcoinTestFramework):
self.add_nodes(3)
self.start_nodes(extra_args=self.extra_args)
def import_descriptor(self, node, wallet_name, key, timestamp):
import_request = [{"desc": descsum_create("pkh(" + key.pubkey + ")"),
"timestamp": timestamp,
"label": "Descriptor import test"}]
wrpc = node.get_wallet_rpc(wallet_name)
return wrpc.importdescriptors(import_request)
def run_test(self):
"""
Bring up two (disconnected) nodes, mine some new blocks on the first,
@ -157,6 +165,21 @@ class AssumeutxoTest(BitcoinTestFramework):
self.log.info("Backup from before the snapshot height can't be loaded during background sync") self.log.info("Backup from before the snapshot height can't be loaded during background sync")
assert_raises_rpc_error(-4, "Wallet loading failed. Error loading wallet. Wallet requires blocks to be downloaded, and software does not currently support loading wallets while blocks are being downloaded out of order when using assumeutxo snapshots. Wallet should be able to load successfully after node sync reaches height 299", n1.restorewallet, "w2", "backup_w2.dat") assert_raises_rpc_error(-4, "Wallet loading failed. Error loading wallet. Wallet requires blocks to be downloaded, and software does not currently support loading wallets while blocks are being downloaded out of order when using assumeutxo snapshots. Wallet should be able to load successfully after node sync reaches height 299", n1.restorewallet, "w2", "backup_w2.dat")
self.log.info("Test loading descriptors during background sync")
wallet_name = "w1"
n1.createwallet(wallet_name, disable_private_keys=True)
key = get_generate_key()
time = n1.getblockchaininfo()['time']
timestamp = 0
expected_error_message = f"Rescan failed for descriptor with timestamp {timestamp}. There was an error reading a block from time {time}, which is after or within 7200 seconds of key creation, and could contain transactions pertaining to the desc. As a result, transactions and coins using this desc may not appear in the wallet. This error is likely caused by an in-progress assumeutxo background sync. Check logs or getchainstates RPC for assumeutxo background sync progress and try again later."
result = self.import_descriptor(n1, wallet_name, key, timestamp)
assert_equal(result[0]['error']['code'], -1)
assert_equal(result[0]['error']['message'], expected_error_message)
self.log.info("Test that rescanning blocks from before the snapshot fails when blocks are not available from the background sync yet")
w1 = n1.get_wallet_rpc(wallet_name)
assert_raises_rpc_error(-1, "Failed to rescan unavailable blocks likely due to an in-progress assumeutxo background sync. Check logs or getchainstates RPC for assumeutxo background sync progress and try again later.", w1.rescanblockchain, 100)
PAUSE_HEIGHT = FINAL_HEIGHT - 40
self.log.info("Restarting node to stop at height %d", PAUSE_HEIGHT)
@ -204,6 +227,11 @@ class AssumeutxoTest(BitcoinTestFramework):
self.wait_until(lambda: len(n2.getchainstates()['chainstates']) == 1)
ensure_for(duration=1, f=lambda: (n2.getbalance() == 34))
self.log.info("Ensuring descriptors can be loaded after background sync")
n1.loadwallet(wallet_name)
result = self.import_descriptor(n1, wallet_name, key, timestamp)
assert_equal(result[0]['success'], True)
if __name__ == '__main__':
AssumeutxoTest(__file__).main() AssumeutxoTest(__file__).main()