mirror of https://github.com/bitcoin/bitcoin.git synced 2025-02-02 09:46:52 -05:00

Merge bitcoin/bitcoin#25717: p2p: Implement anti-DoS headers sync

3add234546 ui: show header pre-synchronization progress (Pieter Wuille)
738421c50f Emit NotifyHeaderTip signals for pre-synchronization progress (Pieter Wuille)
376086fc5a Make validation interface capable of signalling header presync (Pieter Wuille)
93eae27031 Test large reorgs with headerssync logic (Suhas Daftuar)
355547334f Track headers presync progress and log it (Pieter Wuille)
03712dddfb Expose HeadersSyncState::m_current_height in getpeerinfo() (Suhas Daftuar)
150a5486db Test headers sync using minchainwork threshold (Suhas Daftuar)
0b6aa826b5 Add unit test for HeadersSyncState (Suhas Daftuar)
83c6a0c524 Reduce spurious messages during headers sync (Suhas Daftuar)
ed6cddd98e Require callers of AcceptBlockHeader() to perform anti-dos checks (Suhas Daftuar)
551a8d957c Utilize anti-DoS headers download strategy (Suhas Daftuar)
ed470940cd Add functions to construct locators without CChain (Pieter Wuille)
84852bb6bb Add bitdeque, an std::deque<bool> analogue that does bit packing. (Pieter Wuille)
1d4cfa4272 Add function to validate difficulty changes (Suhas Daftuar)

Pull request description:

  New nodes starting up for the first time lack protection against DoS from low-difficulty headers. While checkpoints serve as our protection against headers that fork from the main chain below the known checkpointed values, this protection only applies to nodes that have been able to download the honest chain to the checkpointed heights.

  We can protect all nodes from DoS from low-difficulty headers by adopting a different strategy: before we commit to storing a header in permanent storage, first verify that the header is part of a chain that has sufficiently high work (either `nMinimumChainWork`, or something comparable to our tip). This means that we will download headers from a given peer twice: once to verify the work on the chain, and a second time when permanently storing the headers.
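The phase switch described above can be sketched as follows. This is a simplified illustration with plain integers standing in for the `arith_uint256` chain work used in the PR; all names here are invented:

```cpp
#include <cassert>
#include <cstdint>

// Simplified sketch of the two-phase rule: sum the work during the first
// (verification-only) download, and only switch to the second (storage)
// download once the accumulated work reaches the anti-DoS threshold.
enum class Phase { PRESYNC, REDOWNLOAD };

struct WorkTracker {
    uint64_t accumulated_work{0};
    uint64_t min_required_work{0};
    Phase phase{Phase::PRESYNC};

    // Called once per header received during the first download.
    void OnHeader(uint64_t header_work) {
        if (phase != Phase::PRESYNC) return;
        accumulated_work += header_work;
        // Only once the chain demonstrates enough total work do we switch
        // to the second download, where headers are permanently stored.
        if (accumulated_work >= min_required_work) phase = Phase::REDOWNLOAD;
    }
};
```

Until the threshold is crossed, nothing from this peer touches permanent storage.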

  The p2p protocol doesn't provide an easy way for us to ensure that we receive the same headers during the second download of a peer's headers chain. To ensure that a peer doesn't (say) give us the main chain in phase 1 to trick us into permanently storing an alternate, low-work chain in phase 2, we store commitments to the headers during our first download, which we validate during the second download.
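The commitment scheme can be illustrated with a toy version. This uses `std::hash` with an xor salt purely for illustration; the PR itself derives bits from a salted SipHash over block hashes and stores them in a bit-packed deque:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <functional>
#include <string>

// Toy version of the 1-bit commitment check: phase 1 stores one salted
// bit per sampled header; phase 2 recomputes the bit for the header
// redownloaded at the same height and aborts on a mismatch, or if the
// peer serves more sampled headers than we committed to.
struct ToyCommitments {
    size_t salt{0};
    std::deque<bool> bits;

    bool BitFor(const std::string& header_hash) const {
        return (std::hash<std::string>{}(header_hash) ^ salt) & 1;
    }
    void Store(const std::string& header_hash) { bits.push_back(BitFor(header_hash)); }
    bool CheckNext(const std::string& header_hash) {
        if (bits.empty()) return false;           // commitment overrun: give up
        const bool expected = bits.front();
        bits.pop_front();
        return BitFor(header_hash) == expected;   // mismatch => different chain
    }
};
```

A single bit per sample is enough because an attacker who substitutes a different chain in phase 2 must match every sampled bit, and each substituted sample fails with probability 1/2.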

  Some parameters must be chosen for commitment size/frequency in phase 1, and validation of commitments in phase 2. In this PR, those parameters are chosen to both (a) minimize the per-peer memory usage that an attacker could utilize, and (b) bound the expected amount of permanent memory that an attacker could get us to use to be well-below the memory growth that we'd get from the honest chain (where we expect 1 new block header every 10 minutes).
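  As a rough check on those trade-offs, the constants introduced later in this patch (one commitment bit per 584 headers; a 13,959-header redownload buffer of 48-byte compressed headers) imply per-peer temporary usage of roughly a thousand bits of commitments plus about 650 KiB of buffer. The 800,000-header chain length below is an illustrative assumption, not a figure from the PR:

```cpp
#include <cassert>
#include <cstddef>

// Back-of-the-envelope per-peer memory, using constants from this PR.
constexpr size_t HEADER_COMMITMENT_PERIOD{584};
constexpr size_t REDOWNLOAD_BUFFER_SIZE{13959};
constexpr size_t COMPRESSED_HEADER_BYTES{48};

constexpr size_t CommitmentBits(size_t chain_length) {
    return chain_length / HEADER_COMMITMENT_PERIOD;   // one bit per period
}
constexpr size_t RedownloadBufferBytes() {
    return REDOWNLOAD_BUFFER_SIZE * COMPRESSED_HEADER_BYTES;
}
```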

  After this PR, we should be able to remove checkpoints from our code, which is a nice philosophical change for us to make as well, as there has been confusion over the years about the role checkpoints play in Bitcoin's consensus algorithm.

  Thanks to Pieter Wuille for collaborating on this design.

ACKs for top commit:
  Sjors:
    re-tACK 3add234546
  mzumsande:
    re-ACK 3add234546
  sipa:
    re-ACK 3add234546
  glozow:
    ACK 3add234546

Tree-SHA512: e7789d65f62f72141b8899eb4a2fb3d0621278394d2d7adaa004675250118f89a4e4cb42777fe56649d744ec445ad95141e10f6def65f0a58b7b35b2e654a875
This commit is contained in:
fanquake 2022-08-30 15:34:10 +01:00
commit e9035f867a
GPG key ID: 2EEB9F5CC09526C1
55 changed files with 2709 additions and 148 deletions


@@ -151,6 +151,7 @@ BITCOIN_CORE_H = \
external_signer.h \
flatfile.h \
fs.h \
headerssync.h \
httprpc.h \
httpserver.h \
i2p.h \
@@ -264,6 +265,7 @@ BITCOIN_CORE_H = \
undo.h \
util/asmap.h \
util/bip32.h \
util/bitdeque.h \
util/bytevectorhash.h \
util/check.h \
util/epochguard.h \
@@ -360,6 +362,7 @@ libbitcoin_node_a_SOURCES = \
dbwrapper.cpp \
deploymentstatus.cpp \
flatfile.cpp \
headerssync.cpp \
httprpc.cpp \
httpserver.cpp \
i2p.cpp \


@@ -93,6 +93,7 @@ BITCOIN_TESTS =\
test/fs_tests.cpp \
test/getarg_tests.cpp \
test/hash_tests.cpp \
test/headers_sync_chainwork_tests.cpp \
test/httpserver_tests.cpp \
test/i2p_tests.cpp \
test/interfaces_tests.cpp \
@@ -235,6 +236,7 @@ test_fuzz_fuzz_SOURCES = \
test/fuzz/banman.cpp \
test/fuzz/base_encode_decode.cpp \
test/fuzz/bech32.cpp \
test/fuzz/bitdeque.cpp \
test/fuzz/block.cpp \
test/fuzz/block_header.cpp \
test/fuzz/blockfilter.cpp \


@@ -195,7 +195,7 @@ int main(int argc, char* argv[])
bool new_block;
auto sc = std::make_shared<submitblock_StateCatcher>(block.GetHash());
RegisterSharedValidationInterface(sc);
-    bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*new_block=*/&new_block);
+    bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*min_pow_checked=*/true, /*new_block=*/&new_block);
UnregisterSharedValidationInterface(sc);
if (!new_block && accepted) {
std::cerr << "duplicate" << std::endl;
@@ -210,6 +210,9 @@ int main(int argc, char* argv[])
case BlockValidationResult::BLOCK_RESULT_UNSET:
std::cerr << "initial value. Block has not yet been rejected" << std::endl;
break;
case BlockValidationResult::BLOCK_HEADER_LOW_WORK:
std::cerr << "the block header may be on a too-little-work chain" << std::endl;
break;
case BlockValidationResult::BLOCK_CONSENSUS:
std::cerr << "invalid by consensus rules (excluding any below reasons)" << std::endl;
break;


@@ -28,32 +28,33 @@ void CChain::SetTip(CBlockIndex& block)
}
}
-CBlockLocator CChain::GetLocator(const CBlockIndex *pindex) const {
-    int nStep = 1;
-    std::vector<uint256> vHave;
-    vHave.reserve(32);
-
-    if (!pindex)
-        pindex = Tip();
-    while (pindex) {
-        vHave.push_back(pindex->GetBlockHash());
-        // Stop when we have added the genesis block.
-        if (pindex->nHeight == 0)
-            break;
-        // Exponentially larger steps back, plus the genesis block.
-        int nHeight = std::max(pindex->nHeight - nStep, 0);
-        if (Contains(pindex)) {
-            // Use O(1) CChain index if possible.
-            pindex = (*this)[nHeight];
-        } else {
-            // Otherwise, use O(log n) skiplist.
-            pindex = pindex->GetAncestor(nHeight);
-        }
-        if (vHave.size() > 10)
-            nStep *= 2;
-    }
-
-    return CBlockLocator(vHave);
+std::vector<uint256> LocatorEntries(const CBlockIndex* index)
+{
+    int step = 1;
+    std::vector<uint256> have;
+    if (index == nullptr) return have;
+    have.reserve(32);
+    while (index) {
+        have.emplace_back(index->GetBlockHash());
+        if (index->nHeight == 0) break;
+        // Exponentially larger steps back, plus the genesis block.
+        int height = std::max(index->nHeight - step, 0);
+        // Use skiplist.
+        index = index->GetAncestor(height);
+        if (have.size() > 10) step *= 2;
+    }
+    return have;
+}
+
+CBlockLocator GetLocator(const CBlockIndex* index)
+{
+    return CBlockLocator{std::move(LocatorEntries(index))};
+}
+
+CBlockLocator CChain::GetLocator() const
+{
+    return ::GetLocator(Tip());
+}
const CBlockIndex *CChain::FindFork(const CBlockIndex *pindex) const {


@@ -473,8 +473,8 @@ public:
/** Set/initialize a chain with a given tip. */
void SetTip(CBlockIndex& block);
/** Return a CBlockLocator that refers to a block in this chain (by default the tip). */
CBlockLocator GetLocator(const CBlockIndex* pindex = nullptr) const;
/** Return a CBlockLocator that refers to the tip in of this chain. */
CBlockLocator GetLocator() const;
/** Find the last common block between this chain and a block index entry. */
const CBlockIndex* FindFork(const CBlockIndex* pindex) const;
@@ -483,4 +483,10 @@ public:
CBlockIndex* FindEarliestAtLeast(int64_t nTime, int height) const;
};
/** Get a locator for a block index entry. */
CBlockLocator GetLocator(const CBlockIndex* index);
/** Construct a list of hash entries to put in a locator. */
std::vector<uint256> LocatorEntries(const CBlockIndex* index);
#endif // BITCOIN_CHAIN_H


@@ -79,6 +79,7 @@ enum class BlockValidationResult {
BLOCK_INVALID_PREV, //!< A block this one builds on is invalid
BLOCK_TIME_FUTURE, //!< block timestamp was > 2 hours in the future (or our clock is bad)
BLOCK_CHECKPOINT, //!< the block failed to meet one of our checkpoints
BLOCK_HEADER_LOW_WORK //!< the block header may be on a too-little-work chain
};

src/headerssync.cpp (new file, 317 lines)

@@ -0,0 +1,317 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <headerssync.h>
#include <logging.h>
#include <pow.h>
#include <timedata.h>
#include <util/check.h>
// The two constants below are computed using the simulation script on
// https://gist.github.com/sipa/016ae445c132cdf65a2791534dfb7ae1
//! Store a commitment to a header every HEADER_COMMITMENT_PERIOD blocks.
constexpr size_t HEADER_COMMITMENT_PERIOD{584};
//! Only feed headers to validation once this many headers on top have been
//! received and validated against commitments.
constexpr size_t REDOWNLOAD_BUFFER_SIZE{13959}; // 13959/584 = ~23.9 commitments
// Our memory analysis assumes 48 bytes for a CompressedHeader (so we should
// re-calculate parameters if we compress further)
static_assert(sizeof(CompressedHeader) == 48);
HeadersSyncState::HeadersSyncState(NodeId id, const Consensus::Params& consensus_params,
const CBlockIndex* chain_start, const arith_uint256& minimum_required_work) :
m_id(id), m_consensus_params(consensus_params),
m_chain_start(chain_start),
m_minimum_required_work(minimum_required_work),
m_current_chain_work(chain_start->nChainWork),
m_commit_offset(GetRand<unsigned>(HEADER_COMMITMENT_PERIOD)),
m_last_header_received(m_chain_start->GetBlockHeader()),
m_current_height(chain_start->nHeight)
{
// Estimate the number of blocks that could possibly exist on the peer's
// chain *right now* using 6 blocks/second (fastest blockrate given the MTP
// rule) times the number of seconds from the last allowed block until
// today. This serves as a memory bound on how many commitments we might
// store from this peer, and we can safely give up syncing if the peer
// exceeds this bound, because it's not possible for a consensus-valid
// chain to be longer than this (at the current time -- in the future we
// could try again, if necessary, to sync a longer chain).
m_max_commitments = 6*(Ticks<std::chrono::seconds>(GetAdjustedTime() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync started with peer=%d: height=%i, max_commitments=%i, min_work=%s\n", m_id, m_current_height, m_max_commitments, m_minimum_required_work.ToString());
}
/** Free any memory in use, and mark this object as no longer usable. This is
* required to guarantee that we won't reuse this object with the same
* SaltedTxidHasher for another sync. */
void HeadersSyncState::Finalize()
{
Assume(m_download_state != State::FINAL);
m_header_commitments = {};
m_last_header_received.SetNull();
m_redownloaded_headers = {};
m_redownload_buffer_last_hash.SetNull();
m_redownload_buffer_first_prev_hash.SetNull();
m_process_all_remaining_headers = false;
m_current_height = 0;
m_download_state = State::FINAL;
}
/** Process the next batch of headers received from our peer.
* Validate and store commitments, and compare total chainwork to our target to
* see if we can switch to REDOWNLOAD mode. */
HeadersSyncState::ProcessingResult HeadersSyncState::ProcessNextHeaders(const
std::vector<CBlockHeader>& received_headers, const bool full_headers_message)
{
ProcessingResult ret;
Assume(!received_headers.empty());
if (received_headers.empty()) return ret;
Assume(m_download_state != State::FINAL);
if (m_download_state == State::FINAL) return ret;
if (m_download_state == State::PRESYNC) {
// During PRESYNC, we minimally validate block headers and
// occasionally add commitments to them, until we reach our work
// threshold (at which point m_download_state is updated to REDOWNLOAD).
ret.success = ValidateAndStoreHeadersCommitments(received_headers);
if (ret.success) {
if (full_headers_message || m_download_state == State::REDOWNLOAD) {
// A full headers message means the peer may have more to give us;
// also if we just switched to REDOWNLOAD then we need to re-request
// headers from the beginning.
ret.request_more = true;
} else {
Assume(m_download_state == State::PRESYNC);
// If we're in PRESYNC and we get a non-full headers
// message, then the peer's chain has ended and definitely doesn't
// have enough work, so we can stop our sync.
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: incomplete headers message at height=%i (presync phase)\n", m_id, m_current_height);
}
}
} else if (m_download_state == State::REDOWNLOAD) {
// During REDOWNLOAD, we compare our stored commitments to what we
// receive, and add headers to our redownload buffer. When the buffer
// gets big enough (meaning that we've checked enough commitments),
// we'll return a batch of headers to the caller for processing.
ret.success = true;
for (const auto& hdr : received_headers) {
if (!ValidateAndStoreRedownloadedHeader(hdr)) {
// Something went wrong -- the peer gave us an unexpected chain.
// We could consider looking at the reason for failure and
// punishing the peer, but for now just give up on sync.
ret.success = false;
break;
}
}
if (ret.success) {
// Return any headers that are ready for acceptance.
ret.pow_validated_headers = PopHeadersReadyForAcceptance();
// If we hit our target blockhash, then all remaining headers will be
// returned and we can clear any leftover internal state.
if (m_redownloaded_headers.empty() && m_process_all_remaining_headers) {
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync complete with peer=%d: releasing all at height=%i (redownload phase)\n", m_id, m_redownload_buffer_last_height);
} else if (full_headers_message) {
// If the headers message is full, we need to request more.
ret.request_more = true;
} else {
// For some reason our peer gave us a high-work chain, but is now
// declining to serve us that full chain again. Give up.
// Note that there's no more processing to be done with these
// headers, so we can still return success.
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: incomplete headers message at height=%i (redownload phase)\n", m_id, m_redownload_buffer_last_height);
}
}
}
if (!(ret.success && ret.request_more)) Finalize();
return ret;
}
bool HeadersSyncState::ValidateAndStoreHeadersCommitments(const std::vector<CBlockHeader>& headers)
{
// The caller should not give us an empty set of headers.
Assume(headers.size() > 0);
if (headers.size() == 0) return true;
Assume(m_download_state == State::PRESYNC);
if (m_download_state != State::PRESYNC) return false;
if (headers[0].hashPrevBlock != m_last_header_received.GetHash()) {
// Somehow our peer gave us a header that doesn't connect.
// This might be benign -- perhaps our peer reorged away from the chain
// they were on. Give up on this sync for now (likely we will start a
// new sync with a new starting point).
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n", m_id, m_current_height);
return false;
}
// If it does connect, (minimally) validate and occasionally store
// commitments.
for (const auto& hdr : headers) {
if (!ValidateAndProcessSingleHeader(hdr)) {
return false;
}
}
if (m_current_chain_work >= m_minimum_required_work) {
m_redownloaded_headers.clear();
m_redownload_buffer_last_height = m_chain_start->nHeight;
m_redownload_buffer_first_prev_hash = m_chain_start->GetBlockHash();
m_redownload_buffer_last_hash = m_chain_start->GetBlockHash();
m_redownload_chain_work = m_chain_start->nChainWork;
m_download_state = State::REDOWNLOAD;
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync transition with peer=%d: reached sufficient work at height=%i, redownloading from height=%i\n", m_id, m_current_height, m_redownload_buffer_last_height);
}
return true;
}
bool HeadersSyncState::ValidateAndProcessSingleHeader(const CBlockHeader& current)
{
Assume(m_download_state == State::PRESYNC);
if (m_download_state != State::PRESYNC) return false;
int next_height = m_current_height + 1;
// Verify that the difficulty isn't growing too fast; an adversary with
// limited hashing capability has a greater chance of producing a high
// work chain if they compress the work into as few blocks as possible,
// so don't let anyone give a chain that would violate the difficulty
// adjustment maximum.
if (!PermittedDifficultyTransition(m_consensus_params, next_height,
m_last_header_received.nBits, current.nBits)) {
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: invalid difficulty transition at height=%i (presync phase)\n", m_id, next_height);
return false;
}
if (next_height % HEADER_COMMITMENT_PERIOD == m_commit_offset) {
// Add a commitment.
m_header_commitments.push_back(m_hasher(current.GetHash()) & 1);
if (m_header_commitments.size() > m_max_commitments) {
// The peer's chain is too long; give up.
// It's possible the chain grew since we started the sync; so
// potentially we could succeed in syncing the peer's chain if we
// try again later.
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: exceeded max commitments at height=%i (presync phase)\n", m_id, next_height);
return false;
}
}
m_current_chain_work += GetBlockProof(CBlockIndex(current));
m_last_header_received = current;
m_current_height = next_height;
return true;
}
bool HeadersSyncState::ValidateAndStoreRedownloadedHeader(const CBlockHeader& header)
{
Assume(m_download_state == State::REDOWNLOAD);
if (m_download_state != State::REDOWNLOAD) return false;
int64_t next_height = m_redownload_buffer_last_height + 1;
// Ensure that we're working on a header that connects to the chain we're
// downloading.
if (header.hashPrevBlock != m_redownload_buffer_last_hash) {
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (redownload phase)\n", m_id, next_height);
return false;
}
// Check that the difficulty adjustments are within our tolerance:
uint32_t previous_nBits{0};
if (!m_redownloaded_headers.empty()) {
previous_nBits = m_redownloaded_headers.back().nBits;
} else {
previous_nBits = m_chain_start->nBits;
}
if (!PermittedDifficultyTransition(m_consensus_params, next_height,
previous_nBits, header.nBits)) {
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: invalid difficulty transition at height=%i (redownload phase)\n", m_id, next_height);
return false;
}
// Track work on the redownloaded chain
m_redownload_chain_work += GetBlockProof(CBlockIndex(header));
if (m_redownload_chain_work >= m_minimum_required_work) {
m_process_all_remaining_headers = true;
}
// If we're at a header for which we previously stored a commitment, verify
// it is correct. Failure will result in aborting download.
// Also, don't check commitments once we've gotten to our target blockhash;
// it's possible our peer has extended its chain between our first sync and
// our second, and we don't want to return failure after we've seen our
// target blockhash just because we ran out of commitments.
if (!m_process_all_remaining_headers && next_height % HEADER_COMMITMENT_PERIOD == m_commit_offset) {
if (m_header_commitments.size() == 0) {
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: commitment overrun at height=%i (redownload phase)\n", m_id, next_height);
// Somehow our peer managed to feed us a different chain and
// we've run out of commitments.
return false;
}
bool commitment = m_hasher(header.GetHash()) & 1;
bool expected_commitment = m_header_commitments.front();
m_header_commitments.pop_front();
if (commitment != expected_commitment) {
LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: commitment mismatch at height=%i (redownload phase)\n", m_id, next_height);
return false;
}
}
// Store this header for later processing.
m_redownloaded_headers.push_back(header);
m_redownload_buffer_last_height = next_height;
m_redownload_buffer_last_hash = header.GetHash();
return true;
}
std::vector<CBlockHeader> HeadersSyncState::PopHeadersReadyForAcceptance()
{
std::vector<CBlockHeader> ret;
Assume(m_download_state == State::REDOWNLOAD);
if (m_download_state != State::REDOWNLOAD) return ret;
while (m_redownloaded_headers.size() > REDOWNLOAD_BUFFER_SIZE ||
(m_redownloaded_headers.size() > 0 && m_process_all_remaining_headers)) {
ret.emplace_back(m_redownloaded_headers.front().GetFullHeader(m_redownload_buffer_first_prev_hash));
m_redownloaded_headers.pop_front();
m_redownload_buffer_first_prev_hash = ret.back().GetHash();
}
return ret;
}
CBlockLocator HeadersSyncState::NextHeadersRequestLocator() const
{
Assume(m_download_state != State::FINAL);
if (m_download_state == State::FINAL) return {};
auto chain_start_locator = LocatorEntries(m_chain_start);
std::vector<uint256> locator;
if (m_download_state == State::PRESYNC) {
// During pre-synchronization, we continue from the last header received.
locator.push_back(m_last_header_received.GetHash());
}
if (m_download_state == State::REDOWNLOAD) {
// During redownload, we will download from the last received header that we stored.
locator.push_back(m_redownload_buffer_last_hash);
}
locator.insert(locator.end(), chain_start_locator.begin(), chain_start_locator.end());
return CBlockLocator{std::move(locator)};
}
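The release rule implemented in PopHeadersReadyForAcceptance() above can be distilled to a small model (invented names and a tiny stand-in buffer size; the real code also reconstructs full headers from 48-byte compressed ones as it drains the deque):

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Model of the lookahead buffer: a header is released only once
// BUFFER_SIZE newer, commitment-checked headers sit on top of it,
// unless the target has been reached and everything may drain.
constexpr size_t BUFFER_SIZE{3};  // stand-in for REDOWNLOAD_BUFFER_SIZE

std::vector<int> PopReady(std::deque<int>& buf, bool process_all) {
    std::vector<int> ready;
    while (buf.size() > BUFFER_SIZE || (!buf.empty() && process_all)) {
        ready.push_back(buf.front());
        buf.pop_front();
    }
    return ready;
}
```

The lookahead is what bounds permanent memory: an attacker cannot get a low-work header accepted without first producing BUFFER_SIZE commitment-checked headers on top of it.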

src/headerssync.h (new file, 277 lines)

@@ -0,0 +1,277 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_HEADERSSYNC_H
#define BITCOIN_HEADERSSYNC_H
#include <arith_uint256.h>
#include <chain.h>
#include <consensus/params.h>
#include <net.h> // For NodeId
#include <primitives/block.h>
#include <uint256.h>
#include <util/bitdeque.h>
#include <util/hasher.h>
#include <deque>
#include <vector>
// A compressed CBlockHeader, which leaves out the prevhash
struct CompressedHeader {
// header
int32_t nVersion{0};
uint256 hashMerkleRoot;
uint32_t nTime{0};
uint32_t nBits{0};
uint32_t nNonce{0};
CompressedHeader()
{
hashMerkleRoot.SetNull();
}
CompressedHeader(const CBlockHeader& header)
{
nVersion = header.nVersion;
hashMerkleRoot = header.hashMerkleRoot;
nTime = header.nTime;
nBits = header.nBits;
nNonce = header.nNonce;
}
CBlockHeader GetFullHeader(const uint256& hash_prev_block) {
CBlockHeader ret;
ret.nVersion = nVersion;
ret.hashPrevBlock = hash_prev_block;
ret.hashMerkleRoot = hashMerkleRoot;
ret.nTime = nTime;
ret.nBits = nBits;
ret.nNonce = nNonce;
return ret;
};
};
/** HeadersSyncState:
*
* We wish to download a peer's headers chain in a DoS-resistant way.
*
* The Bitcoin protocol does not offer an easy way to determine the work on a
* peer's chain. Currently, we can query a peer's headers by using a GETHEADERS
* message, and our peer can return a set of up to 2000 headers that connect to
* something we know. If a peer's chain has more than 2000 blocks, then we need
* a way to verify that the chain actually has enough work on it to be useful to
* us -- by being above our anti-DoS minimum-chain-work threshold -- before we
* commit to storing those headers in memory. Otherwise, it would be cheap for
* an attacker to waste all our memory by serving us low-work headers
* (particularly for a new node coming online for the first time).
*
* To prevent memory-DoS with low-work headers, while still always being
* able to reorg to whatever the most-work chain is, we require that a chain
* meet a work threshold before committing it to memory. We can do this by
* downloading a peer's headers twice, whenever we are not sure that the chain
* has sufficient work:
*
* - In the first download phase, called pre-synchronization, we can calculate
* the work on the chain as we go (just by checking the nBits value on each
* header, and validating the proof-of-work).
*
* - Once we have reached a header where the cumulative chain work is
* sufficient, we switch to downloading the headers a second time, this time
* processing them fully, and possibly storing them in memory.
*
* To prevent an attacker from using (eg) the honest chain to convince us that
* they have a high-work chain, but then feeding us an alternate set of
* low-difficulty headers in the second phase, we store commitments to the
* chain we see in the first download phase that we check in the second phase,
* as follows:
*
* - In phase 1 (presync), store 1 bit (using a salted hash function) for every
* N headers that we see. With a reasonable choice of N, this uses relatively
* little memory even for a very long chain.
*
* - In phase 2 (redownload), keep a lookahead buffer and only accept headers
* from that buffer into the block index (permanent memory usage) once they
* have some target number of verified commitments on top of them. With this
* parametrization, we can achieve a given security target for potential
* permanent memory usage, while choosing N to minimize memory use during the
* sync (temporary, per-peer storage).
*/
class HeadersSyncState {
public:
~HeadersSyncState() {}
enum class State {
/** PRESYNC means the peer has not yet demonstrated their chain has
* sufficient work and we're only building commitments to the chain they
* serve us. */
PRESYNC,
/** REDOWNLOAD means the peer has given us a high-enough-work chain,
* and now we're redownloading the headers we saw before and trying to
* accept them */
REDOWNLOAD,
/** We're done syncing with this peer and can discard any remaining state */
FINAL
};
/** Return the current state of our download */
State GetState() const { return m_download_state; }
/** Return the height reached during the PRESYNC phase */
int64_t GetPresyncHeight() const { return m_current_height; }
/** Return the block timestamp of the last header received during the PRESYNC phase. */
uint32_t GetPresyncTime() const { return m_last_header_received.nTime; }
/** Return the amount of work in the chain received during the PRESYNC phase. */
arith_uint256 GetPresyncWork() const { return m_current_chain_work; }
/** Construct a HeadersSyncState object representing a headers sync via this
* download-twice mechanism.
*
* id: node id (for logging)
* consensus_params: parameters needed for difficulty adjustment validation
* chain_start: best known fork point that the peer's headers branch from
* minimum_required_work: amount of chain work required to accept the chain
*/
HeadersSyncState(NodeId id, const Consensus::Params& consensus_params,
const CBlockIndex* chain_start, const arith_uint256& minimum_required_work);
/** Result data structure for ProcessNextHeaders. */
struct ProcessingResult {
std::vector<CBlockHeader> pow_validated_headers;
bool success{false};
bool request_more{false};
};
/** Process a batch of headers, once a sync via this mechanism has started
*
* received_headers: headers that were received over the network for processing.
* Assumes the caller has already verified the headers
* are continuous, and has checked that each header
* satisfies the proof-of-work target included in the
* header (but not necessarily verified that the
* proof-of-work target is correct and passes consensus
* rules).
* full_headers_message: true if the message was at max capacity,
* indicating more headers may be available
* ProcessingResult.pow_validated_headers: will be filled in with any
* headers that the caller can fully process and
* validate now (because these returned headers are
* on a chain with sufficient work)
* ProcessingResult.success: set to false if an error is detected and the sync is
* aborted; true otherwise.
* ProcessingResult.request_more: if true, the caller is suggested to call
* NextHeadersRequestLocator and send a getheaders message using it.
*/
ProcessingResult ProcessNextHeaders(const std::vector<CBlockHeader>&
received_headers, bool full_headers_message);
/** Issue the next GETHEADERS message to our peer.
*
* This will return a locator appropriate for the current sync object, to continue the
* synchronization phase it is in.
*/
CBlockLocator NextHeadersRequestLocator() const;
private:
/** Clear out all download state that might be in progress (freeing any used
* memory), and mark this object as no longer usable.
*/
void Finalize();
/**
* Only called in PRESYNC.
* Validate the work on the headers we received from the network, and
* store commitments for later. Update overall state with successfully
* processed headers.
* On failure, this invokes Finalize() and returns false.
*/
bool ValidateAndStoreHeadersCommitments(const std::vector<CBlockHeader>& headers);
/** In PRESYNC, process and update state for a single header */
bool ValidateAndProcessSingleHeader(const CBlockHeader& current);
/** In REDOWNLOAD, check a header's commitment (if applicable) and add to
* buffer for later processing */
bool ValidateAndStoreRedownloadedHeader(const CBlockHeader& header);
/** Return a set of headers that satisfy our proof-of-work threshold */
std::vector<CBlockHeader> PopHeadersReadyForAcceptance();
private:
/** NodeId of the peer (used for log messages) **/
const NodeId m_id;
/** We use the consensus params in our anti-DoS calculations */
const Consensus::Params& m_consensus_params;
/** Store the last block in our block index that the peer's chain builds from */
const CBlockIndex* m_chain_start{nullptr};
/** Minimum work that we're looking for on this chain. */
const arith_uint256 m_minimum_required_work;
/** Work that we've seen so far on the peer's chain */
arith_uint256 m_current_chain_work;
/** m_hasher is a salted hasher for making our 1-bit commitments to headers we've seen. */
const SaltedTxidHasher m_hasher;
/** A queue of commitment bits, created during the 1st phase, and verified during the 2nd. */
bitdeque<> m_header_commitments;
/** The (secret) offset on the heights for which to create commitments.
*
* m_header_commitments entries are created at any height h for which
* (h % HEADER_COMMITMENT_PERIOD) == m_commit_offset. */
const unsigned m_commit_offset;
/** m_max_commitments is a bound we calculate on how long an honest peer's chain could be,
* given the MTP rule.
*
* Any peer giving us more headers than this will have its sync aborted. This serves as a
* memory bound on m_header_commitments. */
uint64_t m_max_commitments{0};
/** Store the latest header received while in PRESYNC (initialized to m_chain_start) */
CBlockHeader m_last_header_received;
/** Height of m_last_header_received */
int64_t m_current_height{0};
/** During phase 2 (REDOWNLOAD), we buffer redownloaded headers in memory
* until enough commitments have been verified; those are stored in
* m_redownloaded_headers */
std::deque<CompressedHeader> m_redownloaded_headers;
/** Height of last header in m_redownloaded_headers */
int64_t m_redownload_buffer_last_height{0};
/** Hash of last header in m_redownloaded_headers (initialized to
* m_chain_start). We have to cache it because we don't have hashPrevBlock
* available in a CompressedHeader.
*/
uint256 m_redownload_buffer_last_hash;
/** The hashPrevBlock entry for the first header in m_redownloaded_headers
* We need this to reconstruct the full header when it's time for
* processing.
*/
uint256 m_redownload_buffer_first_prev_hash;
/** The accumulated work on the redownloaded chain. */
arith_uint256 m_redownload_chain_work;
/** Set this to true once we encounter the target blockheader during phase
* 2 (REDOWNLOAD). At this point, we can process and store all remaining
* headers still in m_redownloaded_headers.
*/
bool m_process_all_remaining_headers{false};
/** Current state of our headers sync. */
State m_download_state{State::PRESYNC};
};
#endif // BITCOIN_HEADERSSYNC_H
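The intended calling pattern for ProcessNextHeaders()'s result fields can be sketched with a mock. MockSync and its fixed-batch behavior are invented stand-ins for HeadersSyncState; only the success/request_more control flow mirrors the API documented above:

```cpp
#include <cassert>
#include <vector>

// Mock of the request_more/success loop a caller of this class runs.
struct ProcessingResult {
    std::vector<int> pow_validated_headers;
    bool success{false};
    bool request_more{false};
};

struct MockSync {
    int batches_left{0};
    ProcessingResult ProcessNextHeaders(const std::vector<int>& headers, bool full) {
        ProcessingResult ret;
        ret.success = true;
        --batches_left;
        if (full && batches_left > 0) {
            ret.request_more = true;              // caller should getheaders again
        } else {
            ret.pow_validated_headers = headers;  // release headers on the last batch
        }
        return ret;
    }
};

int DriveSync(MockSync sync) {
    int accepted = 0;
    while (true) {
        ProcessingResult res = sync.ProcessNextHeaders({1, 2, 3}, /*full=*/true);
        accepted += static_cast<int>(res.pow_validated_headers.size());
        // Stop when the sync aborts or no further request is suggested.
        if (!res.success || !res.request_more) break;
    }
    return accepted;
}
```

In net_processing the same loop is driven by incoming HEADERS messages, with NextHeadersRequestLocator() supplying the locator whenever request_more is set.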


@@ -260,7 +260,7 @@ public:
//! Register handler for header tip messages.
using NotifyHeaderTipFn =
std::function<void(SynchronizationState, interfaces::BlockTip tip, double verification_progress)>;
std::function<void(SynchronizationState, interfaces::BlockTip tip, bool presync)>;
virtual std::unique_ptr<Handler> handleNotifyHeaderTip(NotifyHeaderTipFn fn) = 0;
//! Get and set internal node context. Useful for testing, but not


@@ -165,6 +165,7 @@ const CLogCategoryDesc LogCategories[] =
#endif
{BCLog::UTIL, "util"},
{BCLog::BLOCKSTORE, "blockstorage"},
{BCLog::HEADERSSYNC, "headerssync"},
{BCLog::ALL, "1"},
{BCLog::ALL, "all"},
};
@@ -263,6 +264,8 @@ std::string LogCategoryToStr(BCLog::LogFlags category)
return "util";
case BCLog::LogFlags::BLOCKSTORE:
return "blockstorage";
case BCLog::LogFlags::HEADERSSYNC:
return "headerssync";
case BCLog::LogFlags::ALL:
return "all";
}


@@ -65,6 +65,7 @@ namespace BCLog {
#endif
UTIL = (1 << 25),
BLOCKSTORE = (1 << 26),
HEADERSSYNC = (1 << 27),
ALL = ~(uint32_t)0,
};
enum class Level {


@@ -14,6 +14,7 @@
#include <consensus/validation.h>
#include <deploymentstatus.h>
#include <hash.h>
#include <headerssync.h>
#include <index/blockfilterindex.h>
#include <merkleblock.h>
#include <netbase.h>
@@ -381,6 +382,15 @@ struct Peer {
/** Time of the last getheaders message to this peer */
NodeClock::time_point m_last_getheaders_timestamp{};
/** Protects m_headers_sync **/
Mutex m_headers_sync_mutex;
/** Headers-sync state for this peer (eg for initial sync, or syncing large
* reorgs) **/
std::unique_ptr<HeadersSyncState> m_headers_sync PT_GUARDED_BY(m_headers_sync_mutex) GUARDED_BY(m_headers_sync_mutex) {};
/** Whether we've sent our peer a sendheaders message. **/
std::atomic<bool> m_sent_sendheaders{false};
explicit Peer(NodeId id, ServiceFlags our_services)
: m_id{id}
, m_our_services{our_services}
@@ -503,9 +513,9 @@ public:
/** Implement NetEventsInterface */
void InitializeNode(CNode& node, ServiceFlags our_services) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
void FinalizeNode(const CNode& node) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
void FinalizeNode(const CNode& node) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_headers_presync_mutex);
bool ProcessMessages(CNode* pfrom, std::atomic<bool>& interrupt) override
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex);
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex, !m_headers_presync_mutex);
bool SendMessages(CNode* pto) override EXCLUSIVE_LOCKS_REQUIRED(pto->cs_sendProcessing)
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex);
@@ -522,7 +532,7 @@ public:
void UnitTestMisbehaving(NodeId peer_id, int howmuch) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex) { Misbehaving(*Assert(GetPeerRef(peer_id)), howmuch, ""); };
void ProcessMessage(CNode& pfrom, const std::string& msg_type, CDataStream& vRecv,
const std::chrono::microseconds time_received, const std::atomic<bool>& interruptMsgProc) override
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex);
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex, !m_headers_presync_mutex);
void UpdateLastBlockAnnounceTime(NodeId node, int64_t time_in_seconds) override;
private:
@@ -581,18 +591,70 @@ private:
void ProcessOrphanTx(std::set<uint256>& orphan_work_set) EXCLUSIVE_LOCKS_REQUIRED(cs_main, g_cs_orphans)
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
/** Process a single headers message from a peer. */
/** Process a single headers message from a peer.
*
* @param[in] pfrom CNode of the peer
* @param[in] peer The peer sending us the headers
* @param[in] headers The headers received. Note that this may be modified within ProcessHeadersMessage.
* @param[in] via_compact_block Whether this header came in via compact block handling.
*/
void ProcessHeadersMessage(CNode& pfrom, Peer& peer,
const std::vector<CBlockHeader>& headers,
std::vector<CBlockHeader>&& headers,
bool via_compact_block)
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_headers_presync_mutex);
/** Various helpers for headers processing, invoked by ProcessHeadersMessage() */
/** Return true if headers are continuous and have valid proof-of-work (DoS points assigned on failure) */
bool CheckHeadersPoW(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams, Peer& peer);
/** Calculate an anti-DoS work threshold for headers chains */
arith_uint256 GetAntiDoSWorkThreshold();
/** Deal with state tracking and headers sync for peers that send the
* occasional non-connecting header (this can happen due to BIP 130 headers
* announcements for blocks interacting with the 2hr (MAX_FUTURE_BLOCK_TIME) rule). */
void HandleFewUnconnectingHeaders(CNode& pfrom, Peer& peer, const std::vector<CBlockHeader>& headers);
/** Return true if the headers connect to each other, false otherwise */
bool CheckHeadersAreContinuous(const std::vector<CBlockHeader>& headers) const;
/** Try to continue a low-work headers sync that has already begun.
* Assumes the caller has already verified the headers connect, and has
* checked that each header satisfies the proof-of-work target included in
* the header.
* @param[in] peer The peer we're syncing with.
* @param[in] pfrom CNode of the peer
* @param[in,out] headers The headers to be processed.
* @return True if the passed in headers were successfully processed
* as the continuation of a low-work headers sync in progress;
* false otherwise.
* If false, the passed in headers will be returned back to
* the caller.
* If true, the returned headers may be empty, indicating
* there is no more work for the caller to do; or the headers
* may be populated with entries that have passed anti-DoS
* checks (and therefore may be validated for block index
* acceptance by the caller).
*/
bool IsContinuationOfLowWorkHeadersSync(Peer& peer, CNode& pfrom,
std::vector<CBlockHeader>& headers)
EXCLUSIVE_LOCKS_REQUIRED(peer.m_headers_sync_mutex, !m_headers_presync_mutex);
/** Check work on a headers chain to be processed, and if insufficient,
* initiate our anti-DoS headers sync mechanism.
*
* @param[in] peer The peer whose headers we're processing.
* @param[in] pfrom CNode of the peer
* @param[in] chain_start_header Where these headers connect in our index.
* @param[in,out] headers The headers to be processed.
*
* @return True if chain was low work and a headers sync was
* initiated (and headers will be empty after calling); false
* otherwise.
*/
bool TryLowWorkHeadersSync(Peer& peer, CNode& pfrom,
const CBlockIndex* chain_start_header,
std::vector<CBlockHeader>& headers)
EXCLUSIVE_LOCKS_REQUIRED(!peer.m_headers_sync_mutex, !m_peer_mutex, !m_headers_presync_mutex);
/** Return true if the given header is an ancestor of
* m_chainman.m_best_header or our current tip */
bool IsAncestorOfBestHeaderOrTip(const CBlockIndex* header) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
/** Request further headers from this peer with a given locator.
* We don't issue a getheaders message if we have a recent one outstanding.
* This returns true if a getheaders is actually sent, and false otherwise.
@@ -623,6 +685,9 @@ private:
/** Send `addr` messages on a regular schedule. */
void MaybeSendAddr(CNode& node, Peer& peer, std::chrono::microseconds current_time);
/** Send a single `sendheaders` message, after we have completed headers sync with a peer. */
void MaybeSendSendHeaders(CNode& node, Peer& peer);
/** Relay (gossip) an address to a few randomly chosen nodes.
*
* @param[in] originator The id of the peer that sent us the address. We don't want to relay it back.
@@ -779,6 +844,24 @@ private:
std::shared_ptr<const CBlockHeaderAndShortTxIDs> m_most_recent_compact_block GUARDED_BY(m_most_recent_block_mutex);
uint256 m_most_recent_block_hash GUARDED_BY(m_most_recent_block_mutex);
// Data about the low-work headers synchronization, aggregated from all peers' HeadersSyncStates.
/** Mutex guarding the other m_headers_presync_* variables. */
Mutex m_headers_presync_mutex;
/** A type to represent statistics about a peer's low-work headers sync.
*
* - The first field is the total verified amount of work in that synchronization.
* - The second is:
* - nullopt: the sync is in REDOWNLOAD phase (phase 2).
* - {height, timestamp}: the sync has the specified tip height and block timestamp (phase 1).
*/
using HeadersPresyncStats = std::pair<arith_uint256, std::optional<std::pair<int64_t, uint32_t>>>;
/** Statistics for all peers in low-work headers sync. */
std::map<NodeId, HeadersPresyncStats> m_headers_presync_stats GUARDED_BY(m_headers_presync_mutex) {};
/** The peer with the most-work entry in m_headers_presync_stats. */
NodeId m_headers_presync_bestpeer GUARDED_BY(m_headers_presync_mutex) {-1};
/** The m_headers_presync_stats improved, and needs signalling. */
std::atomic_bool m_headers_presync_should_signal{false};
/** Height of the highest block announced using BIP 152 high-bandwidth mode. */
int m_highest_fast_announce{0};
@@ -816,7 +899,7 @@ private:
EXCLUSIVE_LOCKS_REQUIRED(!m_most_recent_block_mutex, peer.m_getdata_requests_mutex) LOCKS_EXCLUDED(::cs_main);
/** Process a new block. Perform any post-processing housekeeping */
void ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing);
void ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked);
/** Relay map (txid or wtxid -> CTransactionRef) */
typedef std::map<uint256, CTransactionRef> MapRelay;
@@ -1437,6 +1520,10 @@ void PeerManagerImpl::FinalizeNode(const CNode& node)
// fSuccessfullyConnected set.
m_addrman.Connected(node.addr);
}
{
LOCK(m_headers_presync_mutex);
m_headers_presync_stats.erase(nodeid);
}
LogPrint(BCLog::NET, "Cleared nodestate for peer=%d\n", nodeid);
}
@@ -1501,6 +1588,12 @@ bool PeerManagerImpl::GetNodeStateStats(NodeId nodeid, CNodeStateStats& stats) c
stats.m_addr_processed = peer->m_addr_processed.load();
stats.m_addr_rate_limited = peer->m_addr_rate_limited.load();
stats.m_addr_relay_enabled = peer->m_addr_relay_enabled.load();
{
LOCK(peer->m_headers_sync_mutex);
if (peer->m_headers_sync) {
stats.presync_height = peer->m_headers_sync->GetPresyncHeight();
}
}
return true;
}
@@ -1544,6 +1637,10 @@ bool PeerManagerImpl::MaybePunishNodeForBlock(NodeId nodeid, const BlockValidati
switch (state.GetResult()) {
case BlockValidationResult::BLOCK_RESULT_UNSET:
break;
case BlockValidationResult::BLOCK_HEADER_LOW_WORK:
// We didn't try to process the block because the header chain may have
// too little work.
break;
// The node is providing invalid data:
case BlockValidationResult::BLOCK_CONSENSUS:
case BlockValidationResult::BLOCK_MUTATED:
@@ -2263,6 +2360,35 @@ void PeerManagerImpl::SendBlockTransactions(CNode& pfrom, Peer& peer, const CBlo
m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::BLOCKTXN, resp));
}
bool PeerManagerImpl::CheckHeadersPoW(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams, Peer& peer)
{
// Do these headers have proof-of-work matching what's claimed?
if (!HasValidProofOfWork(headers, consensusParams)) {
Misbehaving(peer, 100, "header with invalid proof of work");
return false;
}
// Are these headers connected to each other?
if (!CheckHeadersAreContinuous(headers)) {
Misbehaving(peer, 20, "non-continuous headers sequence");
return false;
}
return true;
}
arith_uint256 PeerManagerImpl::GetAntiDoSWorkThreshold()
{
arith_uint256 near_chaintip_work = 0;
LOCK(cs_main);
if (m_chainman.ActiveChain().Tip() != nullptr) {
const CBlockIndex *tip = m_chainman.ActiveChain().Tip();
// Use a 144 block buffer, so that we'll accept headers that fork from
// near our tip.
near_chaintip_work = tip->nChainWork - std::min<arith_uint256>(144*GetBlockProof(*tip), tip->nChainWork);
}
return std::max(near_chaintip_work, arith_uint256(nMinimumChainWork));
}
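GetAntiDoSWorkThreshold() returns the greater of nMinimumChainWork and the tip's chain work less a 144-block buffer, so headers chains that fork from near our tip are still accepted. A simplified sketch of that calculation, using `uint64_t` in place of `arith_uint256` (function and parameter names here are illustrative, not the real API):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative stand-in for GetAntiDoSWorkThreshold(); uint64_t replaces arith_uint256.
uint64_t AntiDoSWorkThreshold(uint64_t tip_chain_work, uint64_t work_per_block,
                              uint64_t minimum_chain_work)
{
    // Use a 144-block buffer (roughly one day of work) below the tip,
    // clamped so the subtraction cannot underflow.
    const uint64_t buffer = std::min<uint64_t>(144 * work_per_block, tip_chain_work);
    const uint64_t near_chaintip_work = tip_chain_work - buffer;
    return std::max(near_chaintip_work, minimum_chain_work);
}
```

During initial sync the tip has little work and nMinimumChainWork dominates; once the node is caught up, the tip-relative term takes over, so the threshold tracks the active chain.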
/**
* Special handling for unconnecting headers that might be part of a block
* announcement.
@@ -2285,7 +2411,7 @@ void PeerManagerImpl::HandleFewUnconnectingHeaders(CNode& pfrom, Peer& peer,
nodestate->nUnconnectingHeaders++;
// Try to fill in the missing headers.
if (MaybeSendGetHeaders(pfrom, m_chainman.ActiveChain().GetLocator(m_chainman.m_best_header), peer)) {
if (MaybeSendGetHeaders(pfrom, GetLocator(m_chainman.m_best_header), peer)) {
LogPrint(BCLog::NET, "received header %s: missing prev block %s, sending getheaders (%d) to end (peer=%d, nUnconnectingHeaders=%d)\n",
headers[0].GetHash().ToString(),
headers[0].hashPrevBlock.ToString(),
@@ -2316,6 +2442,146 @@ bool PeerManagerImpl::CheckHeadersAreContinuous(const std::vector<CBlockHeader>&
return true;
}
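The continuity check whose tail appears above amounts to verifying that each header's hashPrevBlock equals the hash of the header before it. A hypothetical standalone sketch, with string hashes standing in for `uint256` and an aggregate `Header` standing in for `CBlockHeader`:

```cpp
#include <string>
#include <vector>

// Illustrative stand-in for CBlockHeader: only the fields the check needs.
struct Header {
    std::string hash;       // this header's hash
    std::string prev_hash;  // hashPrevBlock
};

bool HeadersAreContinuous(const std::vector<Header>& headers)
{
    for (size_t i = 1; i < headers.size(); ++i) {
        // Each header must build directly on its predecessor.
        if (headers[i].prev_hash != headers[i - 1].hash) return false;
    }
    return true; // empty and single-element sequences are trivially continuous
}
```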
bool PeerManagerImpl::IsContinuationOfLowWorkHeadersSync(Peer& peer, CNode& pfrom, std::vector<CBlockHeader>& headers)
{
if (peer.m_headers_sync) {
auto result = peer.m_headers_sync->ProcessNextHeaders(headers, headers.size() == MAX_HEADERS_RESULTS);
if (result.request_more) {
auto locator = peer.m_headers_sync->NextHeadersRequestLocator();
// If we were instructed to ask for a locator, it should not be empty.
Assume(!locator.vHave.empty());
if (!locator.vHave.empty()) {
// It should be impossible for the getheaders request to fail,
// because we should have cleared the last getheaders timestamp
// when processing the headers that triggered this call. But
// it may be possible to bypass this via compactblock
// processing, so check the result before logging just to be
// safe.
bool sent_getheaders = MaybeSendGetHeaders(pfrom, locator, peer);
if (sent_getheaders) {
LogPrint(BCLog::NET, "more getheaders (from %s) to peer=%d\n",
locator.vHave.front().ToString(), pfrom.GetId());
} else {
LogPrint(BCLog::NET, "error sending next getheaders (from %s) to continue sync with peer=%d\n",
locator.vHave.front().ToString(), pfrom.GetId());
}
}
}
if (peer.m_headers_sync->GetState() == HeadersSyncState::State::FINAL) {
peer.m_headers_sync.reset(nullptr);
// Delete this peer's entry in m_headers_presync_stats.
// If this is m_headers_presync_bestpeer, it will be replaced later
// by the next peer that triggers the else{} branch below.
LOCK(m_headers_presync_mutex);
m_headers_presync_stats.erase(pfrom.GetId());
} else {
// Build statistics for this peer's sync.
HeadersPresyncStats stats;
stats.first = peer.m_headers_sync->GetPresyncWork();
if (peer.m_headers_sync->GetState() == HeadersSyncState::State::PRESYNC) {
stats.second = {peer.m_headers_sync->GetPresyncHeight(),
peer.m_headers_sync->GetPresyncTime()};
}
// Record the updated statistics for this peer.
LOCK(m_headers_presync_mutex);
m_headers_presync_stats[pfrom.GetId()] = stats;
auto best_it = m_headers_presync_stats.find(m_headers_presync_bestpeer);
bool best_updated = false;
if (best_it == m_headers_presync_stats.end()) {
// If the cached best peer is outdated, iterate over all remaining ones (including
// newly updated one) to find the best one.
NodeId peer_best{-1};
const HeadersPresyncStats* stat_best{nullptr};
for (const auto& [peer, stat] : m_headers_presync_stats) {
if (!stat_best || stat > *stat_best) {
peer_best = peer;
stat_best = &stat;
}
}
m_headers_presync_bestpeer = peer_best;
best_updated = (peer_best == pfrom.GetId());
} else if (best_it->first == pfrom.GetId() || stats > best_it->second) {
// pfrom was and remains the best peer, or pfrom just became best.
m_headers_presync_bestpeer = pfrom.GetId();
best_updated = true;
}
if (best_updated && stats.second.has_value()) {
// If the best peer updated, and it is in its first phase, signal.
m_headers_presync_should_signal = true;
}
}
if (result.success) {
// We only overwrite the headers passed in if processing was
// successful.
headers.swap(result.pow_validated_headers);
}
return result.success;
}
// Either we didn't have a sync in progress, or something went wrong
// processing these headers, or we are returning headers to the caller to
// process.
return false;
}
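The best-peer selection above relies on the lexicographic ordering of HeadersPresyncStats: peers compare first by verified work, and ties are broken by the optional, where a REDOWNLOAD-phase peer (nullopt) ranks below any PRESYNC peer with equal work. A small sketch of that selection loop, with `uint64_t` standing in for `arith_uint256` (the `BestPeer` helper is illustrative, not part of the real code):

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <utility>

// Mirrors HeadersPresyncStats: {verified work, optional {height, timestamp}}.
using Stats = std::pair<uint64_t, std::optional<std::pair<int64_t, uint32_t>>>;
using NodeId = int64_t;

NodeId BestPeer(const std::map<NodeId, Stats>& stats)
{
    NodeId best{-1};
    const Stats* best_stat{nullptr};
    for (const auto& [peer, stat] : stats) {
        // std::pair/std::optional comparison: work first, then phase/height.
        if (!best_stat || stat > *best_stat) {
            best = peer;
            best_stat = &stat;
        }
    }
    return best;
}
```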
bool PeerManagerImpl::TryLowWorkHeadersSync(Peer& peer, CNode& pfrom, const CBlockIndex* chain_start_header, std::vector<CBlockHeader>& headers)
{
// Calculate the total work on this chain.
arith_uint256 total_work = chain_start_header->nChainWork + CalculateHeadersWork(headers);
// Our dynamic anti-DoS threshold (minimum work required on a headers chain
// before we'll store it)
arith_uint256 minimum_chain_work = GetAntiDoSWorkThreshold();
// Avoid DoS via low-difficulty-headers by only processing if the headers
// are part of a chain with sufficient work.
if (total_work < minimum_chain_work) {
// Only try to sync with this peer if their headers message was full;
// otherwise they don't have more headers after this so no point in
// trying to sync their too-little-work chain.
if (headers.size() == MAX_HEADERS_RESULTS) {
// Note: we could advance to the last header in this set that is
// known to us, rather than starting at the first header (which we
// may already have); however this is unlikely to matter much since
// ProcessHeadersMessage() already handles the case where all
// headers in a received message are already known and are
// ancestors of m_best_header or chainActive.Tip(), by skipping
// this logic in that case. So even if the first header in this set
// of headers is known, some header in this set must be new, so
// advancing to the first unknown header would be a small effect.
LOCK(peer.m_headers_sync_mutex);
peer.m_headers_sync.reset(new HeadersSyncState(peer.m_id, m_chainparams.GetConsensus(),
chain_start_header, minimum_chain_work));
// Now that a HeadersSyncState object for tracking this synchronization
// has been created, process the headers using it as normal.
return IsContinuationOfLowWorkHeadersSync(peer, pfrom, headers);
} else {
LogPrint(BCLog::NET, "Ignoring low-work chain (height=%u) from peer=%d\n", chain_start_header->nHeight + headers.size(), pfrom.GetId());
// Since this is a low-work headers chain, no further processing is required.
headers = {};
return true;
}
}
return false;
}
bool PeerManagerImpl::IsAncestorOfBestHeaderOrTip(const CBlockIndex* header)
{
if (header == nullptr) {
return false;
} else if (m_chainman.m_best_header != nullptr && header == m_chainman.m_best_header->GetAncestor(header->nHeight)) {
return true;
} else if (m_chainman.ActiveChain().Contains(header)) {
return true;
}
return false;
}
bool PeerManagerImpl::MaybeSendGetHeaders(CNode& pfrom, const CBlockLocator& locator, Peer& peer)
{
const CNetMsgMaker msgMaker(pfrom.GetCommonVersion());
@@ -2461,21 +2727,73 @@ void PeerManagerImpl::UpdatePeerStateForReceivedHeaders(CNode& pfrom,
}
void PeerManagerImpl::ProcessHeadersMessage(CNode& pfrom, Peer& peer,
const std::vector<CBlockHeader>& headers,
std::vector<CBlockHeader>&& headers,
bool via_compact_block)
{
const CNetMsgMaker msgMaker(pfrom.GetCommonVersion());
size_t nCount = headers.size();
if (nCount == 0) {
// Nothing interesting. Stop asking this peer for more headers.
// If we were in the middle of headers sync, receiving an empty headers
// message suggests that the peer suddenly has nothing to give us
// (perhaps it reorged to our chain). Clear download state for this peer.
LOCK(peer.m_headers_sync_mutex);
if (peer.m_headers_sync) {
peer.m_headers_sync.reset(nullptr);
LOCK(m_headers_presync_mutex);
m_headers_presync_stats.erase(pfrom.GetId());
}
return;
}
// Before we do any processing, make sure these pass basic sanity checks.
// We'll rely on headers having valid proof-of-work further down, as an
// anti-DoS criterion (note: this check is required before passing any
// headers into HeadersSyncState).
if (!CheckHeadersPoW(headers, m_chainparams.GetConsensus(), peer)) {
// Misbehaving() calls are handled within CheckHeadersPoW(), so we can
// just return. (Note that even if a header is announced via compact
// block, the header itself should be valid, so this type of error can
// always be punished.)
return;
}
const CBlockIndex *pindexLast = nullptr;
// We'll set already_validated_work to true if these headers are
// successfully processed as part of a low-work headers sync in progress
// (either in PRESYNC or REDOWNLOAD phase).
// If true, this will mean that any headers returned to us (ie during
// REDOWNLOAD) can be validated without further anti-DoS checks.
bool already_validated_work = false;
// If we're in the middle of headers sync, let it do its magic.
bool have_headers_sync = false;
{
LOCK(peer.m_headers_sync_mutex);
already_validated_work = IsContinuationOfLowWorkHeadersSync(peer, pfrom, headers);
// The headers we passed in may have been:
// - untouched, perhaps if no headers-sync was in progress, or some
// failure occurred
// - erased, such as if the headers were successfully processed and no
// additional headers processing needs to take place (such as if we
// are still in PRESYNC)
// - replaced with headers that are now ready for validation, such as
// during the REDOWNLOAD phase of a low-work headers sync.
// So just check whether we still have headers that we need to process,
// or not.
if (headers.empty()) {
return;
}
have_headers_sync = !!peer.m_headers_sync;
}
// Do these headers connect to something in our block index?
bool headers_connect_blockindex{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers[0].hashPrevBlock) != nullptr)};
const CBlockIndex *chain_start_header{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers[0].hashPrevBlock))};
bool headers_connect_blockindex{chain_start_header != nullptr};
if (!headers_connect_blockindex) {
if (nCount <= MAX_BLOCKS_TO_ANNOUNCE) {
@@ -2489,28 +2807,51 @@ void PeerManagerImpl::ProcessHeadersMessage(CNode& pfrom, Peer& peer,
return;
}
// If the headers we received are already in memory and an ancestor of
// m_best_header or our tip, skip anti-DoS checks. These headers will not
// use any more memory (and we are not leaking information that could be
// used to fingerprint us).
const CBlockIndex *last_received_header{nullptr};
{
LOCK(cs_main);
last_received_header = m_chainman.m_blockman.LookupBlockIndex(headers.back().GetHash());
if (IsAncestorOfBestHeaderOrTip(last_received_header)) {
already_validated_work = true;
}
}
// At this point, the headers connect to something in our block index.
if (!CheckHeadersAreContinuous(headers)) {
Misbehaving(peer, 20, "non-continuous headers sequence");
// Do anti-DoS checks to determine if we should process or store for later
// processing.
if (!already_validated_work && TryLowWorkHeadersSync(peer, pfrom,
chain_start_header, headers)) {
// If we successfully started a low-work headers sync, then there
// should be no headers to process any further.
Assume(headers.empty());
return;
}
// At this point, we have a set of headers with sufficient work on them
// which can be processed.
// If we don't have the last header, then this peer will have given us
// something new (if these headers are valid).
bool received_new_header{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers.back().GetHash()) == nullptr)};
bool received_new_header{last_received_header == nullptr};
// Now process all the headers.
BlockValidationState state;
if (!m_chainman.ProcessNewBlockHeaders(headers, state, &pindexLast)) {
if (!m_chainman.ProcessNewBlockHeaders(headers, /*min_pow_checked=*/true, state, &pindexLast)) {
if (state.IsInvalid()) {
MaybePunishNodeForBlock(pfrom.GetId(), state, via_compact_block, "invalid header received");
return;
}
}
Assume(pindexLast);
// Consider fetching more headers.
if (nCount == MAX_HEADERS_RESULTS) {
// Consider fetching more headers if we are not using our headers-sync mechanism.
if (nCount == MAX_HEADERS_RESULTS && !have_headers_sync) {
// Headers message had its maximum size; the peer may have more headers.
if (MaybeSendGetHeaders(pfrom, WITH_LOCK(m_chainman.GetMutex(), return m_chainman.ActiveChain().GetLocator(pindexLast)), peer)) {
if (MaybeSendGetHeaders(pfrom, GetLocator(pindexLast), peer)) {
LogPrint(BCLog::NET, "more getheaders (%d) to end to peer=%d (startheight:%d)\n",
pindexLast->nHeight, pfrom.GetId(), peer.m_starting_height);
}
@@ -2771,10 +3112,10 @@ void PeerManagerImpl::ProcessGetCFCheckPt(CNode& node, Peer& peer, CDataStream&
m_connman.PushMessage(&node, std::move(msg));
}
void PeerManagerImpl::ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing)
void PeerManagerImpl::ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked)
{
bool new_block{false};
m_chainman.ProcessNewBlock(block, force_processing, &new_block);
m_chainman.ProcessNewBlock(block, force_processing, min_pow_checked, &new_block);
if (new_block) {
node.m_last_block_time = GetTime<std::chrono::seconds>();
} else {
@@ -3032,13 +3373,6 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
pfrom.ConnectionTypeAsString());
}
if (pfrom.GetCommonVersion() >= SENDHEADERS_VERSION) {
// Tell our peer we prefer to receive headers rather than inv's
// We send this to non-NODE NETWORK peers as well, because even
// non-NODE NETWORK peers can announce blocks (such as pruning
// nodes)
m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::SENDHEADERS));
}
if (pfrom.GetCommonVersion() >= SHORT_IDS_BLOCKS_VERSION) {
// Tell our peer we are willing to provide version 2 cmpctblocks.
// However, we do not request new block announcements using
@@ -3285,7 +3619,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// use if we turned on sync with all peers).
CNodeState& state{*Assert(State(pfrom.GetId()))};
if (state.fSyncStarted || (!peer->m_inv_triggered_getheaders_before_sync && *best_block != m_last_block_inv_triggering_headers_sync)) {
if (MaybeSendGetHeaders(pfrom, m_chainman.ActiveChain().GetLocator(m_chainman.m_best_header), *peer)) {
if (MaybeSendGetHeaders(pfrom, GetLocator(m_chainman.m_best_header), *peer)) {
LogPrint(BCLog::NET, "getheaders (%d) %s to peer=%d\n",
m_chainman.m_best_header->nHeight, best_block->ToString(),
pfrom.GetId());
@@ -3749,12 +4083,17 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
{
LOCK(cs_main);
if (!m_chainman.m_blockman.LookupBlockIndex(cmpctblock.header.hashPrevBlock)) {
const CBlockIndex* prev_block = m_chainman.m_blockman.LookupBlockIndex(cmpctblock.header.hashPrevBlock);
if (!prev_block) {
// Doesn't connect (or is genesis), instead of DoSing in AcceptBlockHeader, request deeper headers
if (!m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
MaybeSendGetHeaders(pfrom, m_chainman.ActiveChain().GetLocator(m_chainman.m_best_header), *peer);
MaybeSendGetHeaders(pfrom, GetLocator(m_chainman.m_best_header), *peer);
}
return;
} else if (prev_block->nChainWork + CalculateHeadersWork({cmpctblock.header}) < GetAntiDoSWorkThreshold()) {
// If we get a low-work header in a compact block, we can ignore it.
LogPrint(BCLog::NET, "Ignoring low-work compact block from peer %d\n", pfrom.GetId());
return;
}
if (!m_chainman.m_blockman.LookupBlockIndex(cmpctblock.header.GetHash())) {
@@ -3764,7 +4103,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
const CBlockIndex *pindex = nullptr;
BlockValidationState state;
if (!m_chainman.ProcessNewBlockHeaders({cmpctblock.header}, state, &pindex)) {
if (!m_chainman.ProcessNewBlockHeaders({cmpctblock.header}, /*min_pow_checked=*/true, state, &pindex)) {
if (state.IsInvalid()) {
MaybePunishNodeForBlock(pfrom.GetId(), state, /*via_compact_block=*/true, "invalid header via cmpctblock");
return;
@@ -3931,7 +4270,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// we have a chain with at least nMinimumChainWork), and we ignore
// compact blocks with less work than our tip, it is safe to treat
// reconstructed compact blocks as having been requested.
ProcessBlock(pfrom, pblock, /*force_processing=*/true);
ProcessBlock(pfrom, pblock, /*force_processing=*/true, /*min_pow_checked=*/true);
LOCK(cs_main); // hold cs_main for CBlockIndex::IsValid()
if (pindex->IsValid(BLOCK_VALID_TRANSACTIONS)) {
// Clear download state for this block, which is in
@@ -4014,7 +4353,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// disk-space attacks), but this should be safe due to the
// protections in the compact block handler -- see related comment
// in compact block optimistic reconstruction handling.
ProcessBlock(pfrom, pblock, /*force_processing=*/true);
ProcessBlock(pfrom, pblock, /*force_processing=*/true, /*min_pow_checked=*/true);
}
return;
}
@@ -4045,7 +4384,23 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
ReadCompactSize(vRecv); // ignore tx count; assume it is 0.
}
return ProcessHeadersMessage(pfrom, *peer, headers, /*via_compact_block=*/false);
ProcessHeadersMessage(pfrom, *peer, std::move(headers), /*via_compact_block=*/false);
// Check if the headers presync progress needs to be reported to validation.
// This needs to be done without holding the m_headers_presync_mutex lock.
if (m_headers_presync_should_signal.exchange(false)) {
HeadersPresyncStats stats;
{
LOCK(m_headers_presync_mutex);
auto it = m_headers_presync_stats.find(m_headers_presync_bestpeer);
if (it != m_headers_presync_stats.end()) stats = it->second;
}
if (stats.second) {
m_chainman.ReportHeadersPresync(stats.first, stats.second->first, stats.second->second);
}
}
return;
}
if (msg_type == NetMsgType::BLOCK)
@@ -4063,6 +4418,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
bool forceProcessing = false;
const uint256 hash(pblock->GetHash());
bool min_pow_checked = false;
{
LOCK(cs_main);
// Always process the block if we requested it, since we may
@@ -4073,8 +4429,14 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// which peers send us compact blocks, so the race between here and
// cs_main in ProcessNewBlock is fine.
mapBlockSource.emplace(hash, std::make_pair(pfrom.GetId(), true));
// Check work on this block against our anti-dos thresholds.
const CBlockIndex* prev_block = m_chainman.m_blockman.LookupBlockIndex(pblock->hashPrevBlock);
if (prev_block && prev_block->nChainWork + CalculateHeadersWork({pblock->GetBlockHeader()}) >= GetAntiDoSWorkThreshold()) {
min_pow_checked = true;
}
ProcessBlock(pfrom, pblock, forceProcessing);
}
ProcessBlock(pfrom, pblock, forceProcessing, min_pow_checked);
return;
}
@@ -4502,7 +4864,7 @@ void PeerManagerImpl::ConsiderEviction(CNode& pto, Peer& peer, std::chrono::seco
// getheaders in-flight already, in which case the peer should
// still respond to us with a sufficiently high work chain tip.
MaybeSendGetHeaders(pto,
m_chainman.ActiveChain().GetLocator(state.m_chain_sync.m_work_header->pprev),
GetLocator(state.m_chain_sync.m_work_header->pprev),
peer);
LogPrint(BCLog::NET, "sending getheaders to outbound peer=%d to verify chain work (current best known block:%s, benchmark blockhash: %s)\n", pto.GetId(), state.pindexBestKnownBlock != nullptr ? state.pindexBestKnownBlock->GetBlockHash().ToString() : "<none>", state.m_chain_sync.m_work_header->GetBlockHash().ToString());
state.m_chain_sync.m_sent_getheaders = true;
@@ -4759,6 +5121,27 @@ void PeerManagerImpl::MaybeSendAddr(CNode& node, Peer& peer, std::chrono::micros
}
}
void PeerManagerImpl::MaybeSendSendHeaders(CNode& node, Peer& peer)
{
// Delay sending SENDHEADERS (BIP 130) until we're done with an
// initial-headers-sync with this peer. Receiving headers announcements for
// new blocks while trying to sync their headers chain is problematic,
// because of the state tracking done.
if (!peer.m_sent_sendheaders && node.GetCommonVersion() >= SENDHEADERS_VERSION) {
LOCK(cs_main);
CNodeState &state = *State(node.GetId());
if (state.pindexBestKnownBlock != nullptr &&
state.pindexBestKnownBlock->nChainWork > nMinimumChainWork) {
// Tell our peer we prefer to receive headers rather than inv's
// We send this to non-NODE NETWORK peers as well, because even
// non-NODE NETWORK peers can announce blocks (such as pruning
// nodes)
m_connman.PushMessage(&node, CNetMsgMaker(node.GetCommonVersion()).Make(NetMsgType::SENDHEADERS));
peer.m_sent_sendheaders = true;
}
}
}
void PeerManagerImpl::MaybeSendFeefilter(CNode& pto, Peer& peer, std::chrono::microseconds current_time)
{
if (m_ignore_incoming_txs) return;
@ -4880,6 +5263,8 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
MaybeSendAddr(*pto, *peer, current_time);
MaybeSendSendHeaders(*pto, *peer);
{
LOCK(cs_main);
@ -4924,7 +5309,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
got back an empty response. */
if (pindexStart->pprev)
pindexStart = pindexStart->pprev;
if (MaybeSendGetHeaders(*pto, m_chainman.ActiveChain().GetLocator(pindexStart), *peer)) {
if (MaybeSendGetHeaders(*pto, GetLocator(pindexStart), *peer)) {
LogPrint(BCLog::NET, "initial getheaders (%d) to peer=%d (startheight:%d)\n", pindexStart->nHeight, pto->GetId(), peer->m_starting_height);
state.fSyncStarted = true;


@ -35,6 +35,7 @@ struct CNodeStateStats {
uint64_t m_addr_rate_limited = 0;
bool m_addr_relay_enabled{false};
ServiceFlags their_services;
int64_t presync_height{-1};
};
class PeerManager : public CValidationInterface, public NetEventsInterface


@ -53,7 +53,7 @@ void CClientUIInterface::NotifyNetworkActiveChanged(bool networkActive) { return
void CClientUIInterface::NotifyAlertChanged() { return g_ui_signals.NotifyAlertChanged(); }
void CClientUIInterface::ShowProgress(const std::string& title, int nProgress, bool resume_possible) { return g_ui_signals.ShowProgress(title, nProgress, resume_possible); }
void CClientUIInterface::NotifyBlockTip(SynchronizationState s, const CBlockIndex* i) { return g_ui_signals.NotifyBlockTip(s, i); }
void CClientUIInterface::NotifyHeaderTip(SynchronizationState s, const CBlockIndex* i) { return g_ui_signals.NotifyHeaderTip(s, i); }
void CClientUIInterface::NotifyHeaderTip(SynchronizationState s, int64_t height, int64_t timestamp, bool presync) { return g_ui_signals.NotifyHeaderTip(s, height, timestamp, presync); }
void CClientUIInterface::BannedListChanged() { return g_ui_signals.BannedListChanged(); }
bool InitError(const bilingual_str& str)


@ -105,7 +105,7 @@ public:
ADD_SIGNALS_DECL_WRAPPER(NotifyBlockTip, void, SynchronizationState, const CBlockIndex*);
/** Best header has changed */
ADD_SIGNALS_DECL_WRAPPER(NotifyHeaderTip, void, SynchronizationState, const CBlockIndex*);
ADD_SIGNALS_DECL_WRAPPER(NotifyHeaderTip, void, SynchronizationState, int64_t height, int64_t timestamp, bool presync);
/** Banlist did change. */
ADD_SIGNALS_DECL_WRAPPER(BannedListChanged, void, void);


@ -377,9 +377,8 @@ public:
std::unique_ptr<Handler> handleNotifyHeaderTip(NotifyHeaderTipFn fn) override
{
return MakeHandler(
::uiInterface.NotifyHeaderTip_connect([fn](SynchronizationState sync_state, const CBlockIndex* block) {
fn(sync_state, BlockTip{block->nHeight, block->GetBlockTime(), block->GetBlockHash()},
/* verification progress is unused when a header was received */ 0);
::uiInterface.NotifyHeaderTip_connect([fn](SynchronizationState sync_state, int64_t height, int64_t timestamp, bool presync) {
fn(sync_state, BlockTip{(int)height, timestamp, uint256{}}, presync);
}));
}
NodeContext* context() override { return m_context; }
@ -400,7 +399,7 @@ bool FillBlock(const CBlockIndex* index, const FoundBlock& block, UniqueLock<Rec
if (block.m_max_time) *block.m_max_time = index->GetBlockTimeMax();
if (block.m_mtp_time) *block.m_mtp_time = index->GetMedianTimePast();
if (block.m_in_active_chain) *block.m_in_active_chain = active[index->nHeight] == index;
if (block.m_locator) { *block.m_locator = active.GetLocator(index); }
if (block.m_locator) { *block.m_locator = GetLocator(index); }
if (block.m_next_block) FillBlock(active[index->nHeight] == index ? active[index->nHeight + 1] : nullptr, *block.m_next_block, lock, active);
if (block.m_data) {
REVERSE_LOCK(lock);
@ -527,8 +526,7 @@ public:
{
LOCK(::cs_main);
const CBlockIndex* index = chainman().m_blockman.LookupBlockIndex(block_hash);
if (!index) return {};
return chainman().ActiveChain().GetLocator(index);
return GetLocator(index);
}
std::optional<int> findLocatorFork(const CBlockLocator& locator) override
{


@ -71,6 +71,57 @@ unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nF
return bnNew.GetCompact();
}
// Check that on difficulty adjustments, the new difficulty does not increase
// or decrease beyond the permitted limits.
bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits)
{
if (params.fPowAllowMinDifficultyBlocks) return true;
if (height % params.DifficultyAdjustmentInterval() == 0) {
int64_t smallest_timespan = params.nPowTargetTimespan/4;
int64_t largest_timespan = params.nPowTargetTimespan*4;
const arith_uint256 pow_limit = UintToArith256(params.powLimit);
arith_uint256 observed_new_target;
observed_new_target.SetCompact(new_nbits);
// Calculate the largest difficulty value possible:
arith_uint256 largest_difficulty_target;
largest_difficulty_target.SetCompact(old_nbits);
largest_difficulty_target *= largest_timespan;
largest_difficulty_target /= params.nPowTargetTimespan;
if (largest_difficulty_target > pow_limit) {
largest_difficulty_target = pow_limit;
}
// Round and then compare this new calculated value to what is
// observed.
arith_uint256 maximum_new_target;
maximum_new_target.SetCompact(largest_difficulty_target.GetCompact());
if (maximum_new_target < observed_new_target) return false;
// Calculate the smallest difficulty value possible:
arith_uint256 smallest_difficulty_target;
smallest_difficulty_target.SetCompact(old_nbits);
smallest_difficulty_target *= smallest_timespan;
smallest_difficulty_target /= params.nPowTargetTimespan;
if (smallest_difficulty_target > pow_limit) {
smallest_difficulty_target = pow_limit;
}
// Round and then compare this new calculated value to what is
// observed.
arith_uint256 minimum_new_target;
minimum_new_target.SetCompact(smallest_difficulty_target.GetCompact());
if (minimum_new_target > observed_new_target) return false;
} else if (old_nbits != new_nbits) {
return false;
}
return true;
}
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
bool fNegative;


@ -20,4 +20,18 @@ unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nF
/** Check whether a block hash satisfies the proof-of-work requirement specified by nBits */
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params&);
/**
* Return false if the proof-of-work requirement specified by new_nbits at a
* given height is not possible, given the proof-of-work on the prior block as
* specified by old_nbits.
*
* This function only checks that the new value is within a factor of 4 of the
* old value for blocks at the difficulty adjustment interval, and otherwise
* requires the values to be the same.
*
* Always returns true on networks where min difficulty blocks are allowed,
* such as regtest/testnet.
*/
bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits);
#endif // BITCOIN_POW_H


@ -75,6 +75,7 @@ Q_IMPORT_PLUGIN(QAndroidPlatformIntegrationPlugin)
Q_DECLARE_METATYPE(bool*)
Q_DECLARE_METATYPE(CAmount)
Q_DECLARE_METATYPE(SynchronizationState)
Q_DECLARE_METATYPE(SyncType)
Q_DECLARE_METATYPE(uint256)
static void RegisterMetaTypes()
@ -82,6 +83,7 @@ static void RegisterMetaTypes()
// Register meta types used for QMetaObject::invokeMethod and Qt::QueuedConnection
qRegisterMetaType<bool*>();
qRegisterMetaType<SynchronizationState>();
qRegisterMetaType<SyncType>();
#ifdef ENABLE_WALLET
qRegisterMetaType<WalletModel*>();
#endif


@ -615,8 +615,8 @@ void BitcoinGUI::setClientModel(ClientModel *_clientModel, interfaces::BlockAndH
connect(_clientModel, &ClientModel::numConnectionsChanged, this, &BitcoinGUI::setNumConnections);
connect(_clientModel, &ClientModel::networkActiveChanged, this, &BitcoinGUI::setNetworkActive);
modalOverlay->setKnownBestHeight(tip_info->header_height, QDateTime::fromSecsSinceEpoch(tip_info->header_time));
setNumBlocks(tip_info->block_height, QDateTime::fromSecsSinceEpoch(tip_info->block_time), tip_info->verification_progress, false, SynchronizationState::INIT_DOWNLOAD);
modalOverlay->setKnownBestHeight(tip_info->header_height, QDateTime::fromSecsSinceEpoch(tip_info->header_time), /*presync=*/false);
setNumBlocks(tip_info->block_height, QDateTime::fromSecsSinceEpoch(tip_info->block_time), tip_info->verification_progress, SyncType::BLOCK_SYNC, SynchronizationState::INIT_DOWNLOAD);
connect(_clientModel, &ClientModel::numBlocksChanged, this, &BitcoinGUI::setNumBlocks);
// Receive and report messages from client model
@ -1026,6 +1026,13 @@ void BitcoinGUI::updateHeadersSyncProgressLabel()
progressBarLabel->setText(tr("Syncing Headers (%1%)…").arg(QString::number(100.0 / (headersTipHeight+estHeadersLeft)*headersTipHeight, 'f', 1)));
}
void BitcoinGUI::updateHeadersPresyncProgressLabel(int64_t height, const QDateTime& blockDate)
{
int estHeadersLeft = blockDate.secsTo(QDateTime::currentDateTime()) / Params().GetConsensus().nPowTargetSpacing;
if (estHeadersLeft > HEADER_HEIGHT_DELTA_SYNC)
progressBarLabel->setText(tr("Pre-syncing Headers (%1%)…").arg(QString::number(100.0 / (height+estHeadersLeft)*height, 'f', 1)));
}
void BitcoinGUI::openOptionsDialogWithTab(OptionsDialog::Tab tab)
{
if (!clientModel || !clientModel->getOptionsModel())
@ -1039,7 +1046,7 @@ void BitcoinGUI::openOptionsDialogWithTab(OptionsDialog::Tab tab)
GUIUtil::ShowModalDialogAsynchronously(dlg);
}
void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool header, SynchronizationState sync_state)
void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state)
{
// Disabling macOS App Nap on initial sync, disk and reindex operations.
#ifdef Q_OS_MACOS
@ -1052,8 +1059,8 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
if (modalOverlay)
{
if (header)
modalOverlay->setKnownBestHeight(count, blockDate);
if (synctype != SyncType::BLOCK_SYNC)
modalOverlay->setKnownBestHeight(count, blockDate, synctype == SyncType::HEADER_PRESYNC);
else
modalOverlay->tipUpdate(count, blockDate, nVerificationProgress);
}
@ -1067,7 +1074,10 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
enum BlockSource blockSource = clientModel->getBlockSource();
switch (blockSource) {
case BlockSource::NETWORK:
if (header) {
if (synctype == SyncType::HEADER_PRESYNC) {
updateHeadersPresyncProgressLabel(count, blockDate);
return;
} else if (synctype == SyncType::HEADER_SYNC) {
updateHeadersSyncProgressLabel();
return;
}
@ -1075,7 +1085,7 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
updateHeadersSyncProgressLabel();
break;
case BlockSource::DISK:
if (header) {
if (synctype != SyncType::BLOCK_SYNC) {
progressBarLabel->setText(tr("Indexing blocks on disk…"));
} else {
progressBarLabel->setText(tr("Processing blocks on disk…"));
@ -1085,7 +1095,7 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
progressBarLabel->setText(tr("Reindexing blocks on disk…"));
break;
case BlockSource::NONE:
if (header) {
if (synctype != SyncType::BLOCK_SYNC) {
return;
}
progressBarLabel->setText(tr("Connecting to peers…"));


@ -10,6 +10,7 @@
#endif
#include <qt/bitcoinunits.h>
#include <qt/clientmodel.h>
#include <qt/guiutil.h>
#include <qt/optionsdialog.h>
@ -28,7 +29,6 @@
#include <memory>
class ClientModel;
class NetworkStyle;
class Notificator;
class OptionsModel;
@ -208,6 +208,7 @@ private:
void updateNetworkState();
void updateHeadersSyncProgressLabel();
void updateHeadersPresyncProgressLabel(int64_t height, const QDateTime& blockDate);
/** Open the OptionsDialog on the specified tab index */
void openOptionsDialogWithTab(OptionsDialog::Tab tab);
@ -226,7 +227,7 @@ public Q_SLOTS:
/** Set network state shown in the UI */
void setNetworkActive(bool network_active);
/** Set number of blocks and last block date shown in the UI */
void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers, SynchronizationState sync_state);
void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state);
/** Notify the user of an event from the core network or transaction handling code.
@param[in] title the message box / notification title


@ -215,26 +215,26 @@ QString ClientModel::blocksDir() const
return GUIUtil::PathToQString(gArgs.GetBlocksDirPath());
}
void ClientModel::TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, bool header)
void ClientModel::TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, SyncType synctype)
{
if (header) {
if (synctype == SyncType::HEADER_SYNC) {
// cache best headers time and height to reduce future cs_main locks
cachedBestHeaderHeight = tip.block_height;
cachedBestHeaderTime = tip.block_time;
} else {
} else if (synctype == SyncType::BLOCK_SYNC) {
m_cached_num_blocks = tip.block_height;
WITH_LOCK(m_cached_tip_mutex, m_cached_tip_blocks = tip.block_hash;);
}
// Throttle GUI notifications about (a) blocks during initial sync, and (b) both blocks and headers during reindex.
const bool throttle = (sync_state != SynchronizationState::POST_INIT && !header) || sync_state == SynchronizationState::INIT_REINDEX;
const bool throttle = (sync_state != SynchronizationState::POST_INIT && synctype == SyncType::BLOCK_SYNC) || sync_state == SynchronizationState::INIT_REINDEX;
const int64_t now = throttle ? GetTimeMillis() : 0;
int64_t& nLastUpdateNotification = header ? nLastHeaderTipUpdateNotification : nLastBlockTipUpdateNotification;
int64_t& nLastUpdateNotification = synctype != SyncType::BLOCK_SYNC ? nLastHeaderTipUpdateNotification : nLastBlockTipUpdateNotification;
if (throttle && now < nLastUpdateNotification + count_milliseconds(MODEL_UPDATE_DELAY)) {
return;
}
Q_EMIT numBlocksChanged(tip.block_height, QDateTime::fromSecsSinceEpoch(tip.block_time), verification_progress, header, sync_state);
Q_EMIT numBlocksChanged(tip.block_height, QDateTime::fromSecsSinceEpoch(tip.block_time), verification_progress, synctype, sync_state);
nLastUpdateNotification = now;
}
@ -264,11 +264,11 @@ void ClientModel::subscribeToCoreSignals()
});
m_handler_notify_block_tip = m_node.handleNotifyBlockTip(
[this](SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress) {
TipChanged(sync_state, tip, verification_progress, /*header=*/false);
TipChanged(sync_state, tip, verification_progress, SyncType::BLOCK_SYNC);
});
m_handler_notify_header_tip = m_node.handleNotifyHeaderTip(
[this](SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress) {
TipChanged(sync_state, tip, verification_progress, /*header=*/true);
[this](SynchronizationState sync_state, interfaces::BlockTip tip, bool presync) {
TipChanged(sync_state, tip, /*verification_progress=*/0.0, presync ? SyncType::HEADER_PRESYNC : SyncType::HEADER_SYNC);
});
}


@ -37,6 +37,12 @@ enum class BlockSource {
NETWORK
};
enum class SyncType {
HEADER_PRESYNC,
HEADER_SYNC,
BLOCK_SYNC
};
enum NumConnections {
CONNECTIONS_NONE = 0,
CONNECTIONS_IN = (1U << 0),
@ -105,13 +111,13 @@ private:
//! A thread to interact with m_node asynchronously
QThread* const m_thread;
void TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, bool header) EXCLUSIVE_LOCKS_REQUIRED(!m_cached_tip_mutex);
void TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, SyncType synctype) EXCLUSIVE_LOCKS_REQUIRED(!m_cached_tip_mutex);
void subscribeToCoreSignals();
void unsubscribeFromCoreSignals();
Q_SIGNALS:
void numConnectionsChanged(int count);
void numBlocksChanged(int count, const QDateTime& blockDate, double nVerificationProgress, bool header, SynchronizationState sync_state);
void numBlocksChanged(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType header, SynchronizationState sync_state);
void mempoolSizeChanged(long count, size_t mempoolSizeInBytes);
void networkActiveChanged(bool networkActive);
void alertsChanged(const QString &warnings);


@ -78,13 +78,16 @@ bool ModalOverlay::event(QEvent* ev) {
return QWidget::event(ev);
}
void ModalOverlay::setKnownBestHeight(int count, const QDateTime& blockDate)
void ModalOverlay::setKnownBestHeight(int count, const QDateTime& blockDate, bool presync)
{
if (count > bestHeaderHeight) {
if (!presync && count > bestHeaderHeight) {
bestHeaderHeight = count;
bestHeaderDate = blockDate;
UpdateHeaderSyncLabel();
}
if (presync) {
UpdateHeaderPresyncLabel(count, blockDate);
}
}
void ModalOverlay::tipUpdate(int count, const QDateTime& blockDate, double nVerificationProgress)
@ -158,6 +161,11 @@ void ModalOverlay::UpdateHeaderSyncLabel() {
ui->numberOfBlocksLeft->setText(tr("Unknown. Syncing Headers (%1, %2%)…").arg(bestHeaderHeight).arg(QString::number(100.0 / (bestHeaderHeight + est_headers_left) * bestHeaderHeight, 'f', 1)));
}
void ModalOverlay::UpdateHeaderPresyncLabel(int height, const QDateTime& blockDate) {
int est_headers_left = blockDate.secsTo(QDateTime::currentDateTime()) / Params().GetConsensus().nPowTargetSpacing;
ui->numberOfBlocksLeft->setText(tr("Unknown. Pre-syncing Headers (%1, %2%)…").arg(height).arg(QString::number(100.0 / (height + est_headers_left) * height, 'f', 1)));
}
void ModalOverlay::toggleVisibility()
{
showHide(layerIsVisible, true);


@ -26,7 +26,7 @@ public:
~ModalOverlay();
void tipUpdate(int count, const QDateTime& blockDate, double nVerificationProgress);
void setKnownBestHeight(int count, const QDateTime& blockDate);
void setKnownBestHeight(int count, const QDateTime& blockDate, bool presync);
// will show or hide the modal layer
void showHide(bool hide = false, bool userRequested = false);
@ -52,6 +52,7 @@ private:
bool userClosed;
QPropertyAnimation m_animation;
void UpdateHeaderSyncLabel();
void UpdateHeaderPresyncLabel(int height, const QDateTime& blockDate);
};
#endif // BITCOIN_QT_MODALOVERLAY_H


@ -661,7 +661,7 @@ void RPCConsole::setClientModel(ClientModel *model, int bestblock_height, int64_
setNumConnections(model->getNumConnections());
connect(model, &ClientModel::numConnectionsChanged, this, &RPCConsole::setNumConnections);
setNumBlocks(bestblock_height, QDateTime::fromSecsSinceEpoch(bestblock_date), verification_progress, false);
setNumBlocks(bestblock_height, QDateTime::fromSecsSinceEpoch(bestblock_date), verification_progress, SyncType::BLOCK_SYNC);
connect(model, &ClientModel::numBlocksChanged, this, &RPCConsole::setNumBlocks);
updateNetworkState();
@ -973,9 +973,9 @@ void RPCConsole::setNetworkActive(bool networkActive)
updateNetworkState();
}
void RPCConsole::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers)
void RPCConsole::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype)
{
if (!headers) {
if (synctype == SyncType::BLOCK_SYNC) {
ui->numberOfBlocks->setText(QString::number(count));
ui->lastBlockTime->setText(blockDate.toString());
}


@ -9,6 +9,7 @@
#include <config/bitcoin-config.h>
#endif
#include <qt/clientmodel.h>
#include <qt/guiutil.h>
#include <qt/peertablemodel.h>
@ -19,7 +20,6 @@
#include <QThread>
#include <QWidget>
class ClientModel;
class PlatformStyle;
class RPCExecutor;
class RPCTimerInterface;
@ -121,7 +121,7 @@ public Q_SLOTS:
/** Set network state shown in the UI */
void setNetworkActive(bool networkActive);
/** Set number of blocks and last block date shown in the UI */
void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers);
void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype);
/** Set size (number of transactions and memory usage) of the mempool in the UI */
void setMempoolSize(long numberOfTxs, size_t dynUsage);
/** Go forward or back in history */

View file

@ -839,7 +839,7 @@ void SendCoinsDialog::updateCoinControlState()
m_coin_control->fAllowWatchOnly = model->wallet().privateKeysDisabled() && !model->wallet().hasExternalSigner();
}
void SendCoinsDialog::updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers, SynchronizationState sync_state) {
void SendCoinsDialog::updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state) {
if (sync_state == SynchronizationState::POST_INIT) {
updateSmartFeeLabel();
}


@ -5,6 +5,7 @@
#ifndef BITCOIN_QT_SENDCOINSDIALOG_H
#define BITCOIN_QT_SENDCOINSDIALOG_H
#include <qt/clientmodel.h>
#include <qt/walletmodel.h>
#include <QDialog>
@ -12,7 +13,6 @@
#include <QString>
#include <QTimer>
class ClientModel;
class PlatformStyle;
class SendCoinsEntry;
class SendCoinsRecipient;
@ -111,7 +111,7 @@ private Q_SLOTS:
void coinControlClipboardLowOutput();
void coinControlClipboardChange();
void updateFeeSectionControls();
void updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers, SynchronizationState sync_state);
void updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state);
void updateSmartFeeLabel();
Q_SIGNALS:


@ -132,7 +132,7 @@ static bool GenerateBlock(ChainstateManager& chainman, CBlock& block, uint64_t&
}
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(block);
if (!chainman.ProcessNewBlock(shared_pblock, true, nullptr)) {
if (!chainman.ProcessNewBlock(shared_pblock, /*force_processing=*/true, /*min_pow_checked=*/true, nullptr)) {
throw JSONRPCError(RPC_INTERNAL_ERROR, "ProcessNewBlock, block not accepted");
}
@ -981,7 +981,7 @@ static RPCHelpMan submitblock()
bool new_block;
auto sc = std::make_shared<submitblock_StateCatcher>(block.GetHash());
RegisterSharedValidationInterface(sc);
bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*new_block=*/&new_block);
bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*min_pow_checked=*/true, /*new_block=*/&new_block);
UnregisterSharedValidationInterface(sc);
if (!new_block && accepted) {
return "duplicate";
@ -1023,7 +1023,7 @@ static RPCHelpMan submitheader()
}
BlockValidationState state;
chainman.ProcessNewBlockHeaders({h}, state);
chainman.ProcessNewBlockHeaders({h}, /*min_pow_checked=*/true, state);
if (state.IsValid()) return UniValue::VNULL;
if (state.IsError()) {
throw JSONRPCError(RPC_VERIFY_ERROR, state.ToString());


@ -132,6 +132,7 @@ static RPCHelpMan getpeerinfo()
{RPCResult::Type::BOOL, "bip152_hb_to", "Whether we selected peer as (compact blocks) high-bandwidth peer"},
{RPCResult::Type::BOOL, "bip152_hb_from", "Whether peer selected us as (compact blocks) high-bandwidth peer"},
{RPCResult::Type::NUM, "startingheight", /*optional=*/true, "The starting height (block) of the peer"},
{RPCResult::Type::NUM, "presynced_headers", /*optional=*/true, "The current height of header pre-synchronization with this peer, or -1 if no low-work sync is in progress"},
{RPCResult::Type::NUM, "synced_headers", /*optional=*/true, "The last header we have in common with this peer"},
{RPCResult::Type::NUM, "synced_blocks", /*optional=*/true, "The last block we have in common with this peer"},
{RPCResult::Type::ARR, "inflight", /*optional=*/true, "",
@ -226,6 +227,7 @@ static RPCHelpMan getpeerinfo()
obj.pushKV("bip152_hb_from", stats.m_bip152_highbandwidth_from);
if (fStateStats) {
obj.pushKV("startingheight", statestats.m_starting_height);
obj.pushKV("presynced_headers", statestats.presync_height);
obj.pushKV("synced_headers", statestats.nSyncHeight);
obj.pushKV("synced_blocks", statestats.nCommonHeight);
UniValue heights(UniValue::VARR);


@ -101,7 +101,7 @@ bool BuildChainTestingSetup::BuildChain(const CBlockIndex* pindex,
CBlockHeader header = block->GetBlockHeader();
BlockValidationState state;
if (!Assert(m_node.chainman)->ProcessNewBlockHeaders({header}, state, &pindex)) {
if (!Assert(m_node.chainman)->ProcessNewBlockHeaders({header}, true, state, &pindex)) {
return false;
}
}
@ -178,7 +178,7 @@ BOOST_FIXTURE_TEST_CASE(blockfilter_index_initial_sync, BuildChainTestingSetup)
uint256 chainA_last_header = last_header;
for (size_t i = 0; i < 2; i++) {
const auto& block = chainA[i];
BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, nullptr));
BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, true, nullptr));
}
for (size_t i = 0; i < 2; i++) {
const auto& block = chainA[i];
@ -196,7 +196,7 @@ BOOST_FIXTURE_TEST_CASE(blockfilter_index_initial_sync, BuildChainTestingSetup)
uint256 chainB_last_header = last_header;
for (size_t i = 0; i < 3; i++) {
const auto& block = chainB[i];
BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, nullptr));
BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, true, nullptr));
}
for (size_t i = 0; i < 3; i++) {
const auto& block = chainB[i];
@ -227,7 +227,7 @@ BOOST_FIXTURE_TEST_CASE(blockfilter_index_initial_sync, BuildChainTestingSetup)
// Reorg back to chain A.
for (size_t i = 2; i < 4; i++) {
const auto& block = chainA[i];
BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, nullptr));
BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, true, nullptr));
}
// Check that chain A and B blocks can be retrieved.


@ -102,7 +102,7 @@ BOOST_FIXTURE_TEST_CASE(coinstatsindex_unclean_shutdown, TestChain100Setup)
LOCK(cs_main);
BlockValidationState state;
BOOST_CHECK(CheckBlock(block, state, params.GetConsensus()));
BOOST_CHECK(chainstate.AcceptBlock(new_block, state, &new_block_index, true, nullptr, nullptr));
BOOST_CHECK(chainstate.AcceptBlock(new_block, state, &new_block_index, true, nullptr, nullptr, true));
CCoinsViewCache view(&chainstate.CoinsTip());
BOOST_CHECK(chainstate.ConnectBlock(block, state, new_block_index, view));
}

src/test/fuzz/bitdeque.cpp (new file, 542 lines)

@ -0,0 +1,542 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <util/bitdeque.h>
#include <random.h>
#include <test/fuzz/FuzzedDataProvider.h>
#include <test/fuzz/util.h>
#include <deque>
#include <vector>
namespace {
constexpr int LEN_BITS = 16;
constexpr int RANDDATA_BITS = 20;
using bitdeque_type = bitdeque<128>;
//! Deterministic random vector of bools, for begin/end insertions to draw from.
std::vector<bool> RANDDATA;
void InitRandData()
{
FastRandomContext ctx(true);
RANDDATA.clear();
for (size_t i = 0; i < (1U << RANDDATA_BITS) + (1U << LEN_BITS); ++i) {
RANDDATA.push_back(ctx.randbool());
}
}
} // namespace
FUZZ_TARGET_INIT(bitdeque, InitRandData)
{
FuzzedDataProvider provider(buffer.data(), buffer.size());
FastRandomContext ctx(true);
size_t maxlen = (1U << provider.ConsumeIntegralInRange<size_t>(0, LEN_BITS)) - 1;
size_t limitlen = 4 * maxlen;
std::deque<bool> deq;
bitdeque_type bitdeq;
const auto& cdeq = deq;
const auto& cbitdeq = bitdeq;
size_t initlen = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
while (initlen) {
bool val = ctx.randbool();
deq.push_back(val);
bitdeq.push_back(val);
--initlen;
}
while (provider.remaining_bytes()) {
{
assert(deq.size() == bitdeq.size());
auto it = deq.begin();
auto bitit = bitdeq.begin();
auto itend = deq.end();
while (it != itend) {
assert(*it == *bitit);
++it;
++bitit;
}
}
CallOneOf(provider,
[&] {
// constructor()
deq = std::deque<bool>{};
bitdeq = bitdeque_type{};
},
[&] {
// clear()
deq.clear();
bitdeq.clear();
},
[&] {
// resize()
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
deq.resize(count);
bitdeq.resize(count);
},
[&] {
// assign(count, val)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
bool val = ctx.randbool();
deq.assign(count, val);
bitdeq.assign(count, val);
},
[&] {
// constructor(count, val)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
bool val = ctx.randbool();
deq = std::deque<bool>(count, val);
bitdeq = bitdeque_type(count, val);
},
[&] {
// constructor(count)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
deq = std::deque<bool>(count);
bitdeq = bitdeque_type(count);
},
[&] {
// construct(begin, end)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
deq = std::deque<bool>(rand_begin, rand_end);
bitdeq = bitdeque_type(rand_begin, rand_end);
},
[&] {
// assign(begin, end)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
deq.assign(rand_begin, rand_end);
bitdeq.assign(rand_begin, rand_end);
},
[&] {
// construct(initializer_list)
std::initializer_list<bool> ilist{ctx.randbool(), ctx.randbool(), ctx.randbool(), ctx.randbool(), ctx.randbool()};
deq = std::deque<bool>(ilist);
bitdeq = bitdeque_type(ilist);
},
[&] {
// assign(initializer_list)
std::initializer_list<bool> ilist{ctx.randbool(), ctx.randbool(), ctx.randbool()};
deq.assign(ilist);
bitdeq.assign(ilist);
},
[&] {
// operator=(const&)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
bool val = ctx.randbool();
const std::deque<bool> deq2(count, val);
deq = deq2;
const bitdeque_type bitdeq2(count, val);
bitdeq = bitdeq2;
},
[&] {
// operator=(&&)
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
bool val = ctx.randbool();
std::deque<bool> deq2(count, val);
deq = std::move(deq2);
bitdeque_type bitdeq2(count, val);
bitdeq = std::move(bitdeq2);
},
[&] {
// deque swap
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
std::deque<bool> deq2(rand_begin, rand_end);
bitdeque_type bitdeq2(rand_begin, rand_end);
using std::swap;
assert(deq.size() == bitdeq.size());
assert(deq2.size() == bitdeq2.size());
swap(deq, deq2);
swap(bitdeq, bitdeq2);
assert(deq.size() == bitdeq.size());
assert(deq2.size() == bitdeq2.size());
},
[&] {
// deque.swap
auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
std::deque<bool> deq2(rand_begin, rand_end);
bitdeque_type bitdeq2(rand_begin, rand_end);
assert(deq.size() == bitdeq.size());
assert(deq2.size() == bitdeq2.size());
deq.swap(deq2);
bitdeq.swap(bitdeq2);
assert(deq.size() == bitdeq.size());
assert(deq2.size() == bitdeq2.size());
},
[&] {
// operator=(initializer_list)
std::initializer_list<bool> ilist{ctx.randbool(), ctx.randbool(), ctx.randbool()};
deq = ilist;
bitdeq = ilist;
},
[&] {
// iterator arithmetic
auto pos1 = provider.ConsumeIntegralInRange<long>(0, cdeq.size());
auto pos2 = provider.ConsumeIntegralInRange<long>(0, cdeq.size());
auto it = deq.begin() + pos1;
auto bitit = bitdeq.begin() + pos1;
if ((size_t)pos1 != cdeq.size()) assert(*it == *bitit);
assert(it - deq.begin() == pos1);
assert(bitit - bitdeq.begin() == pos1);
if (provider.ConsumeBool()) {
it += pos2 - pos1;
bitit += pos2 - pos1;
} else {
it -= pos1 - pos2;
bitit -= pos1 - pos2;
}
if ((size_t)pos2 != cdeq.size()) assert(*it == *bitit);
assert(deq.end() - it == bitdeq.end() - bitit);
if (provider.ConsumeBool()) {
if ((size_t)pos2 != cdeq.size()) {
++it;
++bitit;
}
} else {
if (pos2 != 0) {
--it;
--bitit;
}
}
assert(deq.end() - it == bitdeq.end() - bitit);
},
[&] {
// begin() and end()
assert(deq.end() - deq.begin() == bitdeq.end() - bitdeq.begin());
},
[&] {
// begin() and end() (const)
assert(cdeq.end() - cdeq.begin() == cbitdeq.end() - cbitdeq.begin());
},
[&] {
// rbegin() and rend()
assert(deq.rend() - deq.rbegin() == bitdeq.rend() - bitdeq.rbegin());
},
[&] {
// rbegin() and rend() (const)
assert(cdeq.rend() - cdeq.rbegin() == cbitdeq.rend() - cbitdeq.rbegin());
},
[&] {
// cbegin() and cend()
assert(cdeq.cend() - cdeq.cbegin() == cbitdeq.cend() - cbitdeq.cbegin());
},
[&] {
// crbegin() and crend()
assert(cdeq.crend() - cdeq.crbegin() == cbitdeq.crend() - cbitdeq.crbegin());
},
[&] {
// size() and max_size()
assert(cdeq.size() == cbitdeq.size());
assert(cbitdeq.size() <= cbitdeq.max_size());
},
[&] {
// empty
assert(cdeq.empty() == cbitdeq.empty());
},
[&] {
// at (in range) and flip
if (!cdeq.empty()) {
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
auto& ref = deq.at(pos);
auto bitref = bitdeq.at(pos);
assert(ref == bitref);
if (ctx.randbool()) {
ref = !ref;
bitref.flip();
}
}
},
[&] {
// at (maybe out of range) and bit assign
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() + maxlen);
bool newval = ctx.randbool();
bool throw_deq{false}, throw_bitdeq{false};
bool val_deq{false}, val_bitdeq{false};
try {
auto& ref = deq.at(pos);
val_deq = ref;
ref = newval;
} catch (const std::out_of_range&) {
throw_deq = true;
}
try {
auto ref = bitdeq.at(pos);
val_bitdeq = ref;
ref = newval;
} catch (const std::out_of_range&) {
throw_bitdeq = true;
}
assert(throw_deq == throw_bitdeq);
assert(throw_bitdeq == (pos >= cdeq.size()));
if (!throw_deq) assert(val_deq == val_bitdeq);
},
[&] {
// at (maybe out of range) (const)
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() + maxlen);
bool throw_deq{false}, throw_bitdeq{false};
bool val_deq{false}, val_bitdeq{false};
try {
auto& ref = cdeq.at(pos);
val_deq = ref;
} catch (const std::out_of_range&) {
throw_deq = true;
}
try {
auto ref = cbitdeq.at(pos);
val_bitdeq = ref;
} catch (const std::out_of_range&) {
throw_bitdeq = true;
}
assert(throw_deq == throw_bitdeq);
assert(throw_bitdeq == (pos >= cdeq.size()));
if (!throw_deq) assert(val_deq == val_bitdeq);
},
[&] {
// operator[]
if (!cdeq.empty()) {
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
assert(deq[pos] == bitdeq[pos]);
if (ctx.randbool()) {
deq[pos] = !deq[pos];
bitdeq[pos].flip();
}
}
},
[&] {
// operator[] const
if (!cdeq.empty()) {
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
assert(cdeq[pos] == cbitdeq[pos]);
}
},
[&] {
// front()
if (!cdeq.empty()) {
auto& ref = deq.front();
auto bitref = bitdeq.front();
assert(ref == bitref);
if (ctx.randbool()) {
ref = !ref;
bitref = !bitref;
}
}
},
[&] {
// front() const
if (!cdeq.empty()) {
auto& ref = cdeq.front();
auto bitref = cbitdeq.front();
assert(ref == bitref);
}
},
[&] {
// back() and swap(bool, ref)
if (!cdeq.empty()) {
auto& ref = deq.back();
auto bitref = bitdeq.back();
assert(ref == bitref);
if (ctx.randbool()) {
ref = !ref;
bitref.flip();
}
}
},
[&] {
// back() const
if (!cdeq.empty()) {
auto& ref = cdeq.back();
auto bitref = cbitdeq.back();
assert(ref == bitref);
}
},
[&] {
// push_back()
if (cdeq.size() < limitlen) {
bool val = ctx.randbool();
if (cdeq.empty()) {
deq.push_back(val);
bitdeq.push_back(val);
} else {
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
auto& ref = deq[pos];
auto bitref = bitdeq[pos];
assert(ref == bitref);
deq.push_back(val);
bitdeq.push_back(val);
assert(ref == bitref); // references are not invalidated
}
}
},
[&] {
// push_front()
if (cdeq.size() < limitlen) {
bool val = ctx.randbool();
if (cdeq.empty()) {
deq.push_front(val);
bitdeq.push_front(val);
} else {
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
auto& ref = deq[pos];
auto bitref = bitdeq[pos];
assert(ref == bitref);
deq.push_front(val);
bitdeq.push_front(val);
assert(ref == bitref); // references are not invalidated
}
}
},
[&] {
// pop_back()
if (!cdeq.empty()) {
if (cdeq.size() == 1) {
deq.pop_back();
bitdeq.pop_back();
} else {
size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 2);
auto& ref = deq[pos];
auto bitref = bitdeq[pos];
assert(ref == bitref);
deq.pop_back();
bitdeq.pop_back();
assert(ref == bitref); // references to other elements are not invalidated
}
}
},
[&] {
// pop_front()
if (!cdeq.empty()) {
if (cdeq.size() == 1) {
deq.pop_front();
bitdeq.pop_front();
} else {
size_t pos = provider.ConsumeIntegralInRange<size_t>(1, cdeq.size() - 1);
auto& ref = deq[pos];
auto bitref = bitdeq[pos];
assert(ref == bitref);
deq.pop_front();
bitdeq.pop_front();
assert(ref == bitref); // references to other elements are not invalidated
}
}
},
[&] {
// erase (in middle, single)
if (!cdeq.empty()) {
size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
size_t after = cdeq.size() - 1 - before;
auto it = deq.erase(cdeq.begin() + before);
auto bitit = bitdeq.erase(cbitdeq.begin() + before);
assert(it == cdeq.begin() + before && it == cdeq.end() - after);
assert(bitit == cbitdeq.begin() + before && bitit == cbitdeq.end() - after);
}
},
[&] {
// erase (at front, range)
size_t count = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
auto it = deq.erase(cdeq.begin(), cdeq.begin() + count);
auto bitit = bitdeq.erase(cbitdeq.begin(), cbitdeq.begin() + count);
assert(it == deq.begin());
assert(bitit == bitdeq.begin());
},
[&] {
// erase (at back, range)
size_t count = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
auto it = deq.erase(cdeq.end() - count, cdeq.end());
auto bitit = bitdeq.erase(cbitdeq.end() - count, cbitdeq.end());
assert(it == deq.end());
assert(bitit == bitdeq.end());
},
[&] {
// erase (in middle, range)
size_t count = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - count);
size_t after = cdeq.size() - count - before;
auto it = deq.erase(cdeq.begin() + before, cdeq.end() - after);
auto bitit = bitdeq.erase(cbitdeq.begin() + before, cbitdeq.end() - after);
assert(it == cdeq.begin() + before && it == cdeq.end() - after);
assert(bitit == cbitdeq.begin() + before && bitit == cbitdeq.end() - after);
},
[&] {
// insert/emplace (in middle, single)
if (cdeq.size() < limitlen) {
size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
bool val = ctx.randbool();
bool do_emplace = provider.ConsumeBool();
auto it = deq.insert(cdeq.begin() + before, val);
auto bitit = do_emplace ? bitdeq.emplace(cbitdeq.begin() + before, val)
: bitdeq.insert(cbitdeq.begin() + before, val);
assert(it == deq.begin() + before);
assert(bitit == bitdeq.begin() + before);
}
},
[&] {
// insert (at front, begin/end)
if (cdeq.size() < limitlen) {
size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
auto it = deq.insert(cdeq.begin(), rand_begin, rand_end);
auto bitit = bitdeq.insert(cbitdeq.begin(), rand_begin, rand_end);
assert(it == cdeq.begin());
assert(bitit == cbitdeq.begin());
}
},
[&] {
// insert (at back, begin/end)
if (cdeq.size() < limitlen) {
size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
auto it = deq.insert(cdeq.end(), rand_begin, rand_end);
auto bitit = bitdeq.insert(cbitdeq.end(), rand_begin, rand_end);
assert(it == cdeq.end() - count);
assert(bitit == cbitdeq.end() - count);
}
},
[&] {
// insert (in middle, range)
if (cdeq.size() < limitlen) {
size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
bool val = ctx.randbool();
auto it = deq.insert(cdeq.begin() + before, count, val);
auto bitit = bitdeq.insert(cbitdeq.begin() + before, count, val);
assert(it == deq.begin() + before);
assert(bitit == bitdeq.begin() + before);
}
},
[&] {
// insert (in middle, begin/end)
if (cdeq.size() < limitlen) {
size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
auto rand_end = rand_begin + count;
auto it = deq.insert(cdeq.begin() + before, rand_begin, rand_end);
auto bitit = bitdeq.insert(cbitdeq.begin() + before, rand_begin, rand_end);
assert(it == deq.begin() + before);
assert(bitit == bitdeq.begin() + before);
}
}
);
}
}


@ -83,3 +83,40 @@ FUZZ_TARGET_INIT(pow, initialize_pow)
}
}
}
FUZZ_TARGET_INIT(pow_transition, initialize_pow)
{
FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
const Consensus::Params& consensus_params{Params().GetConsensus()};
std::vector<std::unique_ptr<CBlockIndex>> blocks;
const uint32_t old_time{fuzzed_data_provider.ConsumeIntegral<uint32_t>()};
const uint32_t new_time{fuzzed_data_provider.ConsumeIntegral<uint32_t>()};
const int32_t version{fuzzed_data_provider.ConsumeIntegral<int32_t>()};
uint32_t nbits{fuzzed_data_provider.ConsumeIntegral<uint32_t>()};
const arith_uint256 pow_limit = UintToArith256(consensus_params.powLimit);
arith_uint256 old_target;
old_target.SetCompact(nbits);
if (old_target > pow_limit) {
nbits = pow_limit.GetCompact();
}
// Create one difficulty adjustment period worth of headers
for (int height = 0; height < consensus_params.DifficultyAdjustmentInterval(); ++height) {
CBlockHeader header;
header.nVersion = version;
header.nTime = old_time;
header.nBits = nbits;
if (height == consensus_params.DifficultyAdjustmentInterval() - 1) {
header.nTime = new_time;
}
auto current_block{std::make_unique<CBlockIndex>(header)};
current_block->pprev = blocks.empty() ? nullptr : blocks.back().get();
current_block->nHeight = height;
blocks.emplace_back(std::move(current_block));
}
auto last_block{blocks.back().get()};
unsigned int new_nbits{GetNextWorkRequired(last_block, nullptr, consensus_params)};
Assert(PermittedDifficultyTransition(consensus_params, last_block->nHeight + 1, last_block->nBits, new_nbits));
}


@ -58,7 +58,7 @@ FUZZ_TARGET_INIT(utxo_snapshot, initialize_chain)
if (fuzzed_data_provider.ConsumeBool()) {
for (const auto& block : *g_chain) {
BlockValidationState dummy;
bool processed{chainman.ProcessNewBlockHeaders({*block}, dummy)};
bool processed{chainman.ProcessNewBlockHeaders({*block}, true, dummy)};
Assert(processed);
const auto* index{WITH_LOCK(::cs_main, return chainman.m_blockman.LookupBlockIndex(block->GetHash()))};
Assert(index);


@ -0,0 +1,146 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <chain.h>
#include <chainparams.h>
#include <consensus/params.h>
#include <headerssync.h>
#include <pow.h>
#include <test/util/setup_common.h>
#include <validation.h>
#include <vector>
#include <boost/test/unit_test.hpp>
struct HeadersGeneratorSetup : public RegTestingSetup {
/** Search for a nonce to meet (regtest) proof of work */
void FindProofOfWork(CBlockHeader& starting_header);
/**
* Generate headers in a chain that build off a given starting hash, using
* the given nVersion, advancing time by 1 second from the starting
* prev_time, and with a fixed merkle root hash.
*/
void GenerateHeaders(std::vector<CBlockHeader>& headers, size_t count,
const uint256& starting_hash, const int nVersion, int prev_time,
const uint256& merkle_root, const uint32_t nBits);
};
void HeadersGeneratorSetup::FindProofOfWork(CBlockHeader& starting_header)
{
while (!CheckProofOfWork(starting_header.GetHash(), starting_header.nBits, Params().GetConsensus())) {
++(starting_header.nNonce);
}
}
void HeadersGeneratorSetup::GenerateHeaders(std::vector<CBlockHeader>& headers,
size_t count, const uint256& starting_hash, const int nVersion, int prev_time,
const uint256& merkle_root, const uint32_t nBits)
{
uint256 prev_hash = starting_hash;
while (headers.size() < count) {
headers.push_back(CBlockHeader());
CBlockHeader& next_header = headers.back();
next_header.nVersion = nVersion;
next_header.hashPrevBlock = prev_hash;
next_header.hashMerkleRoot = merkle_root;
next_header.nTime = prev_time+1;
next_header.nBits = nBits;
FindProofOfWork(next_header);
prev_hash = next_header.GetHash();
prev_time = next_header.nTime;
}
}
BOOST_FIXTURE_TEST_SUITE(headers_sync_chainwork_tests, HeadersGeneratorSetup)
// In this test, we construct two sets of headers from genesis, one with
// sufficient proof of work and one without.
// 1. We deliver the first set of headers and verify that the headers sync state
// updates to the REDOWNLOAD phase successfully.
// 2. Then we deliver the second set of headers and verify that they fail
// processing (presumably due to commitments not matching).
// 3. Finally, we verify that repeating with the first set of headers in both
// phases is successful.
BOOST_AUTO_TEST_CASE(headers_sync_state)
{
std::vector<CBlockHeader> first_chain;
std::vector<CBlockHeader> second_chain;
std::unique_ptr<HeadersSyncState> hss;
const int target_blocks = 15000;
arith_uint256 chain_work = target_blocks*2;
// Generate headers for two different chains (using differing merkle roots
// to ensure the headers are different).
GenerateHeaders(first_chain, target_blocks-1, Params().GenesisBlock().GetHash(),
Params().GenesisBlock().nVersion, Params().GenesisBlock().nTime,
ArithToUint256(0), Params().GenesisBlock().nBits);
GenerateHeaders(second_chain, target_blocks-2, Params().GenesisBlock().GetHash(),
Params().GenesisBlock().nVersion, Params().GenesisBlock().nTime,
ArithToUint256(1), Params().GenesisBlock().nBits);
const CBlockIndex* chain_start = WITH_LOCK(::cs_main, return m_node.chainman->m_blockman.LookupBlockIndex(Params().GenesisBlock().GetHash()));
std::vector<CBlockHeader> headers_batch;
// Feed the first chain to HeadersSyncState, by delivering 1 header
// initially and then the rest.
headers_batch.insert(headers_batch.end(), std::next(first_chain.begin()), first_chain.end());
hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
(void)hss->ProcessNextHeaders({first_chain.front()}, true);
// Pretend the first headers message is "full", so we don't abort.
auto result = hss->ProcessNextHeaders(headers_batch, true);
// This chain should look valid, and we should have met the proof-of-work
// requirement.
BOOST_CHECK(result.success);
BOOST_CHECK(result.request_more);
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::REDOWNLOAD);
// Try to sneakily feed back the second chain.
result = hss->ProcessNextHeaders(second_chain, true);
BOOST_CHECK(!result.success); // foiled!
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::FINAL);
// Now try again, this time feeding the first chain twice.
hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
(void)hss->ProcessNextHeaders(first_chain, true);
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::REDOWNLOAD);
result = hss->ProcessNextHeaders(first_chain, true);
BOOST_CHECK(result.success);
BOOST_CHECK(!result.request_more);
// All headers should be ready for acceptance:
BOOST_CHECK(result.pow_validated_headers.size() == first_chain.size());
// Nothing left for the sync logic to do:
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::FINAL);
// Finally, verify that just trying to process the second chain would not
// succeed (too little work)
hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::PRESYNC);
// Pretend just the first message is "full", so we don't abort.
(void)hss->ProcessNextHeaders({second_chain.front()}, true);
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::PRESYNC);
headers_batch.clear();
headers_batch.insert(headers_batch.end(), std::next(second_chain.begin(), 1), second_chain.end());
// Tell the sync logic that the headers message was not full, implying no
// more headers can be requested. For a low-work chain, this should cause
// the sync to end with no headers for acceptance.
result = hss->ProcessNextHeaders(headers_batch, false);
BOOST_CHECK(hss->GetState() == HeadersSyncState::State::FINAL);
BOOST_CHECK(result.pow_validated_headers.empty());
BOOST_CHECK(!result.request_more);
// Nevertheless, no validation errors should have been detected with the
// chain:
BOOST_CHECK(result.success);
}
BOOST_AUTO_TEST_SUITE_END()


@ -588,7 +588,7 @@ BOOST_AUTO_TEST_CASE(CreateNewBlock_validity)
pblock->nNonce = bi.nonce;
}
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(*pblock);
BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, nullptr));
BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, true, nullptr));
pblock->hashPrevBlock = pblock->GetHash();
}


@ -20,7 +20,14 @@ BOOST_AUTO_TEST_CASE(get_next_work)
pindexLast.nHeight = 32255;
pindexLast.nTime = 1262152739; // Block #32255
pindexLast.nBits = 0x1d00ffff;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1d00d86aU);
// Here (and below): expected_nbits is calculated in
// CalculateNextWorkRequired(); redoing the calculation here would be just
// reimplementing the same code that is written in pow.cpp. Rather than
// copy that code, we just hardcode the expected result.
unsigned int expected_nbits = 0x1d00d86aU;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
}
/* Test the constraint on the upper bound for next work */
@ -32,7 +39,9 @@ BOOST_AUTO_TEST_CASE(get_next_work_pow_limit)
pindexLast.nHeight = 2015;
pindexLast.nTime = 1233061996; // Block #2015
pindexLast.nBits = 0x1d00ffff;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1d00ffffU);
unsigned int expected_nbits = 0x1d00ffffU;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
}
/* Test the constraint on the lower bound for actual time taken */
@ -44,7 +53,12 @@ BOOST_AUTO_TEST_CASE(get_next_work_lower_limit_actual)
pindexLast.nHeight = 68543;
pindexLast.nTime = 1279297671; // Block #68543
pindexLast.nBits = 0x1c05a3f4;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1c0168fdU);
unsigned int expected_nbits = 0x1c0168fdU;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
// Test that reducing nbits further would not be a PermittedDifficultyTransition.
unsigned int invalid_nbits = expected_nbits-1;
BOOST_CHECK(!PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, invalid_nbits));
}
/* Test the constraint on the upper bound for actual time taken */
@ -56,7 +70,12 @@ BOOST_AUTO_TEST_CASE(get_next_work_upper_limit_actual)
pindexLast.nHeight = 46367;
pindexLast.nTime = 1269211443; // Block #46367
pindexLast.nBits = 0x1c387f6f;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1d00e1fdU);
unsigned int expected_nbits = 0x1d00e1fdU;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
// Test that increasing nbits further would not be a PermittedDifficultyTransition.
unsigned int invalid_nbits = expected_nbits+1;
BOOST_CHECK(!PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, invalid_nbits));
}
BOOST_AUTO_TEST_CASE(CheckProofOfWork_test_negative_target)


@ -78,7 +78,7 @@ BOOST_AUTO_TEST_CASE(getlocator_test)
for (int n=0; n<100; n++) {
int r = InsecureRandRange(150000);
CBlockIndex* tip = (r < 100000) ? &vBlocksMain[r] : &vBlocksSide[r - 100000];
CBlockLocator locator = chain.GetLocator(tip);
CBlockLocator locator = GetLocator(tip);
// The first result must be the block itself, the last one must be genesis.
BOOST_CHECK(locator.vHave.front() == tip->GetBlockHash());


@ -68,7 +68,7 @@ CTxIn MineBlock(const NodeContext& node, const CScript& coinbase_scriptPubKey)
assert(block->nNonce);
}
bool processed{Assert(node.chainman)->ProcessNewBlock(block, true, nullptr)};
bool processed{Assert(node.chainman)->ProcessNewBlock(block, true, true, nullptr)};
assert(processed);
return CTxIn{block->vtx[0]->GetHash(), 0};


@ -321,7 +321,7 @@ CBlock TestChain100Setup::CreateAndProcessBlock(
const CBlock block = this->CreateBlock(txns, scriptPubKey, *chainstate);
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(block);
Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, nullptr);
Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, true, nullptr);
return block;
}


@ -23,6 +23,7 @@
#include <util/string.h>
#include <util/time.h>
#include <util/vector.h>
#include <util/bitdeque.h>
#include <array>
#include <fstream>


@ -100,7 +100,7 @@ std::shared_ptr<CBlock> MinerTestingSetup::FinalizeBlock(std::shared_ptr<CBlock>
// submit block header, so that miner can get the block height from the
// global state and the node has the topology of the chain
BlockValidationState ignored;
BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlockHeaders({pblock->GetBlockHeader()}, ignored));
BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlockHeaders({pblock->GetBlockHeader()}, true, ignored));
return pblock;
}
@ -157,7 +157,7 @@ BOOST_AUTO_TEST_CASE(processnewblock_signals_ordering)
bool ignored;
// Connect the genesis block and drain any outstanding events
BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(std::make_shared<CBlock>(Params().GenesisBlock()), true, &ignored));
BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(std::make_shared<CBlock>(Params().GenesisBlock()), true, true, &ignored));
SyncWithValidationInterfaceQueue();
// subscribe to events (this subscriber will validate event ordering)
@ -179,13 +179,13 @@ BOOST_AUTO_TEST_CASE(processnewblock_signals_ordering)
FastRandomContext insecure;
for (int i = 0; i < 1000; i++) {
auto block = blocks[insecure.randrange(blocks.size() - 1)];
Assert(m_node.chainman)->ProcessNewBlock(block, true, &ignored);
Assert(m_node.chainman)->ProcessNewBlock(block, true, true, &ignored);
}
// to make sure that eventually we process the full chain - do it here
for (const auto& block : blocks) {
if (block->vtx.size() == 1) {
bool processed = Assert(m_node.chainman)->ProcessNewBlock(block, true, &ignored);
bool processed = Assert(m_node.chainman)->ProcessNewBlock(block, true, true, &ignored);
assert(processed);
}
}
@ -224,7 +224,7 @@ BOOST_AUTO_TEST_CASE(mempool_locks_reorg)
{
bool ignored;
auto ProcessBlock = [&](std::shared_ptr<const CBlock> block) -> bool {
return Assert(m_node.chainman)->ProcessNewBlock(block, /*force_processing=*/true, /*new_block=*/&ignored);
return Assert(m_node.chainman)->ProcessNewBlock(block, /*force_processing=*/true, /*min_pow_checked=*/true, /*new_block=*/&ignored);
};
// Process all mined blocks


@ -132,7 +132,7 @@ BOOST_FIXTURE_TEST_CASE(chainstate_update_tip, TestChain100Setup)
bool checked = CheckBlock(*pblock, state, chainparams.GetConsensus());
BOOST_CHECK(checked);
bool accepted = background_cs.AcceptBlock(
pblock, state, &pindex, true, nullptr, &newblock);
pblock, state, &pindex, true, nullptr, &newblock, true);
BOOST_CHECK(accepted);
}
// UpdateTip is called here

src/util/bitdeque.h Normal file

@ -0,0 +1,469 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_UTIL_BITDEQUE_H
#define BITCOIN_UTIL_BITDEQUE_H
#include <bitset>
#include <cstddef>
#include <deque>
#include <limits>
#include <stdexcept>
#include <tuple>
/** Class that mimics std::deque<bool>, but with std::vector<bool>'s bit packing.
*
* BlobSize selects the (minimum) number of bits that are allocated at once.
* Larger values reduce the asymptotic memory usage overhead, at the cost of
* needing larger up-front allocations. The default is 4096 bytes.
*/
template<int BlobSize = 4096 * 8>
class bitdeque
{
// Internal definitions
using word_type = std::bitset<BlobSize>;
using deque_type = std::deque<word_type>;
static_assert(BlobSize > 0);
static constexpr int BITS_PER_WORD = BlobSize;
// Forward and friend declarations of iterator types.
template<bool Const> class Iterator;
template<bool Const> friend class Iterator;
/** Iterator to a bitdeque element, const or not. */
template<bool Const>
class Iterator
{
using deque_iterator = std::conditional_t<Const, typename deque_type::const_iterator, typename deque_type::iterator>;
deque_iterator m_it;
int m_bitpos{0};
Iterator(const deque_iterator& it, int bitpos) : m_it(it), m_bitpos(bitpos) {}
friend class bitdeque;
public:
using iterator_category = std::random_access_iterator_tag;
using value_type = bool;
using pointer = void;
using const_pointer = void;
using reference = std::conditional_t<Const, bool, typename word_type::reference>;
using const_reference = bool;
using difference_type = std::ptrdiff_t;
/** Default constructor. */
Iterator() = default;
/** Default copy constructor. */
Iterator(const Iterator&) = default;
/** Conversion from non-const to const iterator. */
template<bool ConstArg = Const, typename = std::enable_if_t<Const && ConstArg>>
Iterator(const Iterator<false>& x) : m_it(x.m_it), m_bitpos(x.m_bitpos) {}
Iterator& operator+=(difference_type dist)
{
if (dist > 0) {
if (dist + m_bitpos >= BITS_PER_WORD) {
++m_it;
dist -= BITS_PER_WORD - m_bitpos;
m_bitpos = 0;
}
auto jump = dist / BITS_PER_WORD;
m_it += jump;
m_bitpos += dist - jump * BITS_PER_WORD;
} else if (dist < 0) {
dist = -dist;
if (dist > m_bitpos) {
--m_it;
dist -= m_bitpos + 1;
m_bitpos = BITS_PER_WORD - 1;
}
auto jump = dist / BITS_PER_WORD;
m_it -= jump;
m_bitpos -= dist - jump * BITS_PER_WORD;
}
return *this;
}
friend difference_type operator-(const Iterator& x, const Iterator& y)
{
return BITS_PER_WORD * (x.m_it - y.m_it) + x.m_bitpos - y.m_bitpos;
}
Iterator& operator=(const Iterator&) = default;
Iterator& operator-=(difference_type dist) { return operator+=(-dist); }
Iterator& operator++() { ++m_bitpos; if (m_bitpos == BITS_PER_WORD) { m_bitpos = 0; ++m_it; } return *this; }
Iterator operator++(int) { auto ret{*this}; operator++(); return ret; }
Iterator& operator--() { if (m_bitpos == 0) { m_bitpos = BITS_PER_WORD; --m_it; } --m_bitpos; return *this; }
Iterator operator--(int) { auto ret{*this}; operator--(); return ret; }
friend Iterator operator+(Iterator x, difference_type dist) { x += dist; return x; }
friend Iterator operator+(difference_type dist, Iterator x) { x += dist; return x; }
friend Iterator operator-(Iterator x, difference_type dist) { x -= dist; return x; }
friend bool operator<(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) < std::tie(y.m_it, y.m_bitpos); }
friend bool operator>(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) > std::tie(y.m_it, y.m_bitpos); }
friend bool operator<=(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) <= std::tie(y.m_it, y.m_bitpos); }
friend bool operator>=(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) >= std::tie(y.m_it, y.m_bitpos); }
friend bool operator==(const Iterator& x, const Iterator& y) { return x.m_it == y.m_it && x.m_bitpos == y.m_bitpos; }
friend bool operator!=(const Iterator& x, const Iterator& y) { return x.m_it != y.m_it || x.m_bitpos != y.m_bitpos; }
reference operator*() const { return (*m_it)[m_bitpos]; }
reference operator[](difference_type pos) const { return *(*this + pos); }
};
public:
using value_type = bool;
using size_type = std::size_t;
using difference_type = typename deque_type::difference_type;
using reference = typename word_type::reference;
using const_reference = bool;
using iterator = Iterator<false>;
using const_iterator = Iterator<true>;
using pointer = void;
using const_pointer = void;
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
private:
/** Deque of bitsets storing the actual bit data. */
deque_type m_deque;
/** Number of unused bits at the front of m_deque.front(). */
int m_pad_begin;
/** Number of unused bits at the back of m_deque.back(). */
int m_pad_end;
/** Shrink the container by n bits, removing from the end. */
void erase_back(size_type n)
{
if (n >= static_cast<size_type>(BITS_PER_WORD - m_pad_end)) {
n -= BITS_PER_WORD - m_pad_end;
m_pad_end = 0;
m_deque.erase(m_deque.end() - 1 - (n / BITS_PER_WORD), m_deque.end());
n %= BITS_PER_WORD;
}
if (n) {
auto& last = m_deque.back();
while (n) {
last.reset(BITS_PER_WORD - 1 - m_pad_end);
++m_pad_end;
--n;
}
}
}
/** Extend the container by n bits, adding at the end. */
void extend_back(size_type n)
{
if (n > static_cast<size_type>(m_pad_end)) {
n -= m_pad_end + 1;
m_pad_end = BITS_PER_WORD - 1;
m_deque.insert(m_deque.end(), 1 + (n / BITS_PER_WORD), {});
n %= BITS_PER_WORD;
}
m_pad_end -= n;
}
/** Shrink the container by n bits, removing from the beginning. */
void erase_front(size_type n)
{
if (n >= static_cast<size_type>(BITS_PER_WORD - m_pad_begin)) {
n -= BITS_PER_WORD - m_pad_begin;
m_pad_begin = 0;
m_deque.erase(m_deque.begin(), m_deque.begin() + 1 + (n / BITS_PER_WORD));
n %= BITS_PER_WORD;
}
if (n) {
auto& first = m_deque.front();
while (n) {
first.reset(m_pad_begin);
++m_pad_begin;
--n;
}
}
}
/** Extend the container by n bits, adding at the beginning. */
void extend_front(size_type n)
{
if (n > static_cast<size_type>(m_pad_begin)) {
n -= m_pad_begin + 1;
m_pad_begin = BITS_PER_WORD - 1;
m_deque.insert(m_deque.begin(), 1 + (n / BITS_PER_WORD), {});
n %= BITS_PER_WORD;
}
m_pad_begin -= n;
}
/** Insert a sequence of falses anywhere in the container. */
void insert_zeroes(size_type before, size_type count)
{
size_type after = size() - before;
if (before < after) {
extend_front(count);
std::move(begin() + count, begin() + count + before, begin());
} else {
extend_back(count);
std::move_backward(begin() + before, begin() + before + after, end());
}
}
public:
/** Construct an empty container. */
explicit bitdeque() : m_pad_begin{0}, m_pad_end{0} {}
/** Set the container equal to count times the value of val. */
void assign(size_type count, bool val)
{
m_deque.clear();
m_deque.resize((count + BITS_PER_WORD - 1) / BITS_PER_WORD);
m_pad_begin = 0;
m_pad_end = 0;
if (val) {
for (auto& elem : m_deque) elem.flip();
}
if (count % BITS_PER_WORD) {
erase_back(BITS_PER_WORD - (count % BITS_PER_WORD));
}
}
/** Construct a container containing count times the value of val. */
bitdeque(size_type count, bool val) { assign(count, val); }
/** Construct a container containing count false values. */
explicit bitdeque(size_t count) { assign(count, false); }
/** Copy constructor. */
bitdeque(const bitdeque&) = default;
/** Move constructor. */
bitdeque(bitdeque&&) noexcept = default;
/** Copy assignment operator. */
bitdeque& operator=(const bitdeque& other) = default;
/** Move assignment operator. */
bitdeque& operator=(bitdeque&& other) noexcept = default;
// Iterator functions.
iterator begin() noexcept { return {m_deque.begin(), m_pad_begin}; }
iterator end() noexcept { return iterator{m_deque.end(), 0} - m_pad_end; }
const_iterator begin() const noexcept { return const_iterator{m_deque.cbegin(), m_pad_begin}; }
const_iterator cbegin() const noexcept { return const_iterator{m_deque.cbegin(), m_pad_begin}; }
const_iterator end() const noexcept { return const_iterator{m_deque.cend(), 0} - m_pad_end; }
const_iterator cend() const noexcept { return const_iterator{m_deque.cend(), 0} - m_pad_end; }
reverse_iterator rbegin() noexcept { return reverse_iterator{end()}; }
reverse_iterator rend() noexcept { return reverse_iterator{begin()}; }
const_reverse_iterator rbegin() const noexcept { return const_reverse_iterator{cend()}; }
const_reverse_iterator crbegin() const noexcept { return const_reverse_iterator{cend()}; }
const_reverse_iterator rend() const noexcept { return const_reverse_iterator{cbegin()}; }
const_reverse_iterator crend() const noexcept { return const_reverse_iterator{cbegin()}; }
/** Count the number of bits in the container. */
size_type size() const noexcept { return m_deque.size() * BITS_PER_WORD - m_pad_begin - m_pad_end; }
/** Determine whether the container is empty. */
bool empty() const noexcept
{
return m_deque.size() == 0 || (m_deque.size() == 1 && (m_pad_begin + m_pad_end == BITS_PER_WORD));
}
/** Return the maximum size of the container. */
size_type max_size() const noexcept
{
if (m_deque.max_size() < std::numeric_limits<difference_type>::max() / BITS_PER_WORD) {
return m_deque.max_size() * BITS_PER_WORD;
} else {
return std::numeric_limits<difference_type>::max();
}
}
/** Set the container equal to the bits in [first,last). */
template<typename It>
void assign(It first, It last)
{
size_type count = std::distance(first, last);
assign(count, false);
auto it = begin();
while (first != last) {
*(it++) = *(first++);
}
}
/** Set the container equal to the bits in ilist. */
void assign(std::initializer_list<bool> ilist)
{
assign(ilist.size(), false);
auto it = begin();
auto init = ilist.begin();
while (init != ilist.end()) {
*(it++) = *(init++);
}
}
/** Set the container equal to the bits in ilist. */
bitdeque& operator=(std::initializer_list<bool> ilist)
{
assign(ilist);
return *this;
}
/** Construct a container containing the bits in [first,last). */
template<typename It>
bitdeque(It first, It last) { assign(first, last); }
/** Construct a container containing the bits in ilist. */
bitdeque(std::initializer_list<bool> ilist) { assign(ilist); }
// Access an element of the container, with bounds checking.
reference at(size_type position)
{
if (position >= size()) throw std::out_of_range("bitdeque::at() out of range");
return begin()[position];
}
const_reference at(size_type position) const
{
if (position >= size()) throw std::out_of_range("bitdeque::at() out of range");
return cbegin()[position];
}
// Access elements of the container without bounds checking.
reference operator[](size_type position) { return begin()[position]; }
const_reference operator[](size_type position) const { return cbegin()[position]; }
reference front() { return *begin(); }
const_reference front() const { return *cbegin(); }
reference back() { return end()[-1]; }
const_reference back() const { return cend()[-1]; }
/** Release unused memory. */
void shrink_to_fit()
{
m_deque.shrink_to_fit();
}
/** Empty the container. */
void clear() noexcept
{
m_deque.clear();
m_pad_begin = m_pad_end = 0;
}
// Append an element to the container.
void push_back(bool val)
{
extend_back(1);
back() = val;
}
reference emplace_back(bool val)
{
extend_back(1);
auto ref = back();
ref = val;
return ref;
}
// Prepend an element to the container.
void push_front(bool val)
{
extend_front(1);
front() = val;
}
reference emplace_front(bool val)
{
extend_front(1);
auto ref = front();
ref = val;
return ref;
}
// Remove the last element from the container.
void pop_back()
{
erase_back(1);
}
// Remove the first element from the container.
void pop_front()
{
erase_front(1);
}
/** Resize the container. */
void resize(size_type n)
{
if (n < size()) {
erase_back(size() - n);
} else {
extend_back(n - size());
}
}
// Swap two containers.
void swap(bitdeque& other) noexcept
{
std::swap(m_deque, other.m_deque);
std::swap(m_pad_begin, other.m_pad_begin);
std::swap(m_pad_end, other.m_pad_end);
}
friend void swap(bitdeque& b1, bitdeque& b2) noexcept { b1.swap(b2); }
// Erase elements from the container.
iterator erase(const_iterator first, const_iterator last)
{
size_type before = std::distance(cbegin(), first);
size_type dist = std::distance(first, last);
size_type after = std::distance(last, cend());
if (before < after) {
std::move_backward(begin(), begin() + before, end() - after);
erase_front(dist);
return begin() + before;
} else {
std::move(end() - after, end(), begin() + before);
erase_back(dist);
return end() - after;
}
}
iterator erase(iterator first, iterator last) { return erase(const_iterator{first}, const_iterator{last}); }
iterator erase(const_iterator pos) { return erase(pos, pos + 1); }
iterator erase(iterator pos) { return erase(const_iterator{pos}, const_iterator{pos} + 1); }
// Insert elements into the container.
iterator insert(const_iterator pos, bool val)
{
size_type before = pos - cbegin();
insert_zeroes(before, 1);
auto it = begin() + before;
*it = val;
return it;
}
iterator emplace(const_iterator pos, bool val) { return insert(pos, val); }
iterator insert(const_iterator pos, size_type count, bool val)
{
size_type before = pos - cbegin();
insert_zeroes(before, count);
auto it_begin = begin() + before;
auto it = it_begin;
auto it_end = it + count;
while (it != it_end) *(it++) = val;
return it_begin;
}
template<typename It>
iterator insert(const_iterator pos, It first, It last)
{
size_type before = pos - cbegin();
size_type count = std::distance(first, last);
insert_zeroes(before, count);
auto it_begin = begin() + before;
auto it = it_begin;
while (first != last) {
*(it++) = *(first++);
}
return it_begin;
}
};
#endif // BITCOIN_UTIL_BITDEQUE_H
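To make the padding bookkeeping in bitdeque concrete, here is a toy Python model of its representation (not the real container, and not part of the patch): a deque of 64-bit words plus unused-bit counts at both ends, playing the roles of m_pad_begin and m_pad_end. Bit clearing on erase is omitted for brevity.

```python
from collections import deque

BITS_PER_WORD = 64  # the C++ version packs bits into std::bitset words


class BitDequeModel:
    """Toy model of bitdeque's layout: words plus end paddings."""

    def __init__(self):
        self.words = deque()  # each entry models one 64-bit word
        self.pad_begin = 0    # unused bits at the start of the first word
        self.pad_end = 0      # unused bits at the end of the last word

    def size(self):
        # Matches bitdeque::size(): total bits minus padding at both ends.
        return len(self.words) * BITS_PER_WORD - self.pad_begin - self.pad_end

    def push_back(self, val: bool):
        # Matches extend_back(1): consume one bit of end padding,
        # allocating a fresh word when none is left.
        if self.pad_end == 0:
            self.words.append(0)
            self.pad_end = BITS_PER_WORD
        self.pad_end -= 1
        if val:
            self.words[-1] |= 1 << (BITS_PER_WORD - 1 - self.pad_end)

    def pop_front(self):
        # Matches erase_front(1): grow the front padding, dropping the
        # first word once it is fully padded out.
        self.pad_begin += 1
        if self.pad_begin == BITS_PER_WORD:
            self.words.popleft()
            self.pad_begin = 0
```

This is why push/pop at either end is O(1) amortized: only the padding counters move, and a word is allocated or freed once per 64 operations.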

@@ -2944,7 +2944,7 @@ static bool NotifyHeaderTip(CChainState& chainstate) LOCKS_EXCLUDED(cs_main) {
}
// Send block tip changed notifications without cs_main
if (fNotify) {
uiInterface.NotifyHeaderTip(GetSynchronizationState(fInitialBlockDownload), pindexHeader);
uiInterface.NotifyHeaderTip(GetSynchronizationState(fInitialBlockDownload), pindexHeader->nHeight, pindexHeader->nTime, false);
}
return fNotify;
}
@@ -3432,6 +3432,22 @@ std::vector<unsigned char> ChainstateManager::GenerateCoinbaseCommitment(CBlock&
return commitment;
}
bool HasValidProofOfWork(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams)
{
return std::all_of(headers.cbegin(), headers.cend(),
[&](const auto& header) { return CheckProofOfWork(header.GetHash(), header.nBits, consensusParams);});
}
arith_uint256 CalculateHeadersWork(const std::vector<CBlockHeader>& headers)
{
arith_uint256 total_work{0};
for (const CBlockHeader& header : headers) {
CBlockIndex dummy(header);
total_work += GetBlockProof(dummy);
}
return total_work;
}
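For reference, GetBlockProof() derives each header's work as floor(2^256 / (target + 1)), with the target decoded from the compact nBits field. A rough Python sketch of the same arithmetic (helper names here are illustrative, not from the patch; overflow and sign handling of the compact encoding are omitted):

```python
def compact_to_target(nbits: int) -> int:
    # nBits compact encoding: high byte is a base-256 exponent,
    # low 3 bytes are the mantissa.
    exponent = nbits >> 24
    mantissa = nbits & 0x007FFFFF
    if exponent <= 3:
        return mantissa >> (8 * (3 - exponent))
    return mantissa << (8 * (exponent - 3))


def block_proof(nbits: int) -> int:
    # Same value as GetBlockProof(): floor(2^256 / (target + 1)).
    return 2**256 // (compact_to_target(nbits) + 1)


def calculate_headers_work(nbits_list) -> int:
    # Python analogue of CalculateHeadersWork(): sum per-header proofs.
    return sum(block_proof(n) for n in nbits_list)
```

For example, a mainnet genesis-difficulty header (nBits 0x1d00ffff) contributes 0x100010001 units of work, which is why chain work grows by roughly 2^32 per minimum-difficulty block.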
/** Context-dependent validity checks.
* By "context", we mean only the previous block headers, but not the UTXO
* set; UTXO-related validity checks are done in ConnectBlock().
@@ -3572,9 +3588,10 @@ static bool ContextualCheckBlock(const CBlock& block, BlockValidationState& stat
return true;
}
bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValidationState& state, CBlockIndex** ppindex)
bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValidationState& state, CBlockIndex** ppindex, bool min_pow_checked)
{
AssertLockHeld(cs_main);
// Check for duplicate
uint256 hash = block.GetHash();
BlockMap::iterator miSelf{m_blockman.m_block_index.find(hash)};
@@ -3652,6 +3669,10 @@ bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValida
}
}
}
if (!min_pow_checked) {
LogPrint(BCLog::VALIDATION, "%s: not adding new block header %s, missing anti-dos proof-of-work validation\n", __func__, hash.ToString());
return state.Invalid(BlockValidationResult::BLOCK_HEADER_LOW_WORK, "too-little-chainwork");
}
CBlockIndex* pindex{m_blockman.AddToBlockIndex(block, m_best_header)};
if (ppindex)
@@ -3661,14 +3682,14 @@ bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValida
}
// Exposed wrapper for AcceptBlockHeader
bool ChainstateManager::ProcessNewBlockHeaders(const std::vector<CBlockHeader>& headers, BlockValidationState& state, const CBlockIndex** ppindex)
bool ChainstateManager::ProcessNewBlockHeaders(const std::vector<CBlockHeader>& headers, bool min_pow_checked, BlockValidationState& state, const CBlockIndex** ppindex)
{
AssertLockNotHeld(cs_main);
{
LOCK(cs_main);
for (const CBlockHeader& header : headers) {
CBlockIndex *pindex = nullptr; // Use a temp pindex instead of ppindex to avoid a const_cast
bool accepted{AcceptBlockHeader(header, state, &pindex)};
bool accepted{AcceptBlockHeader(header, state, &pindex, min_pow_checked)};
ActiveChainstate().CheckBlockIndex();
if (!accepted) {
@@ -3690,8 +3711,33 @@ bool ChainstateManager::ProcessNewBlockHeaders(const std::vector<CBlockHeader>&
return true;
}
void ChainstateManager::ReportHeadersPresync(const arith_uint256& work, int64_t height, int64_t timestamp)
{
AssertLockNotHeld(cs_main);
const auto& chainstate = ActiveChainstate();
{
LOCK(cs_main);
// Don't report headers presync progress if we already have a post-minchainwork header chain.
// This means we lose reporting for potentially legitimate, but unlikely, deep reorgs, but
// prevent attackers that spam low-work headers from filling our logs.
if (m_best_header->nChainWork >= UintToArith256(GetConsensus().nMinimumChainWork)) return;
// Rate limit headers presync updates to 4 per second, as these are not subject to DoS
// protection.
auto now = std::chrono::steady_clock::now();
if (now < m_last_presync_update + std::chrono::milliseconds{250}) return;
m_last_presync_update = now;
}
bool initial_download = chainstate.IsInitialBlockDownload();
uiInterface.NotifyHeaderTip(GetSynchronizationState(initial_download), height, timestamp, /*presync=*/true);
if (initial_download) {
const int64_t blocks_left{(GetTime() - timestamp) / GetConsensus().nPowTargetSpacing};
const double progress{100.0 * height / (height + blocks_left)};
LogPrintf("Pre-synchronizing blockheaders, height: %d (~%.2f%%)\n", height, progress);
}
}
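The progress figure logged above is only an estimate: it assumes one block per target spacing (10 minutes on mainnet) between the last presynced header's timestamp and the current time. A Python sketch of the same estimate (helper name is illustrative, not from the patch):

```python
def presync_progress(height: int, header_time: int, now: int,
                     pow_target_spacing: int = 600) -> float:
    # Mirrors the math in ReportHeadersPresync(): estimate remaining
    # blocks from the wall-clock gap to the last presynced header,
    # then express the synced height as a percentage of the estimated
    # total chain height.
    blocks_left = (now - header_time) // pow_target_spacing
    return 100.0 * height / (height + blocks_left)
```

So a node that has presynced 100 headers, with the latest header timestamped 100 target spacings in the past, reports roughly 50% progress.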
/** Store block on disk. If dbp is non-nullptr, the file is known to already reside on disk */
bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock)
bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock, bool min_pow_checked)
{
const CBlock& block = *pblock;
@@ -3701,7 +3747,7 @@ bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, Block
CBlockIndex *pindexDummy = nullptr;
CBlockIndex *&pindex = ppindex ? *ppindex : pindexDummy;
bool accepted_header{m_chainman.AcceptBlockHeader(block, state, &pindex)};
bool accepted_header{m_chainman.AcceptBlockHeader(block, state, &pindex, min_pow_checked)};
CheckBlockIndex();
if (!accepted_header)
@@ -3774,7 +3820,7 @@ bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, Block
return true;
}
bool ChainstateManager::ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool* new_block)
bool ChainstateManager::ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked, bool* new_block)
{
AssertLockNotHeld(cs_main);
@@ -3795,7 +3841,7 @@ bool ChainstateManager::ProcessNewBlock(const std::shared_ptr<const CBlock>& blo
bool ret = CheckBlock(*block, state, GetConsensus());
if (ret) {
// Store to disk
ret = ActiveChainstate().AcceptBlock(block, state, &pindex, force_processing, nullptr, new_block);
ret = ActiveChainstate().AcceptBlock(block, state, &pindex, force_processing, nullptr, new_block, min_pow_checked);
}
if (!ret) {
GetMainSignals().BlockChecked(*block, state);
@@ -4332,7 +4378,7 @@ void CChainState::LoadExternalBlockFile(
const CBlockIndex* pindex = m_blockman.LookupBlockIndex(hash);
if (!pindex || (pindex->nStatus & BLOCK_HAVE_DATA) == 0) {
BlockValidationState state;
if (AcceptBlock(pblock, state, nullptr, true, dbp, nullptr)) {
if (AcceptBlock(pblock, state, nullptr, true, dbp, nullptr, true)) {
nLoaded++;
}
if (state.IsError()) {
@@ -4370,7 +4416,7 @@ void CChainState::LoadExternalBlockFile(
head.ToString());
LOCK(cs_main);
BlockValidationState dummy;
if (AcceptBlock(pblockrecursive, dummy, nullptr, true, &it->second, nullptr)) {
if (AcceptBlock(pblockrecursive, dummy, nullptr, true, &it->second, nullptr, true)) {
nLoaded++;
queue.push_back(pblockrecursive->GetHash());
}

@@ -340,6 +340,12 @@ bool TestBlockValidity(BlockValidationState& state,
bool fCheckPOW = true,
bool fCheckMerkleRoot = true) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
/** Check that the proof of work on each block header matches the value in nBits */
bool HasValidProofOfWork(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams);
/** Return the sum of the work on a given set of headers */
arith_uint256 CalculateHeadersWork(const std::vector<CBlockHeader>& headers);
/** RAII wrapper for VerifyDB: Verify consistency of the block and coin databases */
class CVerifyDB {
public:
@@ -650,7 +656,7 @@ public:
EXCLUSIVE_LOCKS_REQUIRED(!m_chainstate_mutex)
LOCKS_EXCLUDED(::cs_main);
bool AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
bool AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock, bool min_pow_checked) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
// Block (dis)connection on a given view:
DisconnectResult DisconnectBlock(const CBlock& block, const CBlockIndex* pindex, CCoinsViewCache& view)
@@ -847,13 +853,20 @@ private:
/**
* If a block header hasn't already been seen, call CheckBlockHeader on it, ensure
* that it doesn't descend from an invalid block, and then add it to m_block_index.
* Caller must set min_pow_checked=true in order to add a new header to the
* block index (permanent memory storage), indicating that the header is
* known to be part of a sufficiently high-work chain (anti-dos check).
*/
bool AcceptBlockHeader(
const CBlockHeader& block,
BlockValidationState& state,
CBlockIndex** ppindex) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
CBlockIndex** ppindex,
bool min_pow_checked) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
friend CChainState;
/** Most recent headers presync progress update, for rate-limiting. */
std::chrono::time_point<std::chrono::steady_clock> m_last_presync_update GUARDED_BY(::cs_main) {};
public:
using Options = kernel::ChainstateManagerOpts;
@@ -989,10 +1002,15 @@ public:
*
* @param[in] block The block we want to process.
* @param[in] force_processing Process this block even if unrequested; used for non-network block sources.
* @param[in] min_pow_checked True if proof-of-work anti-DoS checks have
* been done by caller for headers chain
* (note: only affects headers acceptance; if
* block header is already present in block
* index then this parameter has no effect)
* @param[out] new_block A boolean which is set to indicate if the block was first received via this call
* @returns If the block was processed, independently of block validity
*/
bool ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool* new_block) LOCKS_EXCLUDED(cs_main);
bool ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked, bool* new_block) LOCKS_EXCLUDED(cs_main);
/**
* Process incoming block headers.
@@ -1001,10 +1019,11 @@ public:
* validationinterface callback.
*
* @param[in] block The block headers themselves
* @param[in] min_pow_checked True if proof-of-work anti-DoS checks have been done by caller for headers chain
* @param[out] state This may be set to an Error state if any error occurred processing them
* @param[out] ppindex If set, the pointer will be set to point to the last new block index object for the given headers
*/
bool ProcessNewBlockHeaders(const std::vector<CBlockHeader>& block, BlockValidationState& state, const CBlockIndex** ppindex = nullptr) LOCKS_EXCLUDED(cs_main);
bool ProcessNewBlockHeaders(const std::vector<CBlockHeader>& block, bool min_pow_checked, BlockValidationState& state, const CBlockIndex** ppindex = nullptr) LOCKS_EXCLUDED(cs_main);
/**
* Try to add a transaction to the memory pool.
@@ -1028,6 +1047,12 @@ public:
/** Produce the necessary coinbase commitment for a block (modifies the hash, don't call for mined blocks). */
std::vector<unsigned char> GenerateCoinbaseCommitment(CBlock& block, const CBlockIndex* pindexPrev) const;
/** This is used by net_processing to report pre-synchronization progress of headers, as
* headers are not yet fed to validation during that time, but validation is (for now)
* responsible for logging and signalling through NotifyHeaderTip, so it needs this
* information. */
void ReportHeadersPresync(const arith_uint256& work, int64_t height, int64_t timestamp);
~ChainstateManager();
};

@@ -1297,7 +1297,7 @@ class FullBlockTest(BitcoinTestFramework):
blocks2 = []
for i in range(89, LARGE_REORG_SIZE + 89):
blocks2.append(self.next_block("alt" + str(i)))
self.send_blocks(blocks2, False, force_send=True)
self.send_blocks(blocks2, False, force_send=False)
# extend alt chain to trigger re-org
block = self.next_block("alt" + str(chain1_tip + 1))

@@ -615,6 +615,27 @@ class CompactBlocksTest(BitcoinTestFramework):
bad_peer.send_message(msg)
bad_peer.wait_for_disconnect()
def test_low_work_compactblocks(self, test_node):
# A compactblock with insufficient work won't get its header included
node = self.nodes[0]
hashPrevBlock = int(node.getblockhash(node.getblockcount() - 150), 16)
block = self.build_block_on_tip(node)
block.hashPrevBlock = hashPrevBlock
block.solve()
comp_block = HeaderAndShortIDs()
comp_block.initialize_from_block(block)
with self.nodes[0].assert_debug_log(['[net] Ignoring low-work compact block from peer 0']):
test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p()))
tips = node.getchaintips()
found = False
for x in tips:
if x["hash"] == block.hash:
found = True
break
assert not found
def test_compactblocks_not_at_tip(self, test_node):
node = self.nodes[0]
# Test that requesting old compactblocks doesn't work.
@@ -833,6 +854,9 @@ class CompactBlocksTest(BitcoinTestFramework):
self.log.info("Testing compactblock requests/announcements not at chain tip...")
self.test_compactblocks_not_at_tip(self.segwit_node)
self.log.info("Testing handling of low-work compact blocks...")
self.test_low_work_compactblocks(self.segwit_node)
self.log.info("Testing handling of incorrect blocktxn responses...")
self.test_incorrect_blocktxn_response(self.segwit_node)

@@ -22,6 +22,7 @@ class RejectLowDifficultyHeadersTest(BitcoinTestFramework):
self.setup_clean_chain = True
self.chain = 'testnet3' # Use testnet chain because it has an early checkpoint
self.num_nodes = 2
self.extra_args = [["-minimumchainwork=0x0"], ["-minimumchainwork=0x0"]]
def add_options(self, parser):
parser.add_argument(
@@ -62,7 +63,7 @@ class RejectLowDifficultyHeadersTest(BitcoinTestFramework):
self.log.info("Feed all fork headers (succeeds without checkpoint)")
# On node 0 it succeeds because checkpoints are disabled
self.restart_node(0, extra_args=['-nocheckpoints'])
self.restart_node(0, extra_args=['-nocheckpoints', "-minimumchainwork=0x0"])
peer_no_checkpoint = self.nodes[0].add_p2p_connection(P2PInterface())
peer_no_checkpoint.send_and_ping(msg_headers(self.headers_fork))
assert {

@@ -0,0 +1,144 @@
#!/usr/bin/env python3
# Copyright (c) 2019-2021 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test that we reject low difficulty headers to prevent our block tree from filling up with useless bloat"""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.p2p import (
P2PInterface,
)
from test_framework.messages import (
msg_headers,
)
from test_framework.blocktools import (
NORMAL_GBT_REQUEST_PARAMS,
create_block,
)
from test_framework.util import assert_equal
NODE1_BLOCKS_REQUIRED = 15
NODE2_BLOCKS_REQUIRED = 2047
class RejectLowDifficultyHeadersTest(BitcoinTestFramework):
def set_test_params(self):
self.setup_clean_chain = True
self.num_nodes = 3
# Node0 has no required chainwork; node1 requires 15 blocks on top of the genesis block; node2 requires 2047
self.extra_args = [["-minimumchainwork=0x0", "-checkblockindex=0"], ["-minimumchainwork=0x1f", "-checkblockindex=0"], ["-minimumchainwork=0x1000", "-checkblockindex=0"]]
def setup_network(self):
self.setup_nodes()
self.reconnect_all()
self.sync_all()
def disconnect_all(self):
self.disconnect_nodes(0, 1)
self.disconnect_nodes(0, 2)
def reconnect_all(self):
self.connect_nodes(0, 1)
self.connect_nodes(0, 2)
def test_chains_sync_when_long_enough(self):
self.log.info("Generate blocks on the node with no required chainwork, and verify nodes 1 and 2 have no new headers in their headers tree")
with self.nodes[1].assert_debug_log(expected_msgs=["[net] Ignoring low-work chain (height=14)"]), self.nodes[2].assert_debug_log(expected_msgs=["[net] Ignoring low-work chain (height=14)"]):
self.generate(self.nodes[0], NODE1_BLOCKS_REQUIRED-1, sync_fun=self.no_op)
for node in self.nodes[1:]:
chaintips = node.getchaintips()
assert(len(chaintips) == 1)
assert {
'height': 0,
'hash': '0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206',
'branchlen': 0,
'status': 'active',
} in chaintips
self.log.info("Generate more blocks to satisfy node1's minchainwork requirement, and verify node2 still has no new headers in headers tree")
with self.nodes[2].assert_debug_log(expected_msgs=["[net] Ignoring low-work chain (height=15)"]):
self.generate(self.nodes[0], NODE1_BLOCKS_REQUIRED - self.nodes[0].getblockcount(), sync_fun=self.no_op)
self.sync_blocks(self.nodes[0:2])
assert {
'height': 0,
'hash': '0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206',
'branchlen': 0,
'status': 'active',
} in self.nodes[2].getchaintips()
assert(len(self.nodes[2].getchaintips()) == 1)
self.log.info("Generate long chain for node0/node1")
self.generate(self.nodes[0], NODE2_BLOCKS_REQUIRED-self.nodes[0].getblockcount(), sync_fun=self.no_op)
self.log.info("Verify that node2 will sync the chain when it gets long enough")
self.sync_blocks()
def test_peerinfo_includes_headers_presync_height(self):
self.log.info("Test that getpeerinfo() includes headers presync height")
# Disconnect network, so that we can find our own peer connection more
# easily
self.disconnect_all()
p2p = self.nodes[0].add_p2p_connection(P2PInterface())
node = self.nodes[0]
# Ensure we have a long chain already
current_height = self.nodes[0].getblockcount()
if (current_height < 3000):
self.generate(node, 3000-current_height, sync_fun=self.no_op)
# Send a group of 2000 headers, forking from genesis.
new_blocks = []
hashPrevBlock = int(node.getblockhash(0), 16)
for i in range(2000):
block = create_block(hashprev = hashPrevBlock, tmpl=node.getblocktemplate(NORMAL_GBT_REQUEST_PARAMS))
block.solve()
new_blocks.append(block)
hashPrevBlock = block.sha256
headers_message = msg_headers(headers=new_blocks)
p2p.send_and_ping(headers_message)
# getpeerinfo should show a sync in progress
assert_equal(node.getpeerinfo()[0]['presynced_headers'], 2000)
def test_large_reorgs_can_succeed(self):
self.log.info("Test that a 2000+ block reorg, starting from a point that is more than 2000 blocks before a locator entry, can succeed")
self.sync_all() # Ensure all nodes are synced.
self.disconnect_all()
# locator(block at height T) will have heights:
# [T, T-1, ..., T-10, T-12, T-16, T-24, T-40, T-72, T-136, T-264,
# T-520, T-1032, T-2056, T-4104, ...]
# So mine a number of blocks > 4104 to ensure that the first window of
# received headers during a sync is fully between locator entries.
BLOCKS_TO_MINE = 4110
self.generate(self.nodes[0], BLOCKS_TO_MINE, sync_fun=self.no_op)
self.generate(self.nodes[1], BLOCKS_TO_MINE+2, sync_fun=self.no_op)
self.reconnect_all()
self.sync_blocks(timeout=300) # Ensure tips eventually agree
def run_test(self):
self.test_chains_sync_when_long_enough()
self.test_large_reorgs_can_succeed()
self.test_peerinfo_includes_headers_presync_height()
if __name__ == '__main__':
RejectLowDifficultyHeadersTest().main()

@@ -72,6 +72,13 @@ class AcceptBlockTest(BitcoinTestFramework):
def setup_network(self):
self.setup_nodes()
def check_hash_in_chaintips(self, node, blockhash):
tips = node.getchaintips()
for x in tips:
if x["hash"] == blockhash:
return True
return False
def run_test(self):
test_node = self.nodes[0].add_p2p_connection(P2PInterface())
min_work_node = self.nodes[1].add_p2p_connection(P2PInterface())
@@ -89,10 +96,15 @@ class AcceptBlockTest(BitcoinTestFramework):
blocks_h2[i].solve()
block_time += 1
test_node.send_and_ping(msg_block(blocks_h2[0]))
with self.nodes[1].assert_debug_log(expected_msgs=[f"AcceptBlockHeader: not adding new block header {blocks_h2[1].hash}, missing anti-dos proof-of-work validation"]):
min_work_node.send_and_ping(msg_block(blocks_h2[1]))
assert_equal(self.nodes[0].getblockcount(), 2)
assert_equal(self.nodes[1].getblockcount(), 1)
# Ensure that the header of the second block was also not accepted by node1
assert_equal(self.check_hash_in_chaintips(self.nodes[1], blocks_h2[1].hash), False)
self.log.info("First height 2 block accepted by node0; correctly rejected by node1")
# 3. Send another block that builds on genesis.

@@ -452,8 +452,9 @@ class BlockchainTest(BitcoinTestFramework):
# (Previously this was broken based on setting
# `rpc/blockchain.cpp:latestblock` incorrectly.)
#
b20hash = node.getblockhash(20)
b20 = node.getblock(b20hash)
fork_height = current_height - 100 # choose something vaguely near our tip
fork_hash = node.getblockhash(fork_height)
fork_block = node.getblock(fork_hash)
def solve_and_send_block(prevhash, height, time):
b = create_block(prevhash, create_coinbase(height), time)
@@ -461,10 +462,10 @@ class BlockchainTest(BitcoinTestFramework):
peer.send_and_ping(msg_block(b))
return b
b21f = solve_and_send_block(int(b20hash, 16), 21, b20['time'] + 1)
b22f = solve_and_send_block(b21f.sha256, 22, b21f.nTime + 1)
b1 = solve_and_send_block(int(fork_hash, 16), fork_height+1, fork_block['time'] + 1)
b2 = solve_and_send_block(b1.sha256, fork_height+2, b1.nTime + 1)
node.invalidateblock(b22f.hash)
node.invalidateblock(b2.hash)
def assert_waitforheight(height, timeout=2):
assert_equal(

@@ -186,6 +186,7 @@ BASE_SCRIPTS = [
'wallet_signrawtransactionwithwallet.py --legacy-wallet',
'wallet_signrawtransactionwithwallet.py --descriptors',
'rpc_signrawtransactionwithkey.py',
'p2p_headers_sync_with_minchainwork.py',
'rpc_rawtransaction.py --legacy-wallet',
'wallet_groups.py --legacy-wallet',
'wallet_transactiontime_rescan.py --descriptors',