This change also introduces an additional buffer of 10 blocks (PRUNE_LOCK_BUFFER) that will not be pruned before the best block.
Co-authored-by: Luke Dashjr <luke-jr+git@utopios.org>
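A minimal sketch of how such a buffer constant can be applied when deciding how far back pruning may go; MaxPrunableHeight is a hypothetical helper, not a function introduced by this change:

    static constexpr int PRUNE_LOCK_BUFFER{10};

    // Hypothetical helper: the highest height that may still be pruned,
    // keeping PRUNE_LOCK_BUFFER blocks (plus the best block itself) on disk.
    int MaxPrunableHeight(int best_block_height)
    {
        return best_block_height - PRUNE_LOCK_BUFFER - 1;
    }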
This commit effectively splits the "load block index itself" logic from
the "derive Chainstate variables from loaded block index" logic.
This means that BlockManager::LoadBlockIndex{,DB} will only load what's
relevant to the BlockManager.
I strongly recommend reviewing with the following git-diff flags:
--color-moved=dimmed_zebra --color-moved-ws=allow-indentation-change
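As a rough sketch of the split (simplified declarations; the placement of the Chainstate-side step is an assumption, not taken from this change):

    class BlockManager {
    public:
        // Step 1: load only BlockManager state -- the block tree entries
        // into m_block_index plus block-file metadata.
        bool LoadBlockIndexDB();
    };

    class ChainstateManager {
    public:
        // Step 2 (hypothetical placement): derive Chainstate variables,
        // e.g. the best header and candidate tips, from the loaded index.
        bool LoadBlockIndex();
    };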
Instead of having CBlockIndex objects live on the heap, which requires manual
memory management, have them be owned by m_block_index. This means that
they will live and die with the BlockManager.
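A minimal sketch of the ownership change, assuming uint256, CBlockIndex and BlockHasher are the existing types and only the map's value type changes:

    #include <unordered_map>

    // Before: heap-allocated entries tracked through raw pointers, which
    // had to be deleted by hand.
    using OldBlockMap = std::unordered_map<uint256, CBlockIndex*, BlockHasher>;

    // After: the map owns the CBlockIndex objects by value, so they are
    // destroyed together with the BlockManager that owns m_block_index.
    using BlockMap = std::unordered_map<uint256, CBlockIndex, BlockHasher>;
    BlockMap m_block_index;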
A change to BlockManager::LookupBlockIndex:
- Previously, it was a const member function returning a non-const CBlockIndex*
- Now, there are const and non-const versions of
  BlockManager::LookupBlockIndex, each returning a CBlockIndex* with the
  same const-ness as the member function (sketched below):
(e.g. const CBlockIndex* LookupBlockIndex(...) const)
See next commit for some weirdness that this eliminates.
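For illustration, the pair of overloads could look roughly like this (bodies simplified, reusing the value-owning BlockMap sketched above):

    const CBlockIndex* BlockManager::LookupBlockIndex(const uint256& hash) const
    {
        BlockMap::const_iterator it{m_block_index.find(hash)};
        return it == m_block_index.end() ? nullptr : &it->second;
    }

    CBlockIndex* BlockManager::LookupBlockIndex(const uint256& hash)
    {
        BlockMap::iterator it{m_block_index.find(hash)};
        return it == m_block_index.end() ? nullptr : &it->second;
    }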
The range-based for-loops are modernized (using auto + destructuring) in
a future commit.
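Illustratively, loops of this shape are rewritten (loop bodies omitted):

    // Before: explicit value_type and a separate pointer into the entry.
    for (BlockMap::value_type& entry : m_block_index) {
        CBlockIndex* pindex{&entry.second};
        // ... use pindex ...
    }

    // After: auto + structured bindings ("destructuring").
    for (auto& [hash, block_index] : m_block_index) {
        // ... use block_index directly ...
    }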
Mutex cs_main is already held by the caller of WriteUndoDataForBlock().
This change is needed so that the following commits can require cs_main to
be held when calling CBlockIndex::GetUndoPos() and can guard
CBlockIndex::nStatus with cs_main, without adding two unnecessary cs_main
locks to WriteUndoDataForBlock().
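A hedged sketch of the resulting pattern (the parameter list is assumed, not copied from the tree): the declaration carries the Clang thread-safety annotation and the definition only asserts the lock instead of taking it.

    // Declaration (header): the caller must already hold cs_main.
    bool WriteUndoDataForBlock(const CBlockUndo& blockundo,
                               BlockValidationState& state,
                               CBlockIndex* pindex,
                               const CChainParams& chainparams)
        EXCLUSIVE_LOCKS_REQUIRED(::cs_main);

    // Definition: assert rather than re-lock.
    bool BlockManager::WriteUndoDataForBlock(const CBlockUndo& blockundo,
                                             BlockValidationState& state,
                                             CBlockIndex* pindex,
                                             const CChainParams& chainparams)
    {
        AssertLockHeld(::cs_main); // caller already holds cs_main; no re-lock
        // ... pindex->GetUndoPos() and pindex->nStatus can then be annotated
        // with EXCLUSIVE_LOCKS_REQUIRED(cs_main) / GUARDED_BY(cs_main) in the
        // following commits without this function taking two extra locks.
        return true;
    }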