This is a commit in preparation for the next few commits. The functions
are moved to methods so that BlockManager options can be passed in
without re-declaring each function.
The functions now moved into BlockManager should no longer take the
params as an argument, but should instead use the member variable.
In the moved ReadBlockFromDisk and UndoReadFromDisk, change
the function signature to accept a reference to a CBlockIndex instead of
a raw pointer. The pointer is expected to be non-null, so reflect that
in the type.
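Roughly, the new signatures look like this (a sketch only; the exact parameter lists in the codebase may differ):

```cpp
// Before: free functions taking a raw pointer and the chain params.
bool ReadBlockFromDisk(CBlock& block, const CBlockIndex* pindex,
                       const Consensus::Params& params);
bool UndoReadFromDisk(CBlockUndo& blockundo, const CBlockIndex* pindex);

// After: BlockManager methods taking a reference; non-null is expressed
// in the type, and the params come from the BlockManager's own options.
class BlockManager {
public:
    bool ReadBlockFromDisk(CBlock& block, const CBlockIndex& index) const;
    bool UndoReadFromDisk(CBlockUndo& blockundo, const CBlockIndex& index) const;
};
```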
To allow for the move of functions to BlockManager methods, all call
sites require an instantiated BlockManager, or a callback to one.
The fs.* files are already part of the libbitcoin_util library. With the
introduction of fs_helpers.*, it makes sense to move fs.* into the
util/ directory as well.
3141eab9c6 test: add functional test for ScanAndUnlinkAlreadyPrunedFiles (Andrew Toth)
e252909e56 test: add unit test for ScanAndUnlinkAlreadyPrunedFiles (Andrew Toth)
77557dda4a prune: scan and unlink already pruned block files on startup (Andrew Toth)
Pull request description:
There are a few cases where we can mark a block and undo file as pruned in our block index, but not actually remove the files from disk.
1. If we call `FindFilesToPrune` or `FindFilesToPruneManual` and crash before `UnlinkPrunedFiles`.
2. If on Windows there is an open file handle to the file somewhere else when calling `fs::remove` in `UnlinkPrunedFiles` (https://en.cppreference.com/w/cpp/filesystem/remove, https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-deletefilew#remarks). This could be from another process, or if we are calling `ReadBlockFromDisk`/`ReadRawBlockFromDisk` without having a lock on `cs_main` (which has been allowed since ccd8ef65f9).
This PR mitigates this by scanning all pruned block files on startup after `LoadBlockIndexDB` and unlinking them again.
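A minimal sketch of the startup scan (simplified; member names such as `m_have_pruned`, `m_blockfile_info`, and `m_last_blockfile` are taken from BlockManager, but the details here are an approximation, not the exact implementation):

```cpp
void BlockManager::ScanAndUnlinkAlreadyPrunedFiles()
{
    AssertLockHeld(::cs_main);
    if (!m_have_pruned) return; // nothing was ever pruned, nothing to do

    std::set<int> block_files_to_prune;
    for (int file_number = 0; file_number < m_last_blockfile; file_number++) {
        // A file recorded as empty in the block index was marked pruned,
        // but may still exist on disk; unlink it again.
        if (m_blockfile_info[file_number].nSize == 0) {
            block_files_to_prune.insert(file_number);
        }
    }
    UnlinkPrunedFiles(block_files_to_prune);
}
```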
ACKs for top commit:
achow101:
ACK 3141eab9c6
pablomartin4btc:
re-ACK with added functional test 3141eab9c6.
furszy:
Code review ACK 3141eab9
theStack:
Code-review ACK 3141eab9c6
Tree-SHA512: 6c73bc57838ad1b7e5d441af3c4d6bf4c61c4382e2b86485e57fbb74a61240710c0ceeceb8b4834e610ecfa3175c6955c81ea4b2285fee11ca6383f472979d8d
bcb0cacac2 reindex, log, test: fixes #21379 (mruddy)
Pull request description:
Fixes #21379.
The blocks/blk?????.dat files are mutated and become increasingly malformed, or corrupt, as a result of running the re-indexing process.
The mutations occur after the re-indexing process has finished, as new blocks are appended, but are a result of a re-indexing process miscalculation that lingers in the block manager's `m_blockfile_info` `nSize` data until node restart.
These additions to the blk files are non-fatal, but also not desirable.
That is, this is a form of data corruption that the reading code is lenient enough to process (it skips the extra bytes), but it adds some scary looking log messages as it encounters them.
The summary of the problem is that the re-index process double counts the size of the serialization header (magic message start bytes [4 bytes] + length [4 bytes] = 8 bytes) while calculating the blk data file size (both values already account for the serialization header's size, hence why it is over accounted).
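In rough numbers (illustrative names, not the actual code), the over-count looks like this:

```cpp
// Each block record on disk is: 4-byte magic + 4-byte length + block data.
constexpr unsigned int BLOCK_SERIALIZATION_HEADER_SIZE{8};

unsigned int block_size{1'000'000}; // serialized block size (example)
// The record size read back during reindex already includes the header:
unsigned int record_size{BLOCK_SERIALIZATION_HEADER_SIZE + block_size};

// Buggy accounting: the header was added a second time when growing nSize.
unsigned int buggy_nSize{record_size + BLOCK_SERIALIZATION_HEADER_SIZE}; // 8 too many
// Fixed accounting: the header is counted exactly once.
unsigned int fixed_nSize{record_size};
```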
This bug manifests itself in a few different ways, after re-indexing, when a new block from a peer is processed:
1. If the new block will not fit into the last blk file processed while re-indexing, while remaining under the 128 MiB limit, then the blk file is flushed to disk and truncated to a size that is 8 greater than it should be. The truncation adds zero bytes (see `FlatFileSeq::Flush` and `TruncateFile`).
2. If the last blk file processed while re-indexing has logical space for the new block under the 128 MiB limit:
   1. If the blk file was not already large enough to hold the new block, then the zeros are, in effect, added by `fseek` when the file is opened for writing. Eight zero bytes are added to the end of the last blk file just before the new block is written. This happens because the write offset is 8 too great due to the miscalculation. The result is 8 zero bytes between the end of the last block and the beginning of the next block's magic + length + block.
   2. If the blk file was already large enough to hold the new block, then the existing file contents remain in the 8-byte gap between the end of the last block and the beginning of the next block's magic + length + block. Commonly, when this occurs, the blk file contains blocks that were not connected to the block tree during reindex; they are left behind by the reindex process and later overwritten when new blocks are added. The orphaned blocks can be valid blocks, but due to the nature of concurrent block download, the parent may not have been retrieved and written by the time the node was previously shut down.
ACKs for top commit:
LarryRuane:
tested code-review ACK bcb0cacac2
ryanofsky:
Code review ACK bcb0cacac2. This is a disturbing bug with an easy fix which seems well-worth merging.
mzumsande:
ACK bcb0cacac2 (reviewed code and did some testing, I agree that it fixes the bug).
w0xlt:
tACK bcb0cacac2
Tree-SHA512: acc97927ea712916506772550451136b0f1e5404e92df24cc05e405bb09eb6fe7c3011af3dd34a7723c3db17fda657ae85fa314387e43833791e9169c0febe51
Also:
1. Have CChainState::LoadMempool and ::ThreadImport take in paths and
pass them through untouched to LoadMempool.
2. Make LoadMempool exit early if the load_path is empty.
3. Adjust the call to ::ThreadImport in ::AppInitMain to correctly pass
in an empty path if mempool persistence is disabled.
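A hedged sketch of the early exit (the signature is simplified here; the real function takes more parameters):

```cpp
bool LoadMempool(CTxMemPool& pool, const fs::path& load_path)
{
    // An empty path means mempool persistence is disabled: do nothing.
    if (load_path.empty()) return false;

    FILE* file{fsbridge::fopen(load_path, "rb")};
    if (!file) return false;
    // ... deserialize transactions and submit them to the pool (elided) ...
    return true;
}
```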
This fixes a blk file size calculation made during reindex that results in increasingly malformed blk files.
The fix is to avoid double counting the size of the serialization header during reindex.
This adds a unit test to reproduce the bug before the fix and to ensure that it does not recur.
These changes also soften a log message so it is less alarming, since this is a common
and recoverable form of data corruption. The messages can now be filtered via the reindex debug log category.
In previous commits in this patchset, we've made sure that every
Unload/UnloadBlockIndex member function resets its own members, and does
not reach out to globals.
This means that their corresponding classes' default destructors can now
replace them, and do an even more thorough job without the need to be
updated for every new member variable.
Therefore, we can remove them, and also remove UnloadBlockIndex since
that's not used anymore.
Unfortunately, chainstatemanager_loadblockindex relies on
CChainState::UnloadBlockIndex, so that needs to stay for now.
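Illustratively (the class and members below are hypothetical, just to show the RAII point), once every member cleans itself up, the implicitly-defined destructor does the whole job and cannot go stale when members are added:

```cpp
class ChainstateSketch {
    // RAII members: each releases its own resources on destruction.
    std::unique_ptr<CoinsViews> m_coins_views;
    std::vector<uint256> m_failed_blocks;
    // No explicit Unload() or destructor needed: the compiler-generated
    // destructor destroys every member, including ones added later.
};
```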
fa1970f075 Make BlockManager::LoadBlockIndex private (MarcoFalke)
Pull request description:
* After commit fa27f03b49 `BlockManager::LoadBlockIndex` is only called by `BlockManager::LoadBlockIndexDB`. Thus, it can be made `private`.
* After commit c600ee3816 `m_best_invalid` is no longer accessed by `BlockManager::LoadBlockIndex`. Thus, the unused `friend` can be removed.
ACKs for top commit:
mruddy:
ACK fa1970f075 I verified by double checking references, then applying the patch, and running `make check`. LGTM.
Tree-SHA512: 9b36b4c59bf7ad01171764ce61b1be9750fc92d105c4fe939b1a6a70027ab6300d5d2a2fc3e82f981e22c3987f2ca84e092d2e1f8463fa320af9f05048580c0a
This change also introduces an additional buffer of 10 blocks (PRUNE_LOCK_BUFFER) that will not be pruned before the best block.
Co-authored-by: Luke Dashjr <luke-jr+git@utopios.org>
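A sketch of the buffer logic (the constant name comes from the commit; the helper function below is hypothetical):

```cpp
// Blocks within this distance below the prune lock height are kept.
static constexpr int PRUNE_LOCK_BUFFER{10};

// Hypothetical helper: a block at `height` may only be pruned if it is
// more than PRUNE_LOCK_BUFFER blocks below the locked height.
bool CanPrune(int height, int lock_height)
{
    return height < lock_height - PRUNE_LOCK_BUFFER;
}
```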
This commit effectively splits the "load block index itself" logic from
"derive Chainstate variables from loaded block index" logic.
This means that BlockManager::LoadBlockIndex{,DB} will only load what's
relevant to the BlockManager.
I strongly recommend reviewing with the following git-diff flags:
--color-moved=dimmed_zebra --color-moved-ws=allow-indentation-change
Instead of having CBlockIndex objects live on the heap, which requires manual
memory management, have them be owned by m_block_index. This means that
they will live and die with BlockManager.
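Sketch of the ownership change (type names follow the codebase, though the exact declaration may differ):

```cpp
// Before: the map held raw pointers; nodes were new'd and deleted by hand.
// std::unordered_map<uint256, CBlockIndex*, BlockHasher> m_block_index;

// After: the map holds the objects themselves, so every CBlockIndex is
// constructed into, and destroyed with, BlockManager's m_block_index.
std::unordered_map<uint256, CBlockIndex, BlockHasher> m_block_index;
```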
A change to BlockManager::LookupBlockIndex:
- Previously, it was a const member function returning a non-const CBlockIndex*
- Now, there are const and non-const versions of
BlockManager::LookupBlockIndex returning a CBlockIndex with the same
const-ness as the member function:
(e.g. const CBlockIndex* LookupBlockIndex(...) const)
See next commit for some weirdness that this eliminates.
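The pair of overloads sketched (matching the description above):

```cpp
class BlockManager {
public:
    // The returned pointer's const-ness now matches the member function's.
    const CBlockIndex* LookupBlockIndex(const uint256& hash) const;
    CBlockIndex* LookupBlockIndex(const uint256& hash);
};
```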
The range-based for-loops are modernized (using auto + destructuring) in
a future commit, illustrated below.
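Illustratively, the modernized loop form looks like:

```cpp
for (auto& [hash, block_index] : m_block_index) {
    // hash: the uint256 key; block_index: the CBlockIndex owned by the map.
}
```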
Mutex cs_main is already held by the caller of WriteUndoDataForBlock().
This change is needed so that, in the following commits,
CBlockIndex::GetUndoPos() can be required to hold cs_main and
CBlockIndex::nStatus can be guarded by cs_main without adding two
unnecessary cs_main locks to WriteUndoDataForBlock().
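A hedged sketch of the annotation (parameter list approximated): instead of locking again, the function declares the requirement so the thread-safety analysis enforces it at every call site:

```cpp
// Caller must already hold cs_main; no LOCK(cs_main) inside the function.
static bool WriteUndoDataForBlock(const CBlockUndo& blockundo,
                                  BlockValidationState& state,
                                  CBlockIndex* pindex,
                                  const CChainParams& chainparams)
    EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
```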