Make it a static function in dbwrapper.cpp, since it is not used
elsewhere and, when left in the header, would expose a leveldb type.
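A minimal sketch of the idea, assuming the helper builds the leveldb options (the actual function name and settings in dbwrapper.cpp may differ):

```cpp
// dbwrapper.cpp -- illustrative sketch only. Because the function is static and lives in
// the implementation file, dbwrapper.h no longer has to name any leveldb type.
#include <cstddef>
#include <leveldb/cache.h>
#include <leveldb/filter_policy.h>
#include <leveldb/options.h>

static leveldb::Options GetOptions(size_t cache_size)
{
    leveldb::Options options;
    options.block_cache = leveldb::NewLRUCache(cache_size / 2);
    options.write_buffer_size = cache_size / 4; // up to two write buffers may be held in memory
    options.filter_policy = leveldb::NewBloomFilterPolicy(10);
    options.compression = leveldb::kNoCompression;
    return options;
}
```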
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Hide the leveldb::Iterator member variable with a pimpl in order not to
expose it directly in the header.
Also, move CDBWrapper::NewIterator to the dbwrapper implementation to
use the pimpl for CDBIterator initialization.
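A rough sketch of the pimpl split, with illustrative names (the actual struct and member names may differ):

```cpp
// Sketch only: the header forward-declares the impl struct, so users of dbwrapper.h
// never need a leveldb include; the leveldb::Iterator lives behind the pointer.

// --- dbwrapper.h ---
#include <memory>

class CDBIterator
{
    struct IteratorImpl;                            // defined in dbwrapper.cpp
    const std::unique_ptr<IteratorImpl> m_impl_iter;

public:
    explicit CDBIterator(std::unique_ptr<IteratorImpl> impl);
    ~CDBIterator();                                 // out-of-line, where IteratorImpl is complete
    bool Valid() const;
    void Next();
};

// --- dbwrapper.cpp ---
#include <leveldb/iterator.h>

struct CDBIterator::IteratorImpl {
    std::unique_ptr<leveldb::Iterator> iter;
};

CDBIterator::CDBIterator(std::unique_ptr<IteratorImpl> impl) : m_impl_iter{std::move(impl)} {}
CDBIterator::~CDBIterator() = default;
bool CDBIterator::Valid() const { return m_impl_iter->iter->Valid(); }
void CDBIterator::Next() { m_impl_iter->iter->Next(); }

// CDBWrapper::NewIterator (also in dbwrapper.cpp) can then construct the IteratorImpl
// from the leveldb handle and hand it to CDBIterator.
```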
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Hide the leveldb::WriteBatch member variable with a pimpl in order not
to expose it directly in the header.
Also move CDBBatch::Clear to the dbwrapper implementation to use the new
impl_batch.
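Sketched the same way for the batch, with illustrative names (the real member is presumably the impl_batch mentioned above):

```cpp
// Sketch only: leveldb::WriteBatch is hidden behind a forward-declared impl struct,
// and Clear() moves out of the header because it now needs the complete impl type.

// --- dbwrapper.h ---
#include <memory>

class CDBBatch
{
    struct WriteBatchImpl;                          // wraps leveldb::WriteBatch, defined in dbwrapper.cpp
    const std::unique_ptr<WriteBatchImpl> m_impl_batch;

public:
    CDBBatch();
    ~CDBBatch();
    void Clear();
};

// --- dbwrapper.cpp ---
#include <leveldb/write_batch.h>

struct CDBBatch::WriteBatchImpl {
    leveldb::WriteBatch batch;
};

CDBBatch::CDBBatch() : m_impl_batch{std::make_unique<WriteBatchImpl>()} {}
CDBBatch::~CDBBatch() = default;

void CDBBatch::Clear()
{
    m_impl_batch->batch.Clear();                    // the leveldb-specific call stays in the .cpp
}
```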
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
Keep the generic serialization in the header, while moving
leveldb-specifics to the implementation file.
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
f054bd072a refactor: use "if constexpr" in std::vector's Unserialize() (Martin Leitner-Ankerl)
088caa68fb refactor: use "if constexpr" in std::vector's Serialize() (Martin Leitner-Ankerl)
0fafaca4d3 refactor: use "if constexpr" in prevector's Unserialize() (Martin Leitner-Ankerl)
c8839ec5cd refactor: use "if constexpr" in prevector's Serialize() (Martin Leitner-Ankerl)
1403d181c1 refactor: use fold expressions instead of recursive calls in UnserializeMany() (Martin Leitner-Ankerl)
bd08a008b4 refactor: use fold expressions instead of recursive calls in SerializeMany() (Martin Leitner-Ankerl)
Pull request description:
This simplifies the serialization code a bit and should also make it a bit faster.
* use fold expressions instead of recursive calls. This simplifies the code, most likely makes it faster because it reduces the number of function calls, and compiles faster because there are fewer template instantiations.
* use `if constexpr` instead of unnecessarily creating a temporary object only to call the right overload. This is used for `std::vector` and `prevector` serialization.
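As a rough illustration of the second bullet, a simplified sketch of the `if constexpr` dispatch for `std::vector` (the real code in serialize.h also handles `bool` vectors and relies on the project's helpers such as `WriteCompactSize` and `MakeByteSpan`):

```cpp
#include <type_traits>
#include <vector>
// plus the project's serialize.h / span.h for WriteCompactSize and MakeByteSpan

template <typename Stream, typename T, typename A>
void Serialize(Stream& s, const std::vector<T, A>& v)
{
    WriteCompactSize(s, v.size());
    if constexpr (std::is_same_v<T, unsigned char>) {
        // Byte vectors can be written in a single call instead of element by element,
        // and no temporary object is needed just to pick the right overload.
        if (!v.empty()) s.write(MakeByteSpan(v));
    } else {
        for (const T& elem : v) {
            Serialize(s, elem);
        }
    }
}
```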
ACKs for top commit:
MarcoFalke:
only change is to add a missing `&`. lgtm, re-ACK f054bd072a📦
jonatack:
ACK f054bd072a
sipa:
utACK f054bd072a
john-moffett:
ACK f054bd072a
Tree-SHA512: 0417bf2d6be486c581732297945449211fc3481bac82964e27628b38ef55a47dfa58d730148aeaf1b19fa8eb1076489cc646ceebb178162a9afa59034601501d
It is incorrect to assert that `cache.HaveCoin()` will always be `true`
if `backend.HaveCoin()` is. The coin could well have been marked as
spent in the cache but not yet flushed, in which case `cache.HaveCoin()`
would return `false`.
Note this was never hit because `exists_using_have_coin_in_backend` is
currently never `true` (it's the default implementation of `CCoinsView`).
However, this might change if we were to add a target where the backend
is a `CCoinsViewDB`.
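A self-contained toy illustration of why the implication fails (not the real coins classes): the cache can hold an unflushed "spent" entry that shadows a coin still present in the backend.

```cpp
#include <cassert>
#include <set>
#include <unordered_map>

struct ToyBackend {
    std::set<int> coins;                               // outpoints that exist "on disk"
    bool HaveCoin(int outpoint) const { return coins.count(outpoint) > 0; }
};

struct ToyCache {
    const ToyBackend& backend;
    std::unordered_map<int, bool> entries;             // outpoint -> spent flag, not yet flushed
    bool HaveCoin(int outpoint) const
    {
        if (auto it = entries.find(outpoint); it != entries.end()) return !it->second;
        return backend.HaveCoin(outpoint);
    }
    void SpendCoin(int outpoint) { entries[outpoint] = true; }
};

int main()
{
    ToyBackend backend{{42}};
    ToyCache cache{backend};
    cache.SpendCoin(42);                               // spent in the cache, backend untouched until flush
    assert(backend.HaveCoin(42));                      // backend still has the coin...
    assert(!cache.HaveCoin(42));                       // ...so asserting cache.HaveCoin() here would fail
}
```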
Diversify outbound connections with respect to
networks: Every ~5 minutes, try to add an extra connection
to a reachable network which we currently don't have a connection to.
This is done defensively - only attempt this extra network-specific
connection management after all existing outbound slots are filled.
The resulting situation with an extra outbound peer will be handled
by the extra outbound eviction logic, which protects peers from
eviction if they are the only ones for their network.
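A minimal sketch of the selection rule only, with invented names (the actual CConnman logic also handles timing and address selection):

```cpp
#include <map>
#include <optional>
#include <vector>

enum class Network { IPV4, IPV6, ONION, I2P, CJDNS };

// Given that all regular outbound slots are already filled, pick a reachable network to
// which we currently have no outbound connection; the caller then tries one extra
// connection to it (roughly every ~5 minutes).
std::optional<Network> PickExtraNetwork(const std::vector<Network>& reachable,
                                        const std::map<Network, int>& outbound_count)
{
    for (Network net : reachable) {
        const auto it = outbound_count.find(net);
        if (it == outbound_count.end() || it->second == 0) return net;
    }
    return std::nullopt;   // every reachable network already has at least one connection
}
```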
Co-authored-by: Amiti Uttarwar <amiti@uttarwar.org>
If a peer is the only one of its network, protect it from eviction.
This improves the diversity of outbound connections with respect to
reachable networks.
Co-authored-by: Amiti Uttarwar <amiti@uttarwar.org>
Connman uses this new map to keep a count of active OUTBOUND_FULL_RELAY and
MANUAL connections. Unused until next commit.
Co-authored-by: Martin Zumsande <mzumsande@gmail.com>
faafc35a77 doc: Clarify that -datacarriersize applies to the full raw scriptPubKey, not the data push (MarcoFalke)
55550e7fe7 test: Add -datacarriersize=2 tests (MarcoFalke)
Pull request description:
Clarify with a test that `-datacarriersize` applies to the serialized size of the scriptPubKey, not the size of the pushed data. So for example,
* `-datacarriersize=2` will reject a `raw(6a01aa)`, even though only one byte is pushed
* `-datacarriersize=0` (or `-datacarrier=0`) will reject a `raw(6a)`, even though no byte is pushed
* `-datacarriersize=0` (or `-datacarrier=0`) will reject a `raw(6a00)`, even though zero bytes are pushed
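A simplified sketch of the rule as documented (not the actual IsStandard code): the limit is compared against the whole serialized scriptPubKey, so the OP_RETURN and push opcodes count too.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

bool DataCarrierAllowed(const std::vector<unsigned char>& script_pub_key,
                        bool accept_datacarrier, std::size_t max_datacarrier_bytes)
{
    if (!accept_datacarrier) return false;                    // -datacarrier=0
    return script_pub_key.size() <= max_datacarrier_bytes;    // full serialized script size
}

int main()
{
    const std::vector<unsigned char> raw{0x6a, 0x01, 0xaa};   // raw(6a01aa): OP_RETURN + 1-byte push
    assert(!DataCarrierAllowed(raw, true, 2));                // rejected at -datacarriersize=2
    const std::vector<unsigned char> bare{0x6a};              // raw(6a): OP_RETURN only
    assert(!DataCarrierAllowed(bare, false, 0));              // rejected with -datacarrier=0
}
```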
ACKs for top commit:
ajtowns:
ACK faafc35a77
instagibbs:
ACK faafc35a77
Tree-SHA512: f01ace02798f596ac2a02461e9f2a6ef91b3b37c976ea0b3bc860e2d3efb0ace0fd8b779dd18249cee7f84ebbe5fd21d8506afd3a15edadc00b843ff3b4aacc7
025fda0a76 fuzz: addrman, avoid `ConsumeDeserializable` when possible (brunoerg)
Pull request description:
Using specific functions like `ConsumeService`, `ConsumeAddress` and `ConsumeNetAddr` may be more effective than using `ConsumeDeserializable`. They always return some value while `ConsumeDeserializable` may return `std::nullopt`.
E.g.: In this part of the code, if `opt_net_addr` is `std::nullopt`, we may have generated the addresses unnecessarily, because we won't be able to use them:
```cpp
std::vector<CAddress> addresses;
LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10000) {
    const std::optional<CAddress> opt_address = ConsumeDeserializable<CAddress>(fuzzed_data_provider);
    if (!opt_address) {
        break;
    }
    addresses.push_back(*opt_address);
}
const std::optional<CNetAddr> opt_net_addr = ConsumeDeserializable<CNetAddr>(fuzzed_data_provider);
if (opt_net_addr) {
    addr_man.Add(addresses, *opt_net_addr, std::chrono::seconds{ConsumeTime(fuzzed_data_provider, 0, 100000000)});
}
```
Also, if we are not calling `Add` effectively, it would also affect other functions that may "depend" on it.
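A sketch of the more direct approach described above, assuming fuzz helpers along the lines of `ConsumeAddress`/`ConsumeNetAddr` that always produce a usable value (same surrounding fuzz-target context as the snippet above):

```cpp
std::vector<CAddress> addresses;
LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10000) {
    addresses.push_back(ConsumeAddress(fuzzed_data_provider));   // never nullopt, so nothing is wasted
}
const CNetAddr source{ConsumeNetAddr(fuzzed_data_provider)};
addr_man.Add(addresses, source, std::chrono::seconds{ConsumeTime(fuzzed_data_provider, 0, 100000000)});
```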
ACKs for top commit:
dergoegge:
Code review ACK 025fda0a76
Tree-SHA512: 02450bec0b084c15ba0cd1cbdfbac067c8fea4ccf27be0c86d54e020f029a6c749a16d8e0558f9d6d35a7ca9db8916f180c872f09474702b5591129e9be0d192
Rather than using a bloom filter to track announced invs, simply allow
a peer to request any tx that entered the mempool prior to the last INV
message we sent them. This also obsoletes the UNCONDITIONAL_RELAY_DELAY.
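A toy sketch of the rule (invented names, not the actual net_processing data structures): the mempool hands out increasing entry sequence numbers, and a peer may fetch any transaction whose entry predates the last INV we sent them.

```cpp
#include <cstdint>
#include <map>

struct ToyPeer {
    uint64_t last_inv_sequence{0};   // mempool sequence number captured when we last sent an INV
};

bool CanServeTx(const std::map<uint64_t, uint64_t>& mempool_entry_seq,   // txid -> entry sequence
                const ToyPeer& peer, uint64_t txid)
{
    const auto it = mempool_entry_seq.find(txid);
    if (it == mempool_entry_seq.end()) return false;     // not in the mempool
    return it->second <= peer.last_inv_sequence;          // entered before our last INV to this peer
}
```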
Instead of recursively calling `UnserializeMany` and peeling off one
argument at a time, use a fold expression. This simplifies the code,
most likely makes it faster because it reduces the number of function
calls, and compiles faster because there are fewer template
instantiations.
Instead of recursively calling `SerializeMany` and peeling off one
argument at a time, use a fold expression. This simplifies the code,
most likely makes it faster because it reduces the number of function
calls, and compiles faster because there are fewer template
instantiations.
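A simplified sketch of the fold-expression form (the real functions live in serialize.h):

```cpp
template <typename Stream, typename... Args>
void SerializeMany(Stream& s, const Args&... args)
{
    // Serialize every argument left to right; no recursion, one template instantiation.
    (::Serialize(s, args), ...);
}

template <typename Stream, typename... Args>
void UnserializeMany(Stream& s, Args&&... args)
{
    (::Unserialize(s, args), ...);
}
```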
Wrap leveldb::DestroyDB in a helper function without exposing
leveldb-specifics.
Also, add missing optional include.
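A minimal sketch of such a wrapper (illustrative): callers only see a path string and a bool, never leveldb::Status or leveldb::Options.

```cpp
#include <leveldb/db.h>
#include <string>

bool DestroyDB(const std::string& path_str)
{
    // Default options are enough for destroying the database files at path_str.
    return leveldb::DestroyDB(path_str, {}).ok();
}
```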
The context of this commit is an effort to decouple the dbwrapper header
file from leveldb includes. To this end, the includes are moved to the
dbwrapper implementation file. This is done as part of the kernel
project to reduce the number of required includes for users of the
kernel.
fa633aa690 streams: Teach AutoFile how to XOR (MarcoFalke)
000019e158 Add AutoFile::detail_fread member function (MarcoFalke)
fa7724bc9d refactor: Modernize AutoFile (MarcoFalke)
fa8d227d58 doc: Remove comments that just repeat what the code does (MarcoFalke)
fafe2ca0ce refactor: Remove redundant file check from AutoFile shift operators (MarcoFalke)
9999a49b32 Extract util::Xor, Add key_offset option, Add bench (MarcoFalke)
Pull request description:
This allows `AutoFile` to roll an XOR pattern while reading or writing to the underlying file.
This is needed for https://github.com/bitcoin/bitcoin/pull/28052, but can also be used in any other place.
Also, there are tests, so I've split this up from the larger pull to make review easier, hopefully.
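A simplified, self-contained sketch of the rolling-XOR idea (std::vector instead of the project's Span type): the key offset keeps the pattern aligned with the absolute file position when a read or write starts mid-file.

```cpp
#include <cstddef>
#include <vector>

void XorBuffer(std::vector<std::byte>& data, const std::vector<std::byte>& key, std::size_t key_offset)
{
    if (key.empty()) return;                 // an empty key means no obfuscation
    std::size_t j = key_offset % key.size();
    for (std::byte& b : data) {
        b ^= key[j];
        if (++j == key.size()) j = 0;        // roll around the key
    }
}
```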
ACKs for top commit:
Crypt-iQ:
crACK fa633aa
willcl-ark:
Lightly tested ACK fa633aa690
jamesob:
reACK fa633aa690 ([`jamesob/ackr/28060.4.MarcoFalke.util_add_xorfile`](https://github.com/jamesob/bitcoin/tree/ackr/28060.4.MarcoFalke.util_add_xorfile))
Tree-SHA512: 6d66cad0a089a096d3f95e4f2b28bce80b349d4b76f53d09dc9a0bea4fc1b7c0652724469c37971ba27728c7d46398a4c1d289c252af4c0f83bb2fcbc6f8e90b
The block index (CBlockTreeDB) is required to write and read blocks, so
move it to blockstorage. This allows dropping the txdb.h include from
`node/blockstorage.h`.
Can be reviewed with:
--color-moved=dimmed-zebra --color-moved-ws=ignore-all-space
bee2d57a65 script: update flake8 to 6.1.0 (Jon Atack)
38c3fd846b test: python E721 updates (Jon Atack)
Pull request description:
Update our functional tests per [E721](https://www.flake8rules.com/rules/E721.html) enforced by [flake8 6.1.0](https://flake8.pycqa.org/en/latest/release-notes/6.1.0.html), and update our CI lint task to use that release. This makes the following linter output on current master with flake8 6.1.0 green.
```
$ ./test/lint/lint-python.py ; ./test/lint/lint-spelling.py
test/functional/p2p_invalid_locator.py:35:16: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
test/functional/test_framework/siphash.py:34:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
test/functional/test_framework/siphash.py:64:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
src/test/fuzz/descriptor_parse.cpp:88: occurences ==> occurrences
^ Warning: codespell identified likely spelling errors. Any false positives? Add them to the list of ignored words in test/lint/spelling.ignore-words.txt
```
ACKs for top commit:
MarcoFalke:
lgtm ACK bee2d57a65
Tree-SHA512: f3788a543ca98e44eeeba1d06c32f1b11eec95d4aef068aa1b6b5c401261adfa3fb6c6d6c769f3fe6839d78e74a310d5c926867e7c367d6513a53d580fd376f3
a733dd79e2 Remove unused function `reliesOnAssumedValid` (Suhas Daftuar)
d4a11abb19 Cache block index entry corresponding to assumeutxo snapshot base blockhash (Suhas Daftuar)
3556b85022 Move CheckBlockIndex() from Chainstate to ChainstateManager (Suhas Daftuar)
0ce805b632 Documentation improvements for assumeutxo (Ryan Ofsky)
768690b7ce Fix initialization of setBlockIndexCandidates when working with multiple chainstates (Suhas Daftuar)
d43a1f1a2f Tighten requirements for adding elements to setBlockIndexCandidates (Suhas Daftuar)
d0d40ea9a6 Move block-storage-related logic to ChainstateManager (Suhas Daftuar)
3cfc75366e test: Clear block index flags when testing snapshots (Suhas Daftuar)
272fbc370c Update CheckBlockIndex invariants for chains based on an assumeutxo snapshot (Suhas Daftuar)
10c05710ce Add wrapper for adding entries to a chainstate's block index candidates (Suhas Daftuar)
471da5f6e7 Move block-arrival information / preciousblock counters to ChainstateManager (Suhas Daftuar)
1cfc887d00 Remove CChain dependency in node/blockstorage (Suhas Daftuar)
fe86a7cd48 Explicitly track maximum block height stored in undo files (Suhas Daftuar)
Pull request description:
This PR proposes a clean up of the relationship between block storage and the chainstate objects, by moving the decision of whether to store a block on disk to something that is not chainstate-specific. Philosophically, the decision of whether to store a block on disk is related to validation rules that do not require any UTXO state; for anti-DoS reasons we were using some chainstate-specific heuristics, and those have been reworked here to achieve the proposed separation.
This PR also fixes a bug in how a chainstate's `setBlockIndexCandidates` was being initialized; it should always have all the HAVE_DATA block index entries that have more work than the chain tip. During startup, we were not fully populating `setBlockIndexCandidates` in some scenarios involving multiple chainstates.
Further, this PR establishes the concept that whenever we have 2 chainstates, we always know the snapshotted chain's base block, and that the base block's hash must be an element of our block index. Given that, we can establish a new invariant that the background validation chainstate only needs to consider blocks leading to that snapshotted block entry as potential candidates for its tip. As a followup I would imagine that when writing net_processing logic to download blocks for the background chainstate, we would use this concept to only download blocks towards the snapshotted entry as well.
ACKs for top commit:
achow101:
ACK a733dd79e2
jamesob:
reACK a733dd79e2 ([`jamesob/ackr/27746.5.sdaftuar.rework_validation_logic`](https://github.com/jamesob/bitcoin/tree/ackr/27746.5.sdaftuar.rework_validation_logic))
Sjors:
Code review ACK a733dd79e2.
ryanofsky:
Code review ACK a733dd79e2. Just suggested changes since the last review. There are various small things that could be followed up on, but I think this is ready for merge.
Tree-SHA512: 9ec17746f22b9c27082743ee581b8adceb2bd322fceafa507b428bdcc3ffb8b4c6601fc61cc7bb1161f890c3d38503e8b49474da7b5ab1b1f38bda7aa8668675
ecfe507e07 fuzz: use `ConnmanTestMsg` in `connman` (brunoerg)
Pull request description:
Fixes #27980
Using `ConnmanTestMsg` we can add nodes and fuzz functions like
`DisconnectNode`, `FindNode`, `GetNodeStats` and others more
effectively.
ACKs for top commit:
MarcoFalke:
review ACK ecfe507e07
dergoegge:
utACK ecfe507e07
Tree-SHA512: 97c363b422809f2e9755c082d1102237347abfab72c7baca417bd8975f8a595ddf3a085f8353dbdb9f17fb98fbfe830792bfc0b83451168458018faf6c239efa
fabef121b0 refactor: Use EnsureAnyNodeContext (MarcoFalke)
fa1640617e test: Add SyncWithValidationInterfaceQueue to mockscheduler RPC (MarcoFalke)
Pull request description:
There should be no risk or downside in adding a call to `SyncWithValidationInterfaceQueue` here. In fact, it will make tests less brittle. For example,
* If one sets the timeouts in `test/functional/feature_fee_estimation.py` to `0`, on `master` the test will fail and here it will pass.
* It may avoid a rare (theoretic) intermittent issue in https://github.com/bitcoin/bitcoin/pull/28108/files#r1268966663
ACKs for top commit:
TheCharlatan:
ACK fabef121b0
furszy:
Code review ACK fabef121. Convinced by checking all current tests usages.
Tree-SHA512: c9e9a536a8721d1b3f267a66b40578b34948892301affdcad121ef8e02bf17037305d0dd53aa94b1b064753e66f9cfb31823b916b707a9d812627f502b818003
06199a995f refactor: Revert addition of univalue sighash string check (TheCharlatan)
0b47c16215 doc: Correct release-notes for sighashtype exceptions (TheCharlatan)
Pull request description:
This is a follow up for #28113.
The string type check is already done by the rpc parser / RPCHelpMan, so re-doing it adds dead code. Instead, throwing an exception when the assumption does not hold is already the correct behavior. Pointed out in this [comment](https://github.com/bitcoin/bitcoin/pull/28113/files#r1274568557).
Also correct the release note for the sighashtype exception change. There is no change in the handling of non-string sighashtype arguments. Pointed out in this [comment](https://github.com/bitcoin/bitcoin/pull/28113/files#r1274567555).
ACKs for top commit:
MarcoFalke:
lgtm ACK 06199a995f
jonatack:
Tested ACK 06199a995f
stickies-v:
ACK 06199a995f
Tree-SHA512: 3faa6b3d2247624c0973df8d79c09fbf1f90ffb99f1be484e359b528f485c31affea45976759bd206e4c81cbb54ebba5ad0ef4127d1deacbfe2a58153fcc94ee
fa940f41ea Remove unused raw-pointer read helper from univalue (MarcoFalke)
Pull request description:
The helpers are unused outside of tests and redundant with the existing `bool read(std::string_view raw);`.
Fix both issues by removing them.
Also, simplify the tests code by removing a `std::string` constructor where possible.
ACKs for top commit:
stickies-v:
utACK fa940f41ea
TheCharlatan:
tACK fa940f41ea
Tree-SHA512: 60c154c1046f01551335af79bf820a6104844f63e89977271b4336b3cd06ac3bab1379e18b7bc61d12bef7446029e91c16541ddecf9e88bc8bc897fc1f6ee2c8
131314b62e fuzz: increase coverage of the descriptor targets (Antoine Poinsot)
90a24741e7 fuzz: add a new, more efficient, descriptor parsing target (Antoine Poinsot)
d60229ede5 fuzz: make the parsed descriptor testing into a function (Antoine Poinsot)
Pull request description:
The current descriptor parsing fuzz target requires valid public or private keys to be provided. This is unnecessary as we are only interested in fuzzing the descriptor parsing logic here (other targets are focused on fuzzing keys serializations). And it's pretty inefficient, especially for formats that need a checksum (`xpub`, `xprv`, WIF).
This introduces a new target that mocks the keys as an index in a list of precomputed keys. Keys are represented as 2 hex characters in the descriptor. The key type (private, public, extended, ..) is deterministically based on this one-byte value. Keys are deterministically generated at target initialization. This is much more efficient and also largely reduces the size of the seeds.
TL;DR: for instance instead of requiring the fuzzer to generate a `pk(xpub6DdBu7pBoyf7RjnUVhg8y6LFCfca2QAGJ39FcsgXM52Pg7eejUHLBJn4gNMey5dacyt4AjvKzdTQiuLfRdK8rSzyqZPJmNAcYZ9kVVEz4kj)` to parse a valid descriptor, it just needs to generate a `pk(03)`.
Note we only mock the keys themselves, not the entire descriptor key expression, as we want to fuzz the real code that parses the rest of the key expression (origin, derivation paths, ..).
This is a target I used for reviewing #17190 and #27255, and figured it was worth PR'ing on its own since the added complexity for mocking the keys is minimal and it could help prevent introducing bugs to the descriptor parsing logic much more efficiently.
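A toy sketch of the key-mocking idea (invented helper, not the target's actual code): two hex characters select one of the precomputed keys, so the fuzzer never has to produce a valid key encoding itself.

```cpp
#include <cctype>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

std::optional<std::string> MockedKey(const std::string& two_hex_chars,
                                     const std::vector<std::string>& precomputed_keys)
{
    if (two_hex_chars.size() != 2 ||
        !std::isxdigit(static_cast<unsigned char>(two_hex_chars[0])) ||
        !std::isxdigit(static_cast<unsigned char>(two_hex_chars[1]))) {
        return std::nullopt;
    }
    const std::size_t index = std::stoul(two_hex_chars, nullptr, 16);   // e.g. "03" -> 3
    if (index >= precomputed_keys.size()) return std::nullopt;
    return precomputed_keys[index];   // e.g. a compressed pubkey or xpub prepared at init time
}
```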
ACKs for top commit:
MarcoFalke:
re-ACK 131314b62e🐓
achow101:
ACK 131314b62e
Tree-SHA512: 485a8d6a0f31a3a132df94dc57f97bdd81583d63507510debaac6a41dbbb42fa83c704ff3f2bd0b78c8673c583157c9a3efd79410e5e79511859e1470e629118