Ava Chow a52837b9e9
Merge bitcoin/bitcoin#29575: net_processing: make any misbehavior trigger immediate discouragement
6eecba475e net_processing: make MaybePunishNodeFor{Block,Tx} return void (Pieter Wuille)
ae60d485da net_processing: remove Misbehavior score and increments (Pieter Wuille)
6457c31197 net_processing: make all Misbehaving increments = 100 (Pieter Wuille)
5120ab1478 net_processing: drop 8 headers threshold for incoming BIP130 (Pieter Wuille)
944c54290d net_processing: drop Misbehavior for unconnecting headers (Pieter Wuille)
9f66ac7cf1 net_processing: do not treat non-connecting headers as response (Pieter Wuille)

Pull request description:

  So far, discouragement of peers triggers when their misbehavior score exceeds 100 points. Most types of misbehavior increment the score by 100, triggering immediate discouragement, but some types do not. This PR makes all increments equal to either 100 (meaning any misbehavior will immediately cause disconnection and discouragement) or 0 (making the behavior effectively unconditionally allowed), and then removes the logic for score accumulation.

  This simplifies the code a bit, but also makes protocol expectations clearer: if a peer misbehaves, they get disconnected. There is no good reason why certain types of protocol violations should be permitted 4 times (howmuch=20) or 9 times (howmuch=10), while many others are never allowed. Furthermore, the distinction between these looks arbitrary.

  The specific types of misbehavior that are changed to 100 are:
  * Sending us a `block` which does not connect to our header tree (which must therefore have been unsolicited). [used to be score 10]
  * Sending us a `headers` with a non-continuous headers sequence. [used to be score 20]
  * Sending us more than 1000 addresses in a single `addr` or `addrv2` message [used to be score 20]
  * Sending us more than 50000 invs in a single `inv` message [used to be score 20]
  * Sending us more than 2000 headers in a single `headers` message [used to be score 20]

  The specific types of misbehavior that are changed to 0 are:
  * Sending us 10 (*) separate BIP130 headers announcements that do not connect to our block tree [used to be score 20]
  * Sending us more than 8 headers in a single `headers` message (which thus does not get treated as a BIP130 announcement) that does not connect to our block tree. [used to be score 10]

  I believe all of these behaviors are avoidable, except for the one marked (*), which can in theory still happen due to the interaction between BIP130 and variations in system clocks (the maximum 2 hours in the future rule). That one has been removed entirely. To eliminate the impact of the bug it was designed to deal with, without relying on misbehavior, a separate improvement is included that makes `getheaders`-tracking more accurate.

  In another unrelated improvement, this also gets rid of the 8 header limit heuristic to determine whether an incoming non-connecting `headers` is a potential BIP130 announcement, as this rule is no longer needed to prevent spurious Misbehavior. Instead, any non-connecting `headers` is now treated as a potential announcement.

ACKs for top commit:
  sr-gi:
    ACK 6eecba475e
  achow101:
    ACK 6eecba475e
  mzumsande:
    Code Review ACK 6eecba475e
  glozow:
    light code review / concept ACK 6eecba475e

Tree-SHA512: e11e8a652c4ec048d8961086110a3594feefbb821e13f45c14ef81016377be0db44b5311751ef635d6e026def1960aff33f644e78ece11cfb54f2b7daa96f946
2024-06-20 13:28:38 -04:00
functional Merge bitcoin/bitcoin#29575: net_processing: make any misbehavior trigger immediate discouragement 2024-06-20 13:28:38 -04:00
fuzz Merge bitcoin/bitcoin#29421: net: make the list of known message types a compile time constant 2024-05-21 13:59:33 -04:00
lint Support running individual lint checks 2024-06-04 09:13:44 -04:00
sanitizer_suppressions util: add BitSet 2024-06-10 07:54:48 -04:00
util tests: Add unit tests for bitcoin-tx replaceable command 2023-12-08 20:27:13 -05:00
config.ini.in Merge bitcoin/bitcoin#27932: test: Fuzz on macOS 2023-06-29 13:08:58 +01:00
get_previous_releases.py Merge bitcoin/bitcoin#28027: test: Fixes and updates to wallet_backwards_compatibility.py for 25.0 and descriptor wallets 2023-10-05 16:51:15 +01:00
README.md doc: move-only lint docs to one place 2024-01-16 10:54:14 +01:00

This directory contains integration tests that test bitcoind and its utilities in their entirety. It does not contain unit tests, which can be found in /src/test, /src/wallet/test, etc.

This directory contains the following sets of tests:

  • fuzz, a runner to execute all fuzz targets from /src/test/fuzz.
  • functional, which tests the functionality of bitcoind and bitcoin-qt by interacting with them through the RPC and P2P interfaces.
  • util, which tests the utilities (bitcoin-util, bitcoin-tx, ...).
  • lint, which performs various static analysis checks.

The util tests are run as part of the make check target. The fuzz tests, functional tests, and lint scripts can be run as explained in the sections below.

Running tests locally

Before tests can be run locally, Bitcoin Core must be built. See the building instructions for help.

Fuzz tests

See /doc/fuzzing.md

Functional tests

Dependencies and prerequisites

The ZMQ functional test requires the Python ZMQ library. To install it:

  • on Unix, run sudo apt-get install python3-zmq
  • on macOS, run pip3 install pyzmq
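
To check that the bindings are importable, you can run a quick one-liner (both python3-zmq and pyzmq install the module as zmq; zmq_version() reports the version of the underlying libzmq library):

python3 -c "import zmq; print(zmq.zmq_version())"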

On Windows the PYTHONUTF8 environment variable must be set to 1:

set PYTHONUTF8=1

Running the tests

Individual tests can be run by directly calling the test script, e.g.:

test/functional/feature_rbf.py

or can be run through the test_runner harness, e.g.:

test/functional/test_runner.py feature_rbf.py

You can run any combination (incl. duplicates) of tests by calling:

test/functional/test_runner.py <testname1> <testname2> <testname3> ...

Wildcard test names can be passed if the paths are coherent and the test runner is called from a bash shell (or similar) that performs the globbing. For example, to run all the wallet tests:

test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet* (called from the test/ directory)
test_runner.py wallet* (called from the test/functional/ directory)

but not

test/functional/test_runner.py wallet*

Combinations of wildcards can be passed:

test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*

Run the regression test suite with:

test/functional/test_runner.py

Run all possible tests with:

test/functional/test_runner.py --extended

In order to run backwards compatibility tests, first run:

test/get_previous_releases.py -b

to download the necessary previous release binaries.

By default, up to 4 tests will be run in parallel by test_runner. To specify how many jobs to run, append --jobs=n

The individual tests and the test_runner harness have many command-line options. Run test/functional/test_runner.py -h to see them all.

Speed up test runs with a RAM disk

If you have spare RAM on your system, you can create a RAM disk to use as the cache and tmp directories for the functional tests in order to speed them up. The speed-up varies by system (and depends on RAM speed, among other variables), but a 2-3x speed-up is not uncommon.

Linux

To create a 4 GiB RAM disk at /mnt/tmp/:

sudo mkdir -p /mnt/tmp
sudo mount -t tmpfs -o size=4g tmpfs /mnt/tmp/

Configure the size of the RAM disk using the size= option. The RAM disk size needed scales with the number of concurrent jobs the test suite runs. For example, running the test suite with --jobs=100 might need a 4 GiB RAM disk, while running with --jobs=32 will only need about 2.5 GiB.

To use, run the test suite specifying the RAM disk as the cachedir and tmpdir:

test/functional/test_runner.py --cachedir=/mnt/tmp/cache --tmpdir=/mnt/tmp

Once you are finished with the tests and the disk, unmount it to free the RAM:

sudo umount /mnt/tmp

macOS

To create a 4 GiB RAM disk named "ramdisk" at /Volumes/ramdisk/:

diskutil erasevolume HFS+ ramdisk $(hdiutil attach -nomount ram://8388608)

Configure the RAM disk size, expressed as a number of 512-byte blocks, at the end of the command (4096 MiB * 2048 blocks/MiB = 8388608 blocks for 4 GiB). To run the tests using the RAM disk:

test/functional/test_runner.py --cachedir=/Volumes/ramdisk/cache --tmpdir=/Volumes/ramdisk/tmp

To unmount:

umount /Volumes/ramdisk

Troubleshooting and debugging test failures

Resource contention

The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make conflicts with other processes unlikely. However, if there is another bitcoind process running on the system (perhaps from a previous test which hasn't successfully killed all its bitcoind nodes), then there may be a port conflict which will cause the test to fail. It is recommended that you run the tests on a system where no other bitcoind processes are running.

On Linux, the test framework will warn if there is another bitcoind process running when the tests are started.

If there are zombie bitcoind processes after a test failure, you can kill them by running the following commands. Note that these commands will kill all bitcoind processes running on the system, so they should not be used if any non-test bitcoind processes are running.

killall bitcoind

or

pkill -9 bitcoind

Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a functional test is run and is stored in test/cache. This speeds up test startup times since new blockchains don't need to be generated for each test. However, the cache may get into a bad state, in which case tests will fail. If this happens, remove the cache directory (and make sure bitcoind processes are stopped as above):

rm -rf test/cache
killall bitcoind

Test logging

The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR and CRITICAL). From within your functional tests you can log to these different levels using the logger included in the test_framework, e.g. self.log.debug(object); see the sketch after the list below. By default:

  • when run through the test_runner harness, all logs are written to test_framework.log and no logs are output to the console.
  • when run directly, all logs are written to test_framework.log and INFO level and above are output to the console.
  • when run by our CI (Continuous Integration), no logs are output to the console. However, if a test fails, the test_framework.log and bitcoind debug.logs will all be dumped to the console to help with troubleshooting.
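
For example, within a test's run_test you can mix levels like this (a minimal sketch; getblockcount is a standard node RPC, and self.log is a regular Python logging.Logger, so %-style formatting works):

def run_test(self):
    self.log.debug("querying node0 for its height")   # DEBUG: written to test_framework.log only by default
    height = self.nodes[0].getblockcount()
    self.log.info("node0 is at height %d", height)    # INFO: also echoed to the console when run directly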

These log files can be located under the test data directory (which is always printed in the first line of test output):

  • <test data directory>/test_framework.log
  • <test data directory>/node<node number>/regtest/debug.log.

The node number identifies the relevant test node. Numbering starts from node0 and corresponds to the node's position in the nodes list of the specific test, e.g. self.nodes[0].

To change the level of logs output to the console, use the -l command line argument.

test_framework.log and bitcoind debug.logs can be combined into a single aggregate log by running the combine_logs.py script. The output can be plain text, colorized text, or HTML. For example:

test/functional/combine_logs.py -c <test data directory> | less -r

will pipe the colorized logs from the test into less.

Use --tracerpc to trace all RPC calls and responses to the console. For some tests (e.g. any that use submitblock to submit a full block over RPC), this can result in a lot of screen output.

By default, the test data directory will be deleted after a successful run. Use --nocleanup to leave the test data directory intact. The test data directory is never deleted after a failed test.

Attaching a debugger

A python debugger can be attached to tests at any point. Just add the line:

import pdb; pdb.set_trace()

anywhere in the test. You will then be able to inspect variables, as well as call methods that interact with the bitcoind nodes-under-test.
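
For example, once the breakpoint is hit you can issue RPC calls straight from the pdb prompt (a hypothetical session; a height of 200 corresponds to the freshly cached chain described above):

(Pdb) self.nodes[0].getblockcount()
200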

If further introspection of the bitcoind instances themselves becomes necessary, this can be accomplished by first setting a pdb breakpoint at an appropriate location, running the test to that point, then using gdb (or lldb on macOS) to attach to the process and debug.

For instance, to attach to self.nodes[1] during a run, you can get the pid of the node from within pdb:

(Pdb) self.nodes[1].process.pid

Alternatively, you can find the pid by inspecting the temp folder for the specific test you are running. The path to that folder is printed at the beginning of every test run:

2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3

Use the path to find the pid file in the temp folder:

cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid

Then you can use the pid to start gdb:

gdb /home/example/bitcoind <pid>

Note: the gdb attach step may require ptrace_scope to be modified, or sudo to precede the gdb command. See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt

Often while debugging RPC calls in functional tests, the test might time out before the process can return a response. Use --timeout-factor 0 to disable all RPC timeouts for that particular functional test, e.g. test/functional/wallet_hd.py --timeout-factor 0.

Profiling

An easy way to profile node performance during functional tests is provided for Linux platforms using perf.

Perf will sample the running node and will generate profile data in the node's datadir. The profile data can then be presented using perf report or a graphical tool like hotspot.

To generate a profile during test suite runs, use the --perf flag.

To render the output as text, run

perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less

For ways to generate more granular profiles, see the README in test/functional.

Util tests

Util tests can be run locally by running test/util/test_runner.py. Use the -v option for verbose output.
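
For example:

test/util/test_runner.py -v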

Lint tests

See the README in test/lint.

Writing functional tests

You are encouraged to write functional tests for new or existing features. Further information about the functional test framework and individual tests is found in test/functional.
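
As a starting point, a new functional test is a Python script that subclasses BitcoinTestFramework and overrides set_test_params and run_test. The following is a minimal sketch modeled on test/functional/example_test.py; the exact boilerplate may differ between versions, so check that file for the current conventions:

#!/usr/bin/env python3
"""A hypothetical minimal functional test."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal

class MinimalExampleTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1  # spin up a single bitcoind node-under-test

    def run_test(self):
        self.log.info("Mining one block on top of the cached chain")
        self.generate(self.nodes[0], 1)
        assert_equal(self.nodes[0].getblockcount(), 201)  # 200 cached blocks + 1

if __name__ == '__main__':
    MinimalExampleTest().main()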