This page contains information about benchmarks that are or were useful for solving issues. The issue that first motivated the creation of this page is #8885.
## Benchmark tests
## Calibration
pytest-benchmark has two modes of [calibration](https://pytest-benchmark.readthedocs.io/en/stable/calibration.html) (a minimal sketch of both modes follows this list):

* **automatic:** by default, pytest-benchmark will try to run your function as many times as needed to fit a 10 x `TIMER_RESOLUTION` period.
* **pedantic:** defaults to `rounds=1`, `warmup_rounds=0` and `iterations=1`.
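As a minimal sketch of the two modes, the snippet below shows one test using the automatic calibration of the `benchmark` fixture and one pinning the pedantic defaults via `benchmark.pedantic`. The `put_doc` function and its payload are hypothetical stand-ins, not part of this project's test suite:

```python
import hashlib


def put_doc(payload: bytes) -> str:
    """Hypothetical stand-in for the operation being benchmarked."""
    return hashlib.sha256(payload).hexdigest()


def test_put_doc_automatic(benchmark):
    # Automatic mode: pytest-benchmark calibrates rounds/iterations by
    # itself, running the function as many times as needed.
    result = benchmark(put_doc, b"x" * 1024)
    assert len(result) == 64


def test_put_doc_pedantic(benchmark):
    # Pedantic mode: rounds, warmup_rounds and iterations are set
    # explicitly (the values shown are the documented defaults).
    result = benchmark.pedantic(
        put_doc,
        args=(b"x" * 1024,),
        rounds=1,
        warmup_rounds=0,
        iterations=1,
    )
    assert len(result) == 64
```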
## Tests table
## Configuration
Configuration used for all tests:
* `warmup_rounds=0` (default).
* `iterations=1` (default).
* `--benchmark-min-rounds=5` (default).
* `--benchmark-min-time=0.000005` (default).
* for pytest-benchmark, the precision of `time.time` is `9.5367431640625e-07 s`, i.e. about $`10^{-6}`$ seconds (obtained by running `pytest_benchmark.timers.compute_timer_precision(time.time)`; see the sketch after this list).
* `pytest-benchmark` will try to run the function until it takes at least around $`10^{-5}`$ seconds (0.01 ms).
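A short sketch of how the timer precision figure above can be reproduced, using the helper named in the note (the module path is the one given there; it is an internal pytest-benchmark helper, not a documented public API):

```python
import time

# Internal helper referenced in the configuration note above.
from pytest_benchmark.timers import compute_timer_precision

# Measures the smallest observable difference of the given timer;
# on the machine used for these tests this printed ~9.5367431640625e-07 s.
precision = compute_timer_precision(time.time)
print(f"time.time precision: {precision} s")
```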
## Mean time table
The following table was taken from a benchmark run executed by the CI on top of (effectively) the current master branch: https://0xacab.org/drebs/soledad/-/jobs/13422