tests/run-perfbench.py: Skip large tests based on bm_params. by dpgeorge · Pull Request #19128 · micropython/micropython · GitHub
tests/run-perfbench.py: Skip large tests based on bm_params.#19128

Open
dpgeorge wants to merge 1 commit into micropython:master from dpgeorge:tests-run-perfbench-skip-too-large

Conversation

@dpgeorge
Member

Summary

A large benchmark test cannot run on small targets if the target doesn't have enough RAM to load the test, or to run it. Prior to the change in this commit, run-perfbench.py had an explicit list of tests which were too large for small targets to load, and bm_params took care of deciding if the target had enough RAM to run the test.

Having an explicit list for large test scripts is not very general. This commit improves the situation by using bm_params to also decide if the target will be able to load the test. It does this by using a regex to search for bm_params and extracting the first pair of N/M values. The M is the minimum memory the target needs in order to run the test, and the test will be skipped entirely (not even loaded) if the target has less than that minimum.
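The mechanism described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual patch: the regex, function names, and the assumption that the first `bm_params` key is an `(N, M)` tuple of integer literals are mine.

```python
import re

# Hypothetical sketch: find the first (N, M) key of a test's bm_params
# dict by scanning the source text, without exec()-ing the script
# (some tests cannot run under CPython, e.g. those using
# @micropython.viper).
BM_PARAMS_RE = re.compile(r"bm_params\s*=\s*\{\s*\(\s*(\d+)\s*,\s*(\d+)\s*\)")


def first_bm_params(source):
    """Return the first (N, M) pair found in bm_params, or None."""
    m = BM_PARAMS_RE.search(source)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))


def should_skip(source, target_mem):
    """Skip the test entirely (without loading it) if the target has
    less memory than the minimum M named by the first bm_params key.
    The unit of target_mem is whatever unit the tests use for M."""
    params = first_bm_params(source)
    return params is not None and target_mem < params[1]
```

With this approach the host only ever reads the test source as text, so the decision to skip is made before any bytes are sent to the target.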

This means that tests that are too big to load on the target are now properly skipped, instead of being downloaded to the target and failing with a `MemoryError`.

Note: it's not really possible to exec the test script on the host to extract bm_params because some tests cannot run under CPython.

Testing

Tested on PYBV10, PYBD_SF6 and ESP32_GENERIC_C3. All tests still run (none are skipped).

Tested on ADAFRUIT_ITSYBITSY_M0_EXPRESS using N=48 and M=20. Eight tests are now skipped because they are too large, and the test run completes without failure.

Trade-offs and Alternatives

Instead of using a regex, I considered exec()-ing the code on the host to extract `bm_params`, but as mentioned above that can't work because CPython cannot execute some of the tests (e.g. those using `@micropython.viper`).
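The failure mode of the exec() alternative can be shown with a small, hypothetical example (the test source and helper below are mine; they assume a stock CPython host with no `micropython` module installed):

```python
# A perf test that uses MicroPython-only features, such as the
# @micropython.viper decorator, cannot run under stock CPython,
# so its bm_params can't be recovered by exec()-ing it.
VIPER_TEST_SRC = """
import micropython

@micropython.viper
def f(x: int) -> int:
    return x * 2

bm_params = {(32, 10): (5, 2)}
"""


def try_extract_by_exec(src):
    """Try to exec the test source and read bm_params from its globals."""
    ns = {}
    try:
        exec(src, ns)
    except ImportError:
        # `import micropython` fails on a host without that module.
        return None
    return ns.get("bm_params")


print(try_extract_by_exec(VIPER_TEST_SRC))
```

The exec() call never reaches the `bm_params` assignment because the `import micropython` line raises first, which is why scanning the source text is the more robust option.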

Generative AI

I did not use generative AI tools when creating this PR.

Signed-off-by: Damien George <damien@micropython.org>
dpgeorge added the `tests` label (Relates to tests/ directory in source) on Apr 20, 2026
@codecov

codecov Bot commented Apr 20, 2026
