Python perf module

The Python perf module is a toolkit to write, run and analyze benchmarks.

Documentation:

Features of the perf module:

  • Simple API to run reliable benchmarks: see examples.
  • Automatically calibrate a benchmark for a time budget.
  • Spawn multiple worker processes.
  • Compute the mean and standard deviation.
  • Detect if a benchmark result seems unstable: see the perf check command.
  • perf stats command to analyze the distribution of benchmark results (min/max, mean, median, percentiles, etc.).
  • perf compare_to command tests whether a difference is significant. It supports comparisons between multiple benchmark suites (each made of multiple benchmarks).
  • perf timeit command-line tool for quick but reliable Python microbenchmarks.
  • perf system tune command to tune your system to run stable benchmarks.
  • Automatically collect metadata on the computer and the benchmark: use the perf metadata command to display them, or the perf collect_metadata command to manually collect them.
  • --track-memory and --tracemalloc options to track the memory usage of a benchmark.
  • JSON format to store benchmark results.
  • Support for multiple units: seconds, bytes and integers.
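As a minimal sketch of the API mentioned above (it assumes the perf module is installed, e.g. with `pip install perf`; the benchmark name and workload function are arbitrary examples):

```python
#!/usr/bin/env python3
# Minimal perf benchmark script (assumes "pip install perf").
# Runner() handles calibration, worker processes and command-line
# options automatically.
import perf


def bench_sorted():
    # Example workload to measure: sorting a small list.
    sorted(range(1000), reverse=True)


runner = perf.Runner()
# bench_func() calibrates the benchmark for the time budget, runs it
# in multiple worker processes, and collects the results.
runner.bench_func('sort_1000', bench_sorted)
```

Running such a script with `-o bench.json` stores the results in the JSON format, which the perf stats and perf compare_to commands can then analyze.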

Quick Links:

Other Python benchmark projects: