Description
There are two ways in which "real work" that we'd want to measure ends up in a subprocess. Neither of these cases presently collects pystats in the subprocess.
- Using the `pyperf.Runner.bench_command` API (e.g. `python_startup`). These should be handled automatically by this update to pyperf; we just need a new pyperf release and then update pyperformance and bench_runner to use it.
- The benchmark itself fires off a subprocess to run something like a webserver (e.g. `djangocms`). These benchmarks will need to be individually updated to enable pystats in the subprocess under the right circumstances. It should basically amount to checking for `--hook pystats` on the command line and then setting the environment variable `PYTHONSTATS=1` in the subprocess. I can't think of a way to do that automatically for all of these, but there aren't that many, so we should just update them individually. (This also requires the pyperf update above, first.)
Cc: @brandtbucher