
investigate performance of memory pool for coverage tracer #479

Open
@0xalpharush

Description


The coverage tracer currently deallocates and reallocates memory very frequently, potentially upwards of 24 KB per contract touched during a transaction's execution (the PC coverage tracker is the length of the code, and EVM code-size checks are often disabled). This design contributes, in part, to Medusa spending a lot of time in the GC runtime. Eliminating this overhead may increase the throughput of the fuzzer.

We could explore giving each worker a sync.Pool of `bytes.Buffer`s that are cleared and reused, allocating only when the pool is exhausted (see the sketch after the current implementation below). I'm not sure whether this approach would allow workers to be long-lived rather than frequently restarted when the worker reset limit is reached.

```go
func (t *CoverageTracer) OnTxStart(vm *tracing.VMContext, tx *coretypes.Transaction, from common.Address) {
	// Reset our call frame states; these allocations happen on every tx.
	t.callDepth = 0
	t.coverageMaps = NewCoverageMaps()
	t.callFrameStates = make([]*coverageTracerCallFrameState, 0)
	t.evmContext = vm
}
```
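
For reference, a minimal sketch of the pooled approach, assuming each worker owns its own pool; the names `bufferPool`, `acquireBuffer`, and `releaseBuffer` are hypothetical, not Medusa's actual API:

```go
package coverage

import (
	"bytes"
	"sync"
)

// bufferPool recycles coverage buffers across transactions so OnTxStart can
// reuse memory instead of allocating fresh maps/slices on every tx.
// sync.Pool only invokes New when no recycled buffer is available.
var bufferPool = sync.Pool{
	New: func() any {
		// Pre-size to a typical worst-case code length (assumption: ~24 KB)
		// to avoid growth reallocations.
		return bytes.NewBuffer(make([]byte, 0, 24*1024))
	},
}

// acquireBuffer fetches a cleared buffer from the pool.
func acquireBuffer() *bytes.Buffer {
	buf := bufferPool.Get().(*bytes.Buffer)
	buf.Reset() // drop any stale coverage data before reuse
	return buf
}

// releaseBuffer returns a buffer to the pool once a tx's coverage is flushed.
func releaseBuffer(buf *bytes.Buffer) {
	bufferPool.Put(buf)
}
```

`OnTxStart` would then draw from the pool instead of calling `make`/`NewCoverageMaps`, and the matching tx-end hook would return buffers via `releaseBuffer`. Whether this lets workers stay long-lived past the reset limit is the open question above.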

As an aside, using edges instead of PCs may make this performance optimization necessary. That said, it may also make it feasible for us to track hit counts and generate line-based reports for the full campaign even if we're only using edges to decide whether a sequence should be inserted into the corpus (see #326 (comment)); a rough sketch follows.
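
To make the aside concrete, here is an AFL-style sketch of edge tracking with hit counts; the `edgeCoverage` type and its methods are illustrative assumptions, not existing Medusa code:

```go
// edgeCoverage tracks (prev PC -> current PC) edges with hit counts, so edge
// novelty can gate corpus insertion while hit counts still feed reports.
type edgeCoverage struct {
	hits   map[uint64]uint64 // edge ID -> hit count
	prevPC uint64
}

func newEdgeCoverage() *edgeCoverage {
	return &edgeCoverage{hits: make(map[uint64]uint64)}
}

// onOpcode records the edge from the previously executed PC to pc.
func (e *edgeCoverage) onOpcode(pc uint64) {
	edgeID := (e.prevPC << 1) ^ pc // cheap AFL-style mix of the two PCs
	e.hits[edgeID]++
	e.prevPC = pc
}

// coversNewEdge reports whether this execution hit an edge the campaign-wide
// set has not seen, i.e. whether the sequence belongs in the corpus.
func (e *edgeCoverage) coversNewEdge(global map[uint64]struct{}) bool {
	for id := range e.hits {
		if _, seen := global[id]; !seen {
			return true
		}
	}
	return false
}
```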
