Benchmark accuracy #11

@ioquatix

Description

I thought I'd add some notes while I'm looking through the code.

  • Agoo doesn't implement the same benchmark as the other Rack-compatible servers, because it serves from a static directory by default. Whether this reflects real-world usage (e.g. does Passenger do this by default too?) is worth discussing, but at the very least, I think we should use the same rackup file for all servers.

  • It's not clear to me why we are using perfer rather than wrk, ab, or any of a variety of other load-testing tools. wrk can definitely push a large number of requests. I'll be interested to see the results I get with perfer.

  • The puma benchmark uses the rackup command. At least in the case of falcon, the rackup command imposes severe performance limitations. It might not be the same for puma, but I don't know. The best way to test puma would be in cluster mode.

  • If we used wrk to run the tests, we could also report latency, which is a useful metric. Throughput and latency are related, and both are useful numbers to report.

  • The benchmark page doesn't feel very impartial. I think we should make the benchmark results as objective as possible, and there should be a caveats section so that readers understand the limitations of such benchmarks.
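To make the comparison apples-to-apples, every server could load an identical minimal Rack app. A sketch (the response body here is illustrative, not the benchmark's actual payload):

```ruby
# A minimal Rack application: a lambda that takes the env hash and returns
# the [status, headers, body] triple. In a config.ru this would be passed
# to `run`, so every server under test executes identical application code.
app = lambda do |env|
  [200, { "content-type" => "text/plain" }, ["Hello World"]]
end

# Rack apps can be exercised without any server: just call them with an
# env hash. Useful for sanity-checking the app before benchmarking it.
status, headers, body = app.call({})
```

Because the app is a plain callable, the same file works unchanged under agoo, puma, falcon, and anything else that speaks Rack.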
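A sketch of what a wrk-based run against puma in cluster mode might look like. The flags are real puma/wrk options, but the worker, thread, and connection counts and the URL are placeholders, not recommended settings:

```shell
# Start puma in cluster mode (forked workers) rather than via `rackup`,
# which can impose its own overhead on some servers.
puma --workers 4 --threads 4:4 --port 9292 config.ru &

# wrk reports both throughput (requests/sec) and a latency distribution;
# --latency adds detailed latency percentiles to the output.
wrk --threads 4 --connections 64 --duration 30s --latency http://localhost:9292/
```

Reporting wrk's latency percentiles alongside requests/sec would address the point above that throughput alone is an incomplete picture.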
