Inference improvements #130

@mortonjt

Description

@mortonjt

At inference time, we are currently not batching, so there is room to speed up the DeepBLAST.align method in deepblast/trainer.py. I anticipate that we can get a 10-20x improvement in speed. Batching is already done when training these models, so it is a matter of porting over some of the existing code.

  • Batching on protrans models
  • Batching on alignments
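
A minimal sketch of what the batched inference loop could look like. The helper names here (`chunked`, `align_all`, `align_batch_fn`) are hypothetical and not part of the DeepBLAST API; the batched embedding and alignment calls are left as a stand-in function argument:

```python
def chunked(items, batch_size):
    """Yield successive slices of `items` with at most `batch_size` elements."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def align_all(pairs, align_batch_fn, batch_size=32):
    """Align many (query, target) pairs in batches instead of one at a time.

    `align_batch_fn` is a placeholder for a batched call that would
    (1) embed all sequences in the batch with a single protrans forward
    pass, padding to the longest sequence, and (2) run the alignment
    decoder on the whole batch at once.
    """
    results = []
    for batch in chunked(pairs, batch_size):
        results.extend(align_batch_fn(batch))
    return results
```

The speedup would come from amortizing the per-call overhead of the language model forward pass across the whole batch, at the cost of padding shorter sequences up to the batch maximum.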
