
Chunked processing for summary and sample-prob #389

Open
@KunFang93

Description

Hi @ArtRand,

I was wondering whether it might be possible to add chunk-based processing (similar to the pileup method) for the --no-sampling option in summary and sample-prob. Currently, --no-sampling is very resource-intensive; in my case, processing 150,000 reads requires around 60 GB of RAM. Because my modifications are sparse, --no-sampling seems to be the only viable option I have. I can work around this by splitting my BAM file into smaller segments and then aggregating the results (roughly as in the sketch below), but it would be ideal if --no-sampling could adopt a chunked-processing strategy like pileup's in the future.
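For reference, this is roughly what my current workaround looks like. It is only a minimal sketch: the file names are placeholders, it assumes the input BAM is coordinate-sorted and indexed, and it assumes `modkit summary` writes its table to stdout; the final aggregation of the per-chunk summaries is not shown.

```python
# Workaround sketch: split the BAM per contig with pysam, run
# `modkit summary --no-sampling` on each chunk, then merge the outputs later.
import subprocess
import pysam

IN_BAM = "reads.bam"  # placeholder; must be coordinate-sorted and indexed

with pysam.AlignmentFile(IN_BAM, "rb") as bam:
    contigs = list(bam.references)

summary_files = []
for contig in contigs:
    chunk_bam = f"chunk_{contig}.bam"
    # Write one contig at a time so each modkit run stays within memory limits.
    with pysam.AlignmentFile(IN_BAM, "rb") as bam, \
         pysam.AlignmentFile(chunk_bam, "wb", template=bam) as out:
        for read in bam.fetch(contig):
            out.write(read)
    pysam.index(chunk_bam)

    out_tsv = f"summary_{contig}.tsv"
    # --no-sampling makes modkit use every read instead of a sample.
    with open(out_tsv, "w") as fh:
        subprocess.run(
            ["modkit", "summary", "--no-sampling", chunk_bam],
            stdout=fh,
            check=True,
        )
    summary_files.append(out_tsv)

# The per-chunk tables in `summary_files` still need to be aggregated
# (e.g. by summing raw counts per modification code), which is omitted here.
```

This works, but it duplicates a lot of I/O and bookkeeping that a built-in chunked --no-sampling mode could handle internally.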

Thanks for your help!

Best,
Kun

Labels: enhancement (New feature or request)
