Hi,
I have recently tested Haploflow on a complex metagenomics dataset, and it performs very well compared to other tools at producing correctly assembled viral contigs. For testing, I used half of my dataset (i.e., only the forward reads), with an uncompressed file size of 8 GB, and this ran without issues. I used the conda installation on a Linux system with 250 GB of RAM.

However, when I try to use the full dataset (16 GB), RAM usage grows until it is exhausted and the program eventually crashes. Is there any way to control the memory use to avoid this?
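For clarity, the two runs look roughly like this (file names are placeholders, and the flags are the `--read-file` and `--out` options from the Haploflow README; since Haploflow takes a single read file, the full run concatenates the paired files):

```sh
# Half dataset: forward reads only (~8 GB uncompressed) -- completes fine
haploflow --read-file reads_R1.fastq --out out_fwd

# Full dataset: forward + reverse reads concatenated (~16 GB)
# -- RAM usage climbs until the 250 GB is exhausted and the run crashes
cat reads_R1.fastq reads_R2.fastq > reads_all.fastq
haploflow --read-file reads_all.fastq --out out_full
```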