
historical_indexer runs out of memory and must start from beginning #2

Open
@azigler

Description


Hi @redsolver -- making a new issue so we can stay organized. 🚀

My machine:

2 GB Memory
1 vCPU
25 GB Disk + 30 GB mounted
Ubuntu 22.04 (LTS) x64

If I run this script, CPU immediately hits 100% (understandable, since this is a very weak machine) and memory slowly climbs to 100% over the course of ~1 hour before hitting the maximum, at which point my machine kills the PID. It does manage to count all the repos and then start downloading them, and the script works: I can confirm SurrealDB stores the blocks. When my machine kills the process for lack of RAM, I get a "Process killed" message in my terminal and the memory is released.

If I start the script again, it starts over from the very beginning rather than where it left off. This means that unless I have sufficient RAM, I can't get the whole historical index. Again, that's understandable -- this is a super weak machine just for testing. But do you have a recommended spec for running this so I can use the script?
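For illustration, the restart-from-scratch problem could in principle be avoided with a small checkpoint file that records the last fully processed repo. This is only a generic sketch of the pattern, not the actual historical_indexer code -- the file name, the `last_done` field, and the `process` placeholder are all assumptions:

```python
# Generic checkpoint/resume sketch (NOT the actual historical_indexer
# implementation; names below are hypothetical).
import json
import os

CHECKPOINT_FILE = "indexer_checkpoint.json"  # hypothetical file name

def load_checkpoint():
    """Return the index of the last fully processed repo, or 0."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_done"]
    return 0

def save_checkpoint(i):
    """Persist progress via an atomic rename, so a kill mid-write
    leaves either the old or the new checkpoint intact."""
    tmp = CHECKPOINT_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"last_done": i}, f)
    os.replace(tmp, CHECKPOINT_FILE)

def process(repo):
    """Placeholder for downloading a repo and storing its blocks."""
    pass

def run(repos):
    start = load_checkpoint()  # resume where the last run stopped
    for i in range(start, len(repos)):
        process(repos[i])
        save_checkpoint(i + 1)  # after an OOM kill, restart resumes here
```

With something like this, a run killed by the OOM killer would pick up at the first unprocessed repo instead of re-counting and re-downloading everything.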
