When loading the docvqa dataset, RAM usage climbs steadily past 30 GB, which nearly crashed a machine with 32 GB of RAM.
Command to reproduce:
python -m eval.run eval_vllm --model_name HuggingFaceTB/SmolVLM-256M-Instruct --url http://0.0.0.0:8000 --output_dir ~/tmp --eval_name "docvqa"
Memory profiling results:

