Description
Thank you for your excellent work on next event prediction. However, when I try to evaluate models on your futurebench, an error occurs when I set the `--tasks` argument of lmms_eval to `futurebench`. The detailed traceback is shown below:
```
Traceback (most recent call last):
  File "/data/raid-0/suqile/anaconda3/envs/nep/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/data/raid-0/suqile/anaconda3/envs/nep/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/data/raid-0/suqile/code/Video-Next-Event-Prediction/third_party/lmms-eval/lmms_eval/__main__.py", line 532, in <module>
    cli_evaluate()
  File "/data/raid-0/suqile/code/Video-Next-Event-Prediction/third_party/lmms-eval/lmms_eval/__main__.py", line 346, in cli_evaluate
    raise e
  File "/data/raid-0/suqile/code/Video-Next-Event-Prediction/third_party/lmms-eval/lmms_eval/__main__.py", line 330, in cli_evaluate
    results, samples = cli_evaluate_single(args)
  File "/data/raid-0/suqile/code/Video-Next-Event-Prediction/third_party/lmms-eval/lmms_eval/__main__.py", line 471, in cli_evaluate_single
    results = evaluator.simple_evaluate(
  File "/data/raid-0/suqile/code/Video-Next-Event-Prediction/third_party/lmms-eval/lmms_eval/utils.py", line 533, in _wrapper
    return fn(*args, **kwargs)
  File "/data/raid-0/suqile/code/Video-Next-Event-Prediction/third_party/lmms-eval/lmms_eval/evaluator.py", line 157, in simple_evaluate
    assert tasks != [], "No tasks specified, or no tasks found. Please verify the task names."
```
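The assertion fires when the requested task name resolves to an empty task list, i.e. `futurebench` was not found among the registered tasks. As a minimal, illustrative sketch (this is not lmms-eval's actual implementation; `discover_tasks` and the directory layout are hypothetical), yaml-based task discovery of the kind lmms-eval uses can be pictured as:

```python
import pathlib
import tempfile

def discover_tasks(tasks_dir):
    """Collect task names from yaml files that declare a `task:` field."""
    found = {}
    for path in pathlib.Path(tasks_dir).rglob("*.yaml"):
        for line in path.read_text().splitlines():
            if line.strip().startswith("task:"):
                name = line.split(":", 1)[1].strip().strip('"').strip("'")
                found[name] = str(path)
    return found

# Demo with a throwaway directory standing in for a tasks/ folder.
with tempfile.TemporaryDirectory() as d:
    task_dir = pathlib.Path(d) / "futurebench"
    task_dir.mkdir()
    (task_dir / "futurebench.yaml").write_text("task: futurebench\n")
    print("futurebench" in discover_tasks(d))  # → True
```

If the futurebench yaml is absent from (or malformed in) the tasks directory of the lmms-eval copy actually being run, discovery yields nothing for that name and the assertion above fails.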
Do you know how to fix this?
Thank you for your time. Looking forward to your reply!