Issue with beatmap generation (AcceleratorError) #72

Description

@rawrdomin

Beatmap generation worked fine for me a few days ago, but it seems to be having trouble today. I am currently running into an AcceleratorError.
I'm also running this in the Colab notebook. Here is the entire cell output when running into the error:

WARNING: V30 does not support descriptors or negative descriptors, ignoring.
Using CUDA for inference (auto-selected).
Random seed: 48105

AcceleratorError Traceback (most recent call last)
/tmp/ipython-input-381320550.py in <cell line: 0>()
155 conf.timer_bpm_threshold = timer_bpm_threshold
156
--> 157 _, result_path, osz_path = main(conf)
158
159 if osz_path is not None:

9 frames
/usr/local/lib/python3.12/dist-packages/hydra/main.py in decorated_main(cfg_passthrough)
81 def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:
82 if cfg_passthrough is not None:
---> 83 return task_function(cfg_passthrough)
84 else:
85 args_parser = get_args_parser()

/content/Mapperatorinator/inference.py in main(args)
527 @hydra.main(config_path="configs/inference", config_name="v30", version_base="1.1")
528 def main(args: InferenceConfig):
--> 529 prepare_args(args)
530
531 model, tokenizer = load_model_with_server(

/content/Mapperatorinator/inference.py in prepare_args(args)
60 args.seed = random.randint(0, 2 ** 16)
61 print(f"Random seed: {args.seed}")
---> 62 set_seed(args.seed)
63
64

/usr/local/lib/python3.12/dist-packages/accelerate/utils/random.py in set_seed(seed, device_specific, deterministic)
53 random.seed(seed)
54 np.random.seed(seed)
---> 55 torch.manual_seed(seed)
56 if is_xpu_available():
57 torch.xpu.manual_seed_all(seed)

/usr/local/lib/python3.12/dist-packages/torch/_compile.py in inner(*args, **kwargs)
51 fn.__dynamo_disable = disable_fn # type: ignore[attr-defined]
52
---> 53 return disable_fn(*args, **kwargs)
54
55 return inner

/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py in _fn(*args, **kwargs)
1042 _maybe_set_eval_frame(_callback_from_stance(self.callback))
1043 try:
-> 1044 return fn(*args, **kwargs)
1045 finally:
1046 set_eval_frame(None)

/usr/local/lib/python3.12/dist-packages/torch/random.py in manual_seed(seed)
44
45 if not torch.cuda._is_in_bad_fork():
---> 46 torch.cuda.manual_seed_all(seed)
47
48 import torch.mps

/usr/local/lib/python3.12/dist-packages/torch/cuda/random.py in manual_seed_all(seed)
129 default_generator.manual_seed(seed)
130
--> 131 _lazy_call(cb, seed_all=True)
132
133

/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py in _lazy_call(callable, **kwargs)
339 with _initialization_lock:
340 if is_initialized():
--> 341 callable()
342 else:
343 # TODO(torch_deploy): this accesses linecache, which attempts to read the

/usr/local/lib/python3.12/dist-packages/torch/cuda/random.py in cb()
127 for i in range(device_count()):
128 default_generator = torch.cuda.default_generators[i]
--> 129 default_generator.manual_seed(seed)
130
131 _lazy_call(cb, seed_all=True)

AcceleratorError: CUDA error: device-side assert triggered
Search for `cudaErrorAssert' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
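
Not a confirmed fix, but a way to narrow this down: the error message itself notes that CUDA kernel errors can be reported asynchronously at a later, unrelated call, so the assert that trips here inside set_seed most likely happened earlier and left the CUDA context unusable. The sketch below is my own debugging suggestion (the cell layout and the sanity-check tensor are assumptions, not part of the notebook): restart the Colab runtime, set CUDA_LAUNCH_BLOCKING=1 before torch is imported so the failing kernel is reported at its real call site, and check whether the device is still usable before rerunning generation.

# Run this in a fresh cell after restarting the Colab runtime, before any
# cell that imports torch. It follows the suggestion in the error message:
# making CUDA launches synchronous so the traceback points at the kernel
# that actually failed.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# Quick sanity check that the CUDA context is usable. A device-side assert
# corrupts the context, so any later CUDA call (even torch.manual_seed)
# re-raises the error until the runtime is restarted.
try:
    torch.zeros(1, device="cuda")
    print("CUDA context OK")
except Exception as e:
    print("CUDA context is in a bad state; restart the runtime:", e)

If the check passes but the assert reappears during generation, the traceback produced with CUDA_LAUNCH_BLOCKING=1 should point at the kernel that actually fails, which would be the more useful thing to post here.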
