revision_logs_2025-01-27-06-35.log
2025-01-27 06:35:31,196 [INFO]
==================================================
Starting experiment set: refined_base
==================================================
2025-01-27 06:35:31,234 [INFO] Starting experiment: refined_base
2025-01-27 06:35:31,234 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 3, 'n_intents': 128, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.2, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda')}
2025-01-27 06:35:37,904 [ERROR] Error in experiment refined_base: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor []], which is output 0 of LinalgVectorNormBackward0, is at version 9; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
2025-01-27 06:35:37,906 [ERROR] Traceback (most recent call last):
File "/workplace/project/run_model_revision.py", line 246, in main
results[name] = train_and_evaluate(config, name)
File "/workplace/project/run_model_revision.py", line 120, in train_and_evaluate
loss.backward()
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/_tensor.py", line 525, in backward
torch.autograd.backward(
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/autograd/__init__.py", line 267, in backward
_engine_run_backward(
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor []], which is output 0 of LinalgVectorNormBackward0, is at version 9; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
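Note: the hint in the RuntimeError above points at PyTorch's anomaly detection. A minimal sketch of the two ways it could be switched on to locate the forward-pass operation that created the offending tensor (the surrounding training code is not shown in this log, so the exact placement is an assumption):

import torch

# Option 1: enable globally before the training loop starts. Every backward
# pass then also reports the forward-pass traceback of the tensor that was
# later modified in place, at the cost of noticeably slower training.
torch.autograd.set_detect_anomaly(True)

# Option 2: scope it to the failing call, e.g. around the loss.backward()
# at run_model_revision.py:120 (commented out here because `loss` is not
# defined in this standalone sketch).
# with torch.autograd.detect_anomaly():
#     loss.backward()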
2025-01-27 06:35:37,965 [INFO]
==================================================
Starting experiment set: hierarchical_intent
==================================================
2025-01-27 06:35:37,965 [INFO] Starting experiment: hierarchical_intent
2025-01-27 06:35:37,966 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 3, 'n_intents': 256, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.2, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda')}
2025-01-27 06:35:43,167 [ERROR] Error in experiment hierarchical_intent: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor []], which is output 0 of LinalgVectorNormBackward0, is at version 9; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
2025-01-27 06:35:43,168 [ERROR] Traceback (most recent call last):
File "/workplace/project/run_model_revision.py", line 246, in main
results[name] = train_and_evaluate(config, name)
File "/workplace/project/run_model_revision.py", line 120, in train_and_evaluate
loss.backward()
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/_tensor.py", line 525, in backward
torch.autograd.backward(
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/autograd/__init__.py", line 267, in backward
_engine_run_backward(
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor []], which is output 0 of LinalgVectorNormBackward0, is at version 9; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
2025-01-27 06:35:43,245 [INFO]
==================================================
Starting experiment set: deep_gnn
==================================================
2025-01-27 06:35:43,245 [INFO] Starting experiment: deep_gnn
2025-01-27 06:35:43,245 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 4, 'n_intents': 128, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.3, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda')}
2025-01-27 06:35:48,510 [ERROR] Error in experiment deep_gnn: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor []], which is output 0 of LinalgVectorNormBackward0, is at version 9; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
2025-01-27 06:35:48,510 [ERROR] Traceback (most recent call last):
File "/workplace/project/run_model_revision.py", line 246, in main
results[name] = train_and_evaluate(config, name)
File "/workplace/project/run_model_revision.py", line 120, in train_and_evaluate
loss.backward()
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/_tensor.py", line 525, in backward
torch.autograd.backward(
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/autograd/__init__.py", line 267, in backward
_engine_run_backward(
File "/home/user/micromamba/envs/autogpt/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor []], which is output 0 of LinalgVectorNormBackward0, is at version 9; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
2025-01-27 06:35:48,560 [INFO]
Revision experiments completed. Results saved.
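Note: all three experiments (refined_base, hierarchical_intent, deep_gnn) fail at the same loss.backward() call: a scalar produced by a vector-norm computation (output 0 of LinalgVectorNormBackward0) is at version 9 when autograd needs it at version 0, i.e. it was modified in place nine times after being saved for the backward pass. The offending line is not visible in this log, but one pattern consistent with the message is in-place accumulation onto a norm result, e.g. in a regularization term. The sketch below is a hypothetical reconstruction with illustrative names, not the project's actual code:

import torch

params = [torch.randn(8, 4, requires_grad=True) for _ in range(10)]

# Hypothetical problematic pattern: the scalar returned by .norm() is
# accumulated into in place with +=, so its version counter advances before
# backward can use the saved original value.
# reg = params[0].norm()
# for p in params[1:]:
#     reg += p.norm()   # in-place add_ on the norm output -> RuntimeError

# Out-of-place alternative: build the sum without mutating any tensor that
# autograd has saved for the backward pass.
reg = torch.stack([p.norm() for p in params]).sum()

loss = 1e-4 * reg   # weight corresponding to lambda_3 = 0.0001 in the configs above
loss.backward()     # completes without the version-counter error

More generally, any in-place update (+=, /=, .add_(), .div_()) on a tensor that a saved-for-backward operation still needs will raise this class of error; the fix is to produce a new tensor instead of mutating the old one.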