revision_logs_2025-01-27-06-23.log
2025-01-27 06:23:06,451 [INFO]
==================================================
Starting experiment set: refined_base
==================================================
2025-01-27 06:23:06,508 [INFO] Starting experiment: refined_base
2025-01-27 06:23:06,509 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 3, 'n_intents': 128, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.2, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda', index=1)}
2025-01-27 06:23:09,362 [ERROR] Error in experiment refined_base: module 'torch.optim' has no attribute 'ReduceLROnPlateau'
2025-01-27 06:23:09,415 [INFO]
==================================================
Starting experiment set: hierarchical_intent
==================================================
2025-01-27 06:23:09,416 [INFO] Starting experiment: hierarchical_intent
2025-01-27 06:23:09,416 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 3, 'n_intents': 256, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.2, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda', index=1)}
2025-01-27 06:23:11,303 [ERROR] Error in experiment hierarchical_intent: module 'torch.optim' has no attribute 'ReduceLROnPlateau'
2025-01-27 06:23:11,357 [INFO]
==================================================
Starting experiment set: deep_gnn
==================================================
2025-01-27 06:23:11,357 [INFO] Starting experiment: deep_gnn
2025-01-27 06:23:11,357 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 4, 'n_intents': 128, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.3, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda', index=1)}
2025-01-27 06:23:13,325 [ERROR] Error in experiment deep_gnn: module 'torch.optim' has no attribute 'ReduceLROnPlateau'
2025-01-27 06:23:13,381 [INFO]
Revision experiments completed. Results saved.
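
Note on the first failure mode above: all three runs abort with "module 'torch.optim' has no attribute 'ReduceLROnPlateau'". In current PyTorch releases that scheduler is exposed under torch.optim.lr_scheduler, not torch.optim, so the log is consistent with the experiment script referencing the wrong module path. The snippet below is a minimal sketch of the likely cause and fix; the model, optimizer, and hyperparameter values are placeholders, not the project's actual code.

# Minimal sketch (assumed, not taken from the experiment script):
# ReduceLROnPlateau lives in torch.optim.lr_scheduler, so referencing it
# through torch.optim raises the AttributeError seen in the log.
import torch

model = torch.nn.Linear(64, 64)                                  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Broken form that would reproduce the logged error:
# scheduler = torch.optim.ReduceLROnPlateau(optimizer, mode='min')

# Working form:
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=5
)

val_loss = 0.42                                                  # placeholder metric
scheduler.step(val_loss)                                         # scheduler steps on a monitored value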
2025-01-27 06:23:47,008 [INFO]
==================================================
Starting experiment set: refined_base
==================================================
2025-01-27 06:23:47,047 [INFO] Starting experiment: refined_base
2025-01-27 06:23:47,047 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 3, 'n_intents': 128, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.2, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda', index=1)}
2025-01-27 06:23:48,965 [ERROR] Error in experiment refined_base: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-01-27 06:23:49,030 [INFO]
==================================================
Starting experiment set: hierarchical_intent
==================================================
2025-01-27 06:23:49,030 [INFO] Starting experiment: hierarchical_intent
2025-01-27 06:23:49,030 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 3, 'n_intents': 256, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.2, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda', index=1)}
2025-01-27 06:23:50,807 [ERROR] Error in experiment hierarchical_intent: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-01-27 06:23:50,862 [INFO]
==================================================
Starting experiment set: deep_gnn
==================================================
2025-01-27 06:23:50,862 [INFO] Starting experiment: deep_gnn
2025-01-27 06:23:50,862 [INFO] Configuration: {'embed_dim': 64, 'n_layers': 4, 'n_intents': 128, 'use_residual': True, 'temp': 0.2, 'lambda_1': 0.5, 'lambda_2': 0.5, 'lambda_3': 0.0001, 'dropout': 0.3, 'batch_size': 2048, 'inter_batch': 4096, 'lr': 0.001, 'epochs': 50, 'device': device(type='cuda', index=1)}
2025-01-27 06:23:52,574 [ERROR] Error in experiment deep_gnn: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-01-27 06:23:52,631 [INFO]
Revision experiments completed. Results saved.
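
Note on the second failure mode: after the scheduler issue was presumably fixed, the rerun fails with "CUDA error: invalid device ordinal" while the configuration requests device(type='cuda', index=1). That error typically means cuda:1 is not visible to the process (only one GPU present, or CUDA_VISIBLE_DEVICES restricts the set). The sketch below is a hypothetical guard, not the project's code: the function name resolve_device and the fallback policy are assumptions.

# Hypothetical device-selection guard (assumed helper, not from the repo):
# fall back to cuda:0 or CPU when the requested GPU index is out of range.
import torch

def resolve_device(requested_index: int = 1) -> torch.device:
    if torch.cuda.is_available() and requested_index < torch.cuda.device_count():
        return torch.device('cuda', requested_index)             # requested GPU exists
    if torch.cuda.is_available():
        return torch.device('cuda', 0)                           # fall back to first visible GPU
    return torch.device('cpu')                                   # no GPU available

device = resolve_device(1)
print(device)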