Add model
f39ace3
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 144, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 134, in main
run_eval(eval_args, mode="compute_metrics", verbose=True)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 70, in run_eval
print("Model: ", args.model_path, "device: ", model.device)
AttributeError: 'NoneType' object has no attribute 'device'
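The traceback suggests that in `compute_metrics` mode `run_eval` never loads a model, so `model` is still `None` when the debug print at eval.py line 70 dereferences `model.device`. A minimal sketch of a guard, assuming only what the trace shows (`model`, `args.model_path`); `describe_model` is a hypothetical helper, not a function from the repo:

```python
# Hedged sketch of a defensive fix for the print at eval.py line 70.
# In "compute_metrics" mode the model is presumably never loaded,
# so `model` may be None; guard the attribute access before printing.
def describe_model(model, model_path):
    device = model.device if model is not None else "n/a (metrics-only mode)"
    print("Model:", model_path, "device:", device)
```

With this guard, a metrics-only run prints a placeholder instead of raising `AttributeError`.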
Model: /fsx_0/user/imzyc/proact_exps/20240822-L4096-I5-ep4-NOSEP-nr0.1-klgmix-1s-lora-bs384-debug
{'assembly101/dialog_val_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.1},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]},
'ego4d/dialog_val_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.05},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.1},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]},
'egoexolearn/dialog_val_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.1},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]},
'epickitchens/dialog_val_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.1},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]},
'holoassist/dialog_val_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.1},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]},
'wtag/dialog_val_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.1},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4},
{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]}}
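The pretty-printed dict above maps each dataset split to a runner type and a list of eval setups; the "Runs:" list at the end of the log looks like a pipe-joined flattening of it. A sketch of that flattening, assuming the join order from the run strings (the dict literal here is a trimmed example, not the full config):

```python
# Hedged sketch: flatten the eval-setup mapping shown above
# ({dataset: {runner_type: [setup, ...]}}) into one spec string per run.
eval_setups = {
    "ego4d/dialog_val_L0_I5": {
        "stream": [
            {"eval_max_seq_len_str": "4k", "not_talk_threshold": 0.05,
             "context_handling_method": "summarize_and_drop"},
            {"eval_max_seq_len_str": "4k", "not_talk_threshold": 0.1,
             "context_handling_method": "summarize_and_drop"},
        ]
    },
}

def flatten_runs(setups):
    """Yield 'dataset|runner|maxlen|threshold|context' run specs."""
    for dataset, runners in setups.items():
        for runner, configs in runners.items():
            for cfg in configs:
                yield "|".join([dataset, runner,
                                cfg["eval_max_seq_len_str"],
                                str(cfg["not_talk_threshold"]),
                                cfg["context_handling_method"]])

runs = list(flatten_runs(eval_setups))
```

On the full config this would yield 31 specs (six thresholds for ego4d, five for each other dataset), matching the count in the "Runs:" list below.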
Evaluation datasets:
* ego4d/dialog_val | num samples: 96
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.05
Evaluation: ego4d-dialog_val_L0_I5/stream/notalk0.05-maxlen_4k
Metrics:
missing_rate: 0.8143
redundant_rate: 0.0418
semantic_score: 0.7024
jaccard_index: 0.1469
precision: 0.7642
recall: 0.1481
F1: 0.2481
num_matched: 713.0000
num_mismatched: 181.0000
num_missed: 3920.0000
num_redundant: 39.0000
Bleu_1: 0.4112
Bleu_1_w: 0.0604
Bleu_2: 0.3002
Bleu_2_w: 0.0441
Bleu_3: 0.2331
Bleu_3_w: 0.0342
Bleu_4: 0.1877
Bleu_4_w: 0.0276
CIDEr: 1.1146
CIDEr_w: 0.1638
METEOR: 0.2052
METEOR_w: 0.0302
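The ratio metrics in each block appear to be derivable from the four raw counts. A hedged reconstruction, with definitions inferred from the log (they reproduce the notalk0.05 numbers above, but are not taken from the eval code itself):

```python
# Hedged reconstruction of the ratio metrics from the raw counts.
# Assumed definitions, inferred from the logged numbers:
#   predicted = matched + mismatched + redundant   (all model utterances)
#   reference = matched + mismatched + missed      (all reference utterances)
def talk_metrics(matched, mismatched, missed, redundant):
    predicted = matched + mismatched + redundant
    reference = matched + mismatched + missed
    precision = matched / predicted
    recall = matched / reference
    return {
        "missing_rate": missed / reference,
        "redundant_rate": redundant / predicted,
        "precision": precision,
        "recall": recall,
        "F1": 2 * precision * recall / (precision + recall),
    }

# Counts from the notalk0.05 block above.
m = talk_metrics(matched=713, mismatched=181, missed=3920, redundant=39)
```

Rounded to four decimals this gives precision 0.7642, recall 0.1481, F1 0.2481, missing_rate 0.8143, and redundant_rate 0.0418, matching the logged values.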
Updating eval setup: not_talk_threshold: 0.05 -> 0.1
Evaluation: ego4d-dialog_val_L0_I5/stream/notalk0.1-maxlen_4k
Metrics:
missing_rate: 0.7900
redundant_rate: 0.1124
semantic_score: 0.7032
jaccard_index: 0.1566
precision: 0.6795
recall: 0.1608
F1: 0.2600
num_matched: 774.0000
num_mismatched: 237.0000
num_missed: 3803.0000
num_redundant: 128.0000
Bleu_1: 0.4120
Bleu_1_w: 0.0645
Bleu_2: 0.2990
Bleu_2_w: 0.0468
Bleu_3: 0.2308
Bleu_3_w: 0.0362
Bleu_4: 0.1849
Bleu_4_w: 0.0290
CIDEr: 1.1293
CIDEr_w: 0.1769
METEOR: 0.2049
METEOR_w: 0.0321
Updating eval setup: not_talk_threshold: 0.1 -> 0.2
Evaluation: ego4d-dialog_val_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
missing_rate: 0.7179
redundant_rate: 0.2687
semantic_score: 0.6990
jaccard_index: 0.1850
precision: 0.5293
recall: 0.2042
F1: 0.2947
num_matched: 983.0000
num_mismatched: 375.0000
num_missed: 3456.0000
num_redundant: 499.0000
Bleu_1: 0.4005
Bleu_1_w: 0.0741
Bleu_2: 0.2853
Bleu_2_w: 0.0528
Bleu_3: 0.2176
Bleu_3_w: 0.0403
Bleu_4: 0.1735
Bleu_4_w: 0.0321
CIDEr: 0.9623
CIDEr_w: 0.1780
METEOR: 0.1921
METEOR_w: 0.0355
Updating eval setup: not_talk_threshold: 0.2 -> 0.3
Evaluation: ego4d-dialog_val_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
missing_rate: 0.6342
redundant_rate: 0.4073
semantic_score: 0.6859
jaccard_index: 0.2117
precision: 0.4291
recall: 0.2649
F1: 0.3276
num_matched: 1275.0000
num_mismatched: 486.0000
num_missed: 3053.0000
num_redundant: 1210.0000
Bleu_1: 0.3777
Bleu_1_w: 0.0799
Bleu_2: 0.2595
Bleu_2_w: 0.0549
Bleu_3: 0.1918
Bleu_3_w: 0.0406
Bleu_4: 0.1491
Bleu_4_w: 0.0316
CIDEr: 0.7785
CIDEr_w: 0.1648
METEOR: 0.1803
METEOR_w: 0.0382
Updating eval setup: not_talk_threshold: 0.3 -> 0.4
Evaluation: ego4d-dialog_val_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
missing_rate: 0.5374
redundant_rate: 0.5324
semantic_score: 0.6808
jaccard_index: 0.2079
precision: 0.3208
recall: 0.3174
F1: 0.3191
num_matched: 1528.0000
num_mismatched: 699.0000
num_missed: 2587.0000
num_redundant: 2536.0000
Bleu_1: 0.3763
Bleu_1_w: 0.0782
Bleu_2: 0.2556
Bleu_2_w: 0.0531
Bleu_3: 0.1874
Bleu_3_w: 0.0390
Bleu_4: 0.1447
Bleu_4_w: 0.0301
CIDEr: 0.7667
CIDEr_w: 0.1594
METEOR: 0.1752
METEOR_w: 0.0364
Updating eval setup: not_talk_threshold: 0.4 -> 0.5
Evaluation: ego4d-dialog_val_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
missing_rate: 0.4053
redundant_rate: 0.7022
semantic_score: 0.6725
jaccard_index: 0.1596
precision: 0.1920
recall: 0.3835
F1: 0.2559
num_matched: 1846.0000
num_mismatched: 1017.0000
num_missed: 1951.0000
num_redundant: 6750.0000
Bleu_1: 0.3556
Bleu_1_w: 0.0568
Bleu_2: 0.2372
Bleu_2_w: 0.0379
Bleu_3: 0.1705
Bleu_3_w: 0.0272
Bleu_4: 0.1289
Bleu_4_w: 0.0206
CIDEr: 0.6627
CIDEr_w: 0.1058
METEOR: 0.1699
METEOR_w: 0.0271
Evaluation datasets:
* holoassist/dialog_val | num samples: 291
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.1
Evaluation: holoassist-dialog_val_L0_I5/stream/notalk0.1-maxlen_4k
Metrics:
missing_rate: 0.8169
redundant_rate: 0.0096
semantic_score: 0.6931
jaccard_index: 0.1324
precision: 0.7175
recall: 0.1326
F1: 0.2239
num_matched: 2024.0000
num_mismatched: 770.0000
num_missed: 12467.0000
num_redundant: 27.0000
Bleu_1: 0.4319
Bleu_1_w: 0.0572
Bleu_2: 0.3132
Bleu_2_w: 0.0415
Bleu_3: 0.2389
Bleu_3_w: 0.0316
Bleu_4: 0.1871
Bleu_4_w: 0.0248
CIDEr: 1.1122
CIDEr_w: 0.1472
METEOR: 0.2072
METEOR_w: 0.0274
Updating eval setup: not_talk_threshold: 0.1 -> 0.2
Evaluation: holoassist-dialog_val_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
missing_rate: 0.6569
redundant_rate: 0.0176
semantic_score: 0.6920
jaccard_index: 0.2380
precision: 0.6856
recall: 0.2394
F1: 0.3549
num_matched: 3654.0000
num_mismatched: 1582.0000
num_missed: 10025.0000
num_redundant: 94.0000
Bleu_1: 0.4265
Bleu_1_w: 0.1015
Bleu_2: 0.3066
Bleu_2_w: 0.0730
Bleu_3: 0.2318
Bleu_3_w: 0.0552
Bleu_4: 0.1807
Bleu_4_w: 0.0430
CIDEr: 1.0739
CIDEr_w: 0.2556
METEOR: 0.2021
METEOR_w: 0.0481
Updating eval setup: not_talk_threshold: 0.2 -> 0.3
Evaluation: holoassist-dialog_val_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
missing_rate: 0.5975
redundant_rate: 0.0346
semantic_score: 0.6902
jaccard_index: 0.2730
precision: 0.6643
recall: 0.2770
F1: 0.3910
num_matched: 4227.0000
num_mismatched: 1916.0000
num_missed: 9118.0000
num_redundant: 220.0000
Bleu_1: 0.4227
Bleu_1_w: 0.1154
Bleu_2: 0.3021
Bleu_2_w: 0.0825
Bleu_3: 0.2281
Bleu_3_w: 0.0623
Bleu_4: 0.1775
Bleu_4_w: 0.0485
CIDEr: 1.0430
CIDEr_w: 0.2848
METEOR: 0.1995
METEOR_w: 0.0545
Updating eval setup: not_talk_threshold: 0.3 -> 0.4
Evaluation: holoassist-dialog_val_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
missing_rate: 0.4488
redundant_rate: 0.3363
semantic_score: 0.6851
jaccard_index: 0.2771
precision: 0.4268
recall: 0.3544
F1: 0.3873
num_matched: 5409.0000
num_mismatched: 3003.0000
num_missed: 6849.0000
num_redundant: 4262.0000
Bleu_1: 0.4084
Bleu_1_w: 0.1132
Bleu_2: 0.2863
Bleu_2_w: 0.0793
Bleu_3: 0.2127
Bleu_3_w: 0.0589
Bleu_4: 0.1632
Bleu_4_w: 0.0452
CIDEr: 0.9756
CIDEr_w: 0.2703
METEOR: 0.1921
METEOR_w: 0.0532
Updating eval setup: not_talk_threshold: 0.4 -> 0.5
Evaluation: holoassist-dialog_val_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
missing_rate: 0.2812
redundant_rate: 0.6232
semantic_score: 0.6782
jaccard_index: 0.2047
precision: 0.2349
recall: 0.4481
F1: 0.3082
num_matched: 6838.0000
num_mismatched: 4131.0000
num_missed: 4292.0000
num_redundant: 18140.0000
Bleu_1: 0.3886
Bleu_1_w: 0.0795
Bleu_2: 0.2671
Bleu_2_w: 0.0547
Bleu_3: 0.1963
Bleu_3_w: 0.0402
Bleu_4: 0.1491
Bleu_4_w: 0.0305
CIDEr: 0.8726
CIDEr_w: 0.1786
METEOR: 0.1812
METEOR_w: 0.0371
Evaluation datasets:
* epickitchens/dialog_val | num samples: 150
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.1
Evaluation: epickitchens-dialog_val_L0_I5/stream/notalk0.1-maxlen_4k
Metrics:
missing_rate: 0.7505
redundant_rate: 0.0771
semantic_score: 0.6804
jaccard_index: 0.1637
precision: 0.6182
recall: 0.1671
F1: 0.2631
num_matched: 1075.0000
num_mismatched: 530.0000
num_missed: 4827.0000
num_redundant: 134.0000
Bleu_1: 0.4001
Bleu_1_w: 0.0655
Bleu_2: 0.2872
Bleu_2_w: 0.0470
Bleu_3: 0.2182
Bleu_3_w: 0.0357
Bleu_4: 0.1706
Bleu_4_w: 0.0279
CIDEr: 1.1686
CIDEr_w: 0.1913
METEOR: 0.2001
METEOR_w: 0.0328
Updating eval setup: not_talk_threshold: 0.1 -> 0.2
Evaluation: epickitchens-dialog_val_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
missing_rate: 0.6191
redundant_rate: 0.1988
semantic_score: 0.6640
jaccard_index: 0.2061
precision: 0.4745
recall: 0.2256
F1: 0.3058
num_matched: 1451.0000
num_mismatched: 999.0000
num_missed: 3982.0000
num_redundant: 608.0000
Bleu_1: 0.3887
Bleu_1_w: 0.0801
Bleu_2: 0.2711
Bleu_2_w: 0.0559
Bleu_3: 0.2005
Bleu_3_w: 0.0413
Bleu_4: 0.1546
Bleu_4_w: 0.0319
CIDEr: 1.0446
CIDEr_w: 0.2153
METEOR: 0.1885
METEOR_w: 0.0389
Updating eval setup: not_talk_threshold: 0.2 -> 0.3
Evaluation: epickitchens-dialog_val_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
missing_rate: 0.4639
redundant_rate: 0.3537
semantic_score: 0.6534
jaccard_index: 0.2261
precision: 0.3526
recall: 0.2924
F1: 0.3197
num_matched: 1881.0000
num_mismatched: 1567.0000
num_missed: 2984.0000
num_redundant: 1887.0000
Bleu_1: 0.3547
Bleu_1_w: 0.0802
Bleu_2: 0.2373
Bleu_2_w: 0.0537
Bleu_3: 0.1686
Bleu_3_w: 0.0381
Bleu_4: 0.1262
Bleu_4_w: 0.0285
CIDEr: 0.8348
CIDEr_w: 0.1888
METEOR: 0.1764
METEOR_w: 0.0399
Updating eval setup: not_talk_threshold: 0.3 -> 0.4
Evaluation: epickitchens-dialog_val_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
missing_rate: 0.3136
redundant_rate: 0.5891
semantic_score: 0.6491
jaccard_index: 0.1801
precision: 0.2138
recall: 0.3573
F1: 0.2676
num_matched: 2298.0000
num_mismatched: 2117.0000
num_missed: 2017.0000
num_redundant: 6331.0000
Bleu_1: 0.3525
Bleu_1_w: 0.0635
Bleu_2: 0.2293
Bleu_2_w: 0.0413
Bleu_3: 0.1598
Bleu_3_w: 0.0288
Bleu_4: 0.1185
Bleu_4_w: 0.0213
CIDEr: 0.7864
CIDEr_w: 0.1416
METEOR: 0.1703
METEOR_w: 0.0307
Updating eval setup: not_talk_threshold: 0.4 -> 0.5
Evaluation: epickitchens-dialog_val_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
missing_rate: 0.2040
redundant_rate: 0.7203
semantic_score: 0.6452
jaccard_index: 0.1408
precision: 0.1509
recall: 0.4296
F1: 0.2234
num_matched: 2763.0000
num_mismatched: 2357.0000
num_missed: 1312.0000
num_redundant: 13187.0000
Bleu_1: 0.3569
Bleu_1_w: 0.0503
Bleu_2: 0.2305
Bleu_2_w: 0.0325
Bleu_3: 0.1585
Bleu_3_w: 0.0223
Bleu_4: 0.1160
Bleu_4_w: 0.0163
CIDEr: 0.7808
CIDEr_w: 0.1100
METEOR: 0.1699
METEOR_w: 0.0239
Evaluation datasets:
* egoexolearn/dialog_val | num samples: 123
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.1
Evaluation: egoexolearn-dialog_val_L0_I5/stream/notalk0.1-maxlen_4k
Metrics:
missing_rate: 0.8310
redundant_rate: 0.0334
semantic_score: 0.6980
jaccard_index: 0.1260
precision: 0.7252
recall: 0.1268
F1: 0.2158
num_matched: 1520.0000
num_mismatched: 506.0000
num_missed: 9965.0000
num_redundant: 70.0000
Bleu_1: 0.4299
Bleu_1_w: 0.0542
Bleu_2: 0.3105
Bleu_2_w: 0.0391
Bleu_3: 0.2375
Bleu_3_w: 0.0299
Bleu_4: 0.1875
Bleu_4_w: 0.0236
CIDEr: 1.1086
CIDEr_w: 0.1397
METEOR: 0.2051
METEOR_w: 0.0258
Updating eval setup: not_talk_threshold: 0.1 -> 0.2
Evaluation: egoexolearn-dialog_val_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
missing_rate: 0.8105
redundant_rate: 0.1288
semantic_score: 0.6944
jaccard_index: 0.1330
precision: 0.6288
recall: 0.1368
F1: 0.2247
num_matched: 1640.0000
num_mismatched: 632.0000
num_missed: 9719.0000
num_redundant: 336.0000
Bleu_1: 0.4263
Bleu_1_w: 0.0567
Bleu_2: 0.3069
Bleu_2_w: 0.0408
Bleu_3: 0.2339
Bleu_3_w: 0.0311
Bleu_4: 0.1840
Bleu_4_w: 0.0245
CIDEr: 1.0771
CIDEr_w: 0.1433
METEOR: 0.2020
METEOR_w: 0.0269
Updating eval setup: not_talk_threshold: 0.2 -> 0.3
Evaluation: egoexolearn-dialog_val_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
missing_rate: 0.7804
redundant_rate: 0.2401
semantic_score: 0.6867
jaccard_index: 0.1436
precision: 0.5313
recall: 0.1535
F1: 0.2382
num_matched: 1841.0000
num_mismatched: 792.0000
num_missed: 9358.0000
num_redundant: 832.0000
Bleu_1: 0.4164
Bleu_1_w: 0.0598
Bleu_2: 0.2942
Bleu_2_w: 0.0422
Bleu_3: 0.2212
Bleu_3_w: 0.0318
Bleu_4: 0.1721
Bleu_4_w: 0.0247
CIDEr: 1.0102
CIDEr_w: 0.1450
METEOR: 0.1951
METEOR_w: 0.0280
Updating eval setup: not_talk_threshold: 0.3 -> 0.4
Evaluation: egoexolearn-dialog_val_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
missing_rate: 0.7114
redundant_rate: 0.3865
semantic_score: 0.6745
jaccard_index: 0.1580
precision: 0.3969
recall: 0.1867
F1: 0.2540
num_matched: 2239.0000
num_mismatched: 1222.0000
num_missed: 8530.0000
num_redundant: 2180.0000
Bleu_1: 0.4039
Bleu_1_w: 0.0638
Bleu_2: 0.2814
Bleu_2_w: 0.0445
Bleu_3: 0.2088
Bleu_3_w: 0.0330
Bleu_4: 0.1606
Bleu_4_w: 0.0254
CIDEr: 0.9108
CIDEr_w: 0.1439
METEOR: 0.1870
METEOR_w: 0.0296
Updating eval setup: not_talk_threshold: 0.4 -> 0.5
Evaluation: egoexolearn-dialog_val_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
missing_rate: 0.5728
redundant_rate: 0.5894
semantic_score: 0.6633
jaccard_index: 0.1569
precision: 0.2432
recall: 0.2531
F1: 0.2481
num_matched: 3035.0000
num_mismatched: 2088.0000
num_missed: 6868.0000
num_redundant: 7355.0000
Bleu_1: 0.3851
Bleu_1_w: 0.0604
Bleu_2: 0.2611
Bleu_2_w: 0.0410
Bleu_3: 0.1877
Bleu_3_w: 0.0294
Bleu_4: 0.1406
Bleu_4_w: 0.0221
CIDEr: 0.7626
CIDEr_w: 0.1196
METEOR: 0.1741
METEOR_w: 0.0273
Evaluation datasets:
* wtag/dialog_val | num samples: 21
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.1
Evaluation: wtag-dialog_val_L0_I5/stream/notalk0.1-maxlen_4k
Metrics:
missing_rate: 0.5732
redundant_rate: 0.1089
semantic_score: 0.6824
jaccard_index: 0.2099
precision: 0.4611
recall: 0.2209
F1: 0.2987
num_matched: 237.0000
num_mismatched: 221.0000
num_missed: 615.0000
num_redundant: 56.0000
Bleu_1: 0.3633
Bleu_1_w: 0.0763
Bleu_2: 0.2567
Bleu_2_w: 0.0539
Bleu_3: 0.1885
Bleu_3_w: 0.0396
Bleu_4: 0.1448
Bleu_4_w: 0.0304
CIDEr: 0.9644
CIDEr_w: 0.2025
METEOR: 0.2138
METEOR_w: 0.0449
Updating eval setup: not_talk_threshold: 0.1 -> 0.2
Evaluation: wtag-dialog_val_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
missing_rate: 0.5005
redundant_rate: 0.1612
semantic_score: 0.6793
jaccard_index: 0.2109
precision: 0.3881
recall: 0.2311
F1: 0.2897
num_matched: 248.0000
num_mismatched: 288.0000
num_missed: 537.0000
num_redundant: 103.0000
Bleu_1: 0.3624
Bleu_1_w: 0.0764
Bleu_2: 0.2559
Bleu_2_w: 0.0540
Bleu_3: 0.1871
Bleu_3_w: 0.0395
Bleu_4: 0.1435
Bleu_4_w: 0.0303
CIDEr: 0.9300
CIDEr_w: 0.1961
METEOR: 0.2112
METEOR_w: 0.0445
Updating eval setup: not_talk_threshold: 0.2 -> 0.3
Evaluation: wtag-dialog_val_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
missing_rate: 0.4352
redundant_rate: 0.1844
semantic_score: 0.6717
jaccard_index: 0.2099
precision: 0.3419
recall: 0.2367
F1: 0.2797
num_matched: 254.0000
num_mismatched: 352.0000
num_missed: 467.0000
num_redundant: 137.0000
Bleu_1: 0.3506
Bleu_1_w: 0.0736
Bleu_2: 0.2455
Bleu_2_w: 0.0515
Bleu_3: 0.1793
Bleu_3_w: 0.0376
Bleu_4: 0.1362
Bleu_4_w: 0.0286
CIDEr: 0.8458
CIDEr_w: 0.1775
METEOR: 0.2037
METEOR_w: 0.0428
Updating eval setup: not_talk_threshold: 0.3 -> 0.4
Evaluation: wtag-dialog_val_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
missing_rate: 0.4101
redundant_rate: 0.2383
semantic_score: 0.6682
jaccard_index: 0.2242
precision: 0.3430
recall: 0.2656
F1: 0.2994
num_matched: 285.0000
num_mismatched: 348.0000
num_missed: 440.0000
num_redundant: 198.0000
Bleu_1: 0.3571
Bleu_1_w: 0.0801
Bleu_2: 0.2481
Bleu_2_w: 0.0556
Bleu_3: 0.1788
Bleu_3_w: 0.0401
Bleu_4: 0.1315
Bleu_4_w: 0.0295
CIDEr: 0.8673
CIDEr_w: 0.1945
METEOR: 0.2046
METEOR_w: 0.0459
Updating eval setup: not_talk_threshold: 0.4 -> 0.5
Evaluation: wtag-dialog_val_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
missing_rate: 0.3849
redundant_rate: 0.3038
semantic_score: 0.6623
jaccard_index: 0.2035
precision: 0.2922
recall: 0.2582
F1: 0.2741
num_matched: 277.0000
num_mismatched: 383.0000
num_missed: 413.0000
num_redundant: 288.0000
Bleu_1: 0.3455
Bleu_1_w: 0.0703
Bleu_2: 0.2390
Bleu_2_w: 0.0486
Bleu_3: 0.1720
Bleu_3_w: 0.0350
Bleu_4: 0.1264
Bleu_4_w: 0.0257
CIDEr: 0.7777
CIDEr_w: 0.1583
METEOR: 0.1970
METEOR_w: 0.0401
Evaluation datasets:
* assembly101/dialog_val | num samples: 336
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.1
Evaluation: assembly101-dialog_val_L0_I5/stream/notalk0.1-maxlen_4k
Metrics:
missing_rate: 0.7505
redundant_rate: 0.0598
semantic_score: 0.6999
jaccard_index: 0.1639
precision: 0.6275
recall: 0.1665
F1: 0.2632
num_matched: 1385.0000
num_mismatched: 690.0000
num_missed: 6243.0000
num_redundant: 132.0000
Bleu_1: 0.4494
Bleu_1_w: 0.0737
Bleu_2: 0.3420
Bleu_2_w: 0.0560
Bleu_3: 0.2687
Bleu_3_w: 0.0440
Bleu_4: 0.2174
Bleu_4_w: 0.0356
CIDEr: 1.2016
CIDEr_w: 0.1969
METEOR: 0.2251
METEOR_w: 0.0369
Updating eval setup: not_talk_threshold: 0.1 -> 0.2
Evaluation: assembly101-dialog_val_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
missing_rate: 0.6718
redundant_rate: 0.1407
semantic_score: 0.6904
jaccard_index: 0.1954
precision: 0.5392
recall: 0.2059
F1: 0.2980
num_matched: 1713.0000
num_mismatched: 1017.0000
num_missed: 5588.0000
num_redundant: 447.0000
Bleu_1: 0.4380
Bleu_1_w: 0.0856
Bleu_2: 0.3286
Bleu_2_w: 0.0642
Bleu_3: 0.2547
Bleu_3_w: 0.0498
Bleu_4: 0.2038
Bleu_4_w: 0.0398
CIDEr: 1.1287
CIDEr_w: 0.2206
METEOR: 0.2136
METEOR_w: 0.0417
Updating eval setup: not_talk_threshold: 0.2 -> 0.3
Evaluation: assembly101-dialog_val_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
missing_rate: 0.5380
redundant_rate: 0.2288
semantic_score: 0.6756
jaccard_index: 0.2341
precision: 0.4443
recall: 0.2662
F1: 0.3329
num_matched: 2214.0000
num_mismatched: 1629.0000
num_missed: 4475.0000
num_redundant: 1140.0000
Bleu_1: 0.4198
Bleu_1_w: 0.0983
Bleu_2: 0.3065
Bleu_2_w: 0.0718
Bleu_3: 0.2322
Bleu_3_w: 0.0544
Bleu_4: 0.1824
Bleu_4_w: 0.0427
CIDEr: 0.9634
CIDEr_w: 0.2255
METEOR: 0.2017
METEOR_w: 0.0472
Updating eval setup: not_talk_threshold: 0.3 -> 0.4
Evaluation: assembly101-dialog_val_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
missing_rate: 0.4035
redundant_rate: 0.3546
semantic_score: 0.6672
jaccard_index: 0.2383
precision: 0.3424
recall: 0.3164
F1: 0.3289
num_matched: 2632.0000
num_mismatched: 2330.0000
num_missed: 3356.0000
num_redundant: 2726.0000
Bleu_1: 0.4067
Bleu_1_w: 0.0969
Bleu_2: 0.2925
Bleu_2_w: 0.0697
Bleu_3: 0.2197
Bleu_3_w: 0.0523
Bleu_4: 0.1718
Bleu_4_w: 0.0409
CIDEr: 0.8919
CIDEr_w: 0.2126
METEOR: 0.1935
METEOR_w: 0.0461
Updating eval setup: not_talk_threshold: 0.4 -> 0.5
Evaluation: assembly101-dialog_val_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
missing_rate: 0.2394
redundant_rate: 0.5560
semantic_score: 0.6602
jaccard_index: 0.2055
precision: 0.2342
recall: 0.4012
F1: 0.2957
num_matched: 3337.0000
num_mismatched: 2990.0000
num_missed: 1991.0000
num_redundant: 7922.0000
Bleu_1: 0.3875
Bleu_1_w: 0.0796
Bleu_2: 0.2718
Bleu_2_w: 0.0558
Bleu_3: 0.2006
Bleu_3_w: 0.0412
Bleu_4: 0.1551
Bleu_4_w: 0.0319
CIDEr: 0.7711
CIDEr_w: 0.1584
METEOR: 0.1843
METEOR_w: 0.0379
All Finished! Time: 117.22 minutes
Model: /fsx_0/user/imzyc/proact_exps/20240822-L4096-I5-ep4-NOSEP-nr0.1-klgmix-1s-lora-bs384-debug
Runs:
ego4d/dialog_val_L0_I5|stream|4k|0.05|summarize_and_drop
ego4d/dialog_val_L0_I5|stream|4k|0.1|summarize_and_drop
holoassist/dialog_val_L0_I5|stream|4k|0.1|summarize_and_drop
epickitchens/dialog_val_L0_I5|stream|4k|0.1|summarize_and_drop
egoexolearn/dialog_val_L0_I5|stream|4k|0.1|summarize_and_drop
wtag/dialog_val_L0_I5|stream|4k|0.1|summarize_and_drop
assembly101/dialog_val_L0_I5|stream|4k|0.1|summarize_and_drop
ego4d/dialog_val_L0_I5|stream|4k|0.2|summarize_and_drop
holoassist/dialog_val_L0_I5|stream|4k|0.2|summarize_and_drop
epickitchens/dialog_val_L0_I5|stream|4k|0.2|summarize_and_drop
egoexolearn/dialog_val_L0_I5|stream|4k|0.2|summarize_and_drop
wtag/dialog_val_L0_I5|stream|4k|0.2|summarize_and_drop
assembly101/dialog_val_L0_I5|stream|4k|0.2|summarize_and_drop
ego4d/dialog_val_L0_I5|stream|4k|0.3|summarize_and_drop
holoassist/dialog_val_L0_I5|stream|4k|0.3|summarize_and_drop
epickitchens/dialog_val_L0_I5|stream|4k|0.3|summarize_and_drop
egoexolearn/dialog_val_L0_I5|stream|4k|0.3|summarize_and_drop
wtag/dialog_val_L0_I5|stream|4k|0.3|summarize_and_drop
assembly101/dialog_val_L0_I5|stream|4k|0.3|summarize_and_drop
ego4d/dialog_val_L0_I5|stream|4k|0.4|summarize_and_drop
holoassist/dialog_val_L0_I5|stream|4k|0.4|summarize_and_drop
epickitchens/dialog_val_L0_I5|stream|4k|0.4|summarize_and_drop
egoexolearn/dialog_val_L0_I5|stream|4k|0.4|summarize_and_drop
wtag/dialog_val_L0_I5|stream|4k|0.4|summarize_and_drop
assembly101/dialog_val_L0_I5|stream|4k|0.4|summarize_and_drop
ego4d/dialog_val_L0_I5|stream|4k|0.5|summarize_and_drop
holoassist/dialog_val_L0_I5|stream|4k|0.5|summarize_and_drop
epickitchens/dialog_val_L0_I5|stream|4k|0.5|summarize_and_drop
egoexolearn/dialog_val_L0_I5|stream|4k|0.5|summarize_and_drop
wtag/dialog_val_L0_I5|stream|4k|0.5|summarize_and_drop
assembly101/dialog_val_L0_I5|stream|4k|0.5|summarize_and_drop
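Each run line above encodes five pipe-separated fields. A sketch of parsing one back into a dict; the field names are inferred from the setup dicts printed earlier in the log, not taken from the eval code:

```python
# Hedged sketch: parse one entry of the "Runs:" list into its five
# fields (dataset, runner type, max seq len, threshold, context method).
def parse_run(spec: str) -> dict:
    dataset, runner, max_len, threshold, context = spec.split("|")
    return {
        "dataset": dataset,
        "inference_runner_type": runner,
        "eval_max_seq_len_str": max_len,
        "not_talk_threshold": float(threshold),
        "context_handling_method": context,
    }

run = parse_run("ego4d/dialog_val_L0_I5|stream|4k|0.05|summarize_and_drop")
```

Such a parser could be used to regroup finished runs by dataset or threshold when aggregating the per-run metrics.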
sacct: error: _open_persist_conn: failed to open persistent connection to host:slurmdbd:6819: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused
submitit WARNING (2024-08-22 15:27:43,999) - Call #9 - Bypassing sacct error Command '['sacct', '-o', 'JobID,State,NodeList', '--parsable2', '-j', '14291']' returned non-zero exit status 1., status may be inaccurate.
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 144, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 133, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14293 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14293/14293_0_result.pkl
has not produced any output (state: CANCELLED by 649731)
Error stream produced:
----------------------------------------
Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 25%|██▌ | 1/4 [00:13<00:40, 13.58s/it]
Loading checkpoint shards: 50%|█████ | 2/4 [00:25<00:25, 12.90s/it]
Loading checkpoint shards: 75%|███████▌ | 3/4 [00:38<00:12, 12.57s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:40<00:00, 8.70s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:40<00:00, 10.24s/it]
Run predictions: 0%| | 0/2 [00:00<?, ?it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 18.21it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 18.17it/s]
Run predictions: 0%| | 0/2 [00:00<?, ?it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 19.59it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 19.57it/s]
Run predictions: 0%| | 0/2 [00:00<?, ?it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 7.21it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 7.20it/s]
Run predictions: 0%| | 0/2 [00:00<?, ?it/s]We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
Run predictions: 100%|██████████| 2/2 [03:32<00:00, 106.01s/it]
Run predictions: 100%|██████████| 2/2 [03:32<00:00, 106.02s/it]
Run predictions: 0%| | 0/2 [00:00<?, ?it/s]submitit WARNING (2024-08-22 15:50:23,530) - Bypassing signal SIGCONT
slurmstepd: error: *** JOB 14293 ON h100-st-p548xlarge-13 CANCELLED AT 2024-08-22T15:50:23 ***
slurmstepd: error: *** STEP 14293.0 ON h100-st-p548xlarge-13 CANCELLED AT 2024-08-22T15:50:23 ***
submitit WARNING (2024-08-22 15:50:23,532) - Bypassing signal SIGTERM
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 144, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 133, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14350 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14350/14350_0_result.pkl
has not produced any output (state: CANCELLED by 636977)
Error stream produced:
----------------------------------------
Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 25%|██▌ | 1/4 [00:14<00:42, 14.00s/it]
Loading checkpoint shards: 50%|█████ | 2/4 [00:26<00:26, 13.26s/it]
Loading checkpoint shards: 75%|███████▌ | 3/4 [00:38<00:12, 12.80s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:41<00:00, 8.88s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:41<00:00, 10.46s/it]
Run predictions: 100%|██████████| 3/3 [00:00<00:00, 40.05it/s]
Run predictions: 100%|██████████| 3/3 [00:00<00:00, 39.88it/s]
Run predictions: 100%|██████████| 3/3 [00:00<00:00, 38.99it/s]
Run predictions: 100%|██████████| 3/3 [00:00<00:00, 40.50it/s]
Run predictions: 100%|██████████| 3/3 [00:00<00:00, 39.57it/s]
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
Run predictions: 100%|██████████| 10/10 [03:57<00:00, 23.76s/it]
Run predictions: 100%|██████████| 10/10 [06:53<00:00, 41.35s/it]
Run predictions: 80%|████████ | 8/10 [08:50<02:33, 76.82s/it]
submitit WARNING (2024-08-22 18:25:01,469) - Bypassing signal SIGCONT
slurmstepd: error: *** JOB 14350 ON h100-st-p548xlarge-100 CANCELLED AT 2024-08-22T18:25:01 ***
slurmstepd: error: *** STEP 14350.0 ON h100-st-p548xlarge-100 CANCELLED AT 2024-08-22T18:25:01 ***
submitit WARNING (2024-08-22 18:25:01,476) - Bypassing signal SIGTERM
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 164, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 153, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14390 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14390/14390_0_result.pkl
has not produced any output (state: NODE_FAIL)
No output/error stream produced ! Check: /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14390/14390_0_log.out
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 164, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 153, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14391 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14391/14391_0_result.pkl
has not produced any output (state: NODE_FAIL)
No output/error stream produced ! Check: /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14391/14391_0_log.out
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 164, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 153, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14393 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14393/14393_0_result.pkl
has not produced any output (state: NODE_FAIL)
Error stream produced:
----------------------------------------
slurmstepd: error: *** JOB 14393 ON h100-st-p548xlarge-129 CANCELLED AT 2024-08-22T20:08:33 DUE TO NODE FAILURE, SEE SLURMCTLD LOG FOR DETAILS ***
sbatch: error: Batch job submission failed: Invalid node name specified
subprocess.CalledProcessError: Command '['sbatch', '/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/submission_file_d6a19adc4ffa4a628e5fafb456cb8832.sh']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 170, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 158, in main
job = executor.submit(run_eval, eval_args, "slurm_inference", verbose=True)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 734, in submit
job = self._internal_process_submissions([ds])[0]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/auto/auto.py", line 218, in _internal_process_submissions
return self._executor._internal_process_submissions(delayed_submissions)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/slurm/slurm.py", line 317, in _internal_process_submissions
return super()._internal_process_submissions(delayed_submissions)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 893, in _internal_process_submissions
job = self._submit_command(self._submitit_command_str)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 934, in _submit_command
output = utils.CommandFunction(command_list, verbose=False)() # explicit errors
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/utils.py", line 354, in __call__
raise FailedJobError(stderr) from subprocess_error
submitit.core.utils.FailedJobError: sbatch: error: Batch job submission failed: Invalid node name specified
sbatch: error: Batch job submission failed: Invalid node name specified
subprocess.CalledProcessError: Command '['sbatch', '/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/submission_file_b1632bebd8ee497f9f186b483d3918b7.sh']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 170, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 158, in main
job = executor.submit(run_eval, eval_args, "slurm_inference", verbose=True)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 734, in submit
job = self._internal_process_submissions([ds])[0]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/auto/auto.py", line 218, in _internal_process_submissions
return self._executor._internal_process_submissions(delayed_submissions)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/slurm/slurm.py", line 317, in _internal_process_submissions
return super()._internal_process_submissions(delayed_submissions)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 893, in _internal_process_submissions
job = self._submit_command(self._submitit_command_str)
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 934, in _submit_command
output = utils.CommandFunction(command_list, verbose=False)() # explicit errors
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/utils.py", line 354, in __call__
raise FailedJobError(stderr) from subprocess_error
submitit.core.utils.FailedJobError: sbatch: error: Batch job submission failed: Invalid node name specified
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 170, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 159, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14416 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14416/14416_0_result.pkl
has not produced any output (state: NODE_FAIL)
Error stream produced:
----------------------------------------
slurmstepd: error: *** JOB 14416 ON h100-st-p548xlarge-2 CANCELLED AT 2024-08-22T21:17:45 DUE TO NODE FAILURE, SEE SLURMCTLD LOG FOR DETAILS ***
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 170, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 159, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14419 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14419/14419_0_result.pkl
has not produced any output (state: NODE_FAIL)
No output/error stream produced ! Check: /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14419/14419_0_log.out
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 167, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 156, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14650 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14650/14650_0_result.pkl
has not produced any output (state: NODE_FAIL)
Error stream produced:
----------------------------------------
slurmstepd: error: *** JOB 14650 ON h100-st-p548xlarge-2 CANCELLED AT 2024-08-23T04:38:10 DUE TO NODE FAILURE, SEE SLURMCTLD LOG FOR DETAILS ***
slurmstepd: error: *** JOB 14650 ON h100-st-p548xlarge-2 CANCELLED AT 2024-08-23T04:48:27 DUE TO NODE FAILURE, SEE SLURMCTLD LOG FOR DETAILS ***
srun: error: slurm_receive_msgs: [[ip-10-200-21-218.us-east-2.compute.internal]:41498] failed: Socket timed out on send/recv operation
srun: error: Task launch for StepId=14650.0 failed on node h100-st-p548xlarge-130: Socket timed out on send/recv operation
srun: error: Application launch failed: Socket timed out on send/recv operation
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
Loading checkpoint shards: 100%|██████████| 4/4 [00:41<00:00, 10.48s/it]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 39.79it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 40.66it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 40.75it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 41.06it/s]
Run predictions: 100%|██████████| 2/2 [00:00<00:00, 38.21it/s]
Run predictions: 100%|██████████| 5/5 [00:00<00:00, 54.16it/s]
Run predictions: 100%|██████████| 5/5 [00:00<00:00, 53.90it/s]
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
srun: error: Timed out waiting for job step to complete
slurmstepd: error: *** STEP 14650.0 ON h100-st-p548xlarge-2 FAILED (non-zero exit code or other failure mode) ***
submitit WARNING (2024-08-23 04:54:52,019) - Bypassing signal SIGCONT
submitit WARNING (2024-08-23 04:54:52,020) - Bypassing signal SIGTERM
slurmstepd: error: Failed to send MESSAGE_TASK_EXIT: Connection refused
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 165, in <module>
main(eval_args, slurm_args)
File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/mmassist/eval/eval.py", line 154, in main
job.results() # wait for the job to finish
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in results
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 287, in <listcomp>
return [tp.cast(R, sub_job.result()) for sub_job in self._sub_jobs]
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 289, in results
outcome, result = self._get_outcome_and_result()
File "/data/home/imzyc/miniconda3/envs/mm/lib/python3.10/site-packages/submitit/core/core.py", line 384, in _get_outcome_and_result
raise utils.UncompletedJobError("\n".join(message))
submitit.core.utils.UncompletedJobError: Job 14932 (task: 0) with path /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14932/14932_0_result.pkl
has not produced any output (state: NODE_FAIL)
No output/error stream produced ! Check: /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/imzyc/project/proactive-assist/slurm_logs/14932/14932_0_log.out
Model: /fsx_0/user/imzyc/proact_exps/20240822-L4096-I5-ep4-NOSEP-nr0.1-klgmix-1s-lora-bs384-debug
{'assembly101/dialog-klg_test_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3}]},
'ego4d/dialog-klg_test_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3}]},
'egoexolearn/dialog-klg_test_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.4}]},
'epickitchens/dialog-klg_test_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.2}]},
'holoassist/dialog-klg_test_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.3}]},
'wtag/dialog-klg_test_L0_I5': {'stream': [{'context_handling_method': 'summarize_and_drop',
'eval_max_seq_len': 4096,
'eval_max_seq_len_str': '4k',
'inference_runner_type': 'stream',
'not_talk_threshold': 0.5}]}}
Evaluation datasets:
* ego4d/dialog-klg_test | num samples: 99
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.3
Evaluation: ego4d-dialog-klg_test_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
jaccard_index: 0.2215
missing_rate: 0.4983
redundant_rate: 0.5262
semantic_score: 0.6923
time_diff: 1.5123
precision: 0.3258
recall: 0.3450
F1: 0.3351
num_matched: 1741.0000
num_mismatched: 791.0000
num_missed: 2515.0000
num_redundant: 2812.0000
Bleu_1: 0.3835
Bleu_1_w: 0.0850
Bleu_2: 0.2653
Bleu_2_w: 0.0588
Bleu_3: 0.1941
Bleu_3_w: 0.0430
Bleu_4: 0.1485
Bleu_4_w: 0.0329
CIDEr: 0.8355
CIDEr_w: 0.1851
METEOR: 0.1879
METEOR_w: 0.0416
Evaluation datasets:
* epickitchens/dialog-klg_test | num samples: 150
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.2
Evaluation: epickitchens-dialog-klg_test_L0_I5/stream/notalk0.2-maxlen_4k
Metrics:
jaccard_index: 0.2144
missing_rate: 0.5547
redundant_rate: 0.3526
semantic_score: 0.6676
time_diff: 0.5445
precision: 0.3873
recall: 0.2664
F1: 0.3157
num_matched: 1607.0000
num_mismatched: 1079.0000
num_missed: 3346.0000
num_redundant: 1463.0000
Bleu_1: 0.3977
Bleu_1_w: 0.0853
Bleu_2: 0.2734
Bleu_2_w: 0.0586
Bleu_3: 0.2000
Bleu_3_w: 0.0429
Bleu_4: 0.1550
Bleu_4_w: 0.0332
CIDEr: 1.0101
CIDEr_w: 0.2166
METEOR: 0.1894
METEOR_w: 0.0406
Evaluation datasets:
* holoassist/dialog-klg_test | num samples: 291
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.3
Evaluation: holoassist-dialog-klg_test_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
jaccard_index: 0.2842
missing_rate: 0.5910
redundant_rate: 0.0834
semantic_score: 0.7066
time_diff: 0.2819
precision: 0.6606
recall: 0.2948
F1: 0.4076
num_matched: 4105.0000
num_mismatched: 1591.0000
num_missed: 8230.0000
num_redundant: 518.0000
Bleu_1: 0.4468
Bleu_1_w: 0.1270
Bleu_2: 0.3305
Bleu_2_w: 0.0939
Bleu_3: 0.2574
Bleu_3_w: 0.0731
Bleu_4: 0.2054
Bleu_4_w: 0.0584
CIDEr: 1.3007
CIDEr_w: 0.3696
METEOR: 0.2151
METEOR_w: 0.0611
Evaluation datasets:
* egoexolearn/dialog-klg_test | num samples: 123
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.4
Evaluation: egoexolearn-dialog-klg_test_L0_I5/stream/notalk0.4-maxlen_4k
Metrics:
jaccard_index: 0.1634
missing_rate: 0.6318
redundant_rate: 0.5117
semantic_score: 0.6693
time_diff: 0.6571
precision: 0.3003
recall: 0.2264
F1: 0.2582
num_matched: 2730.0000
num_mismatched: 1710.0000
num_missed: 7618.0000
num_redundant: 4652.0000
Bleu_1: 0.3995
Bleu_1_w: 0.0653
Bleu_2: 0.2734
Bleu_2_w: 0.0447
Bleu_3: 0.1992
Bleu_3_w: 0.0325
Bleu_4: 0.1510
Bleu_4_w: 0.0247
CIDEr: 0.8387
CIDEr_w: 0.1370
METEOR: 0.1802
METEOR_w: 0.0294
Evaluation datasets:
* assembly101/dialog-klg_test | num samples: 336
Updating eval setup: inference_runner_type: None -> stream
Updating eval setup: not_talk_threshold: 0.5 -> 0.3
Evaluation: assembly101-dialog-klg_test_L0_I5/stream/notalk0.3-maxlen_4k
Metrics:
jaccard_index: 0.2835
missing_rate: 0.4738
redundant_rate: 0.2770
semantic_score: 0.7053
time_diff: 0.6322
precision: 0.4681
recall: 0.3407
F1: 0.3944
num_matched: 2814.0000
num_mismatched: 1532.0000
num_missed: 3914.0000
num_redundant: 1665.0000
Bleu_1: 0.4403
Bleu_1_w: 0.1249
Bleu_2: 0.3314
Bleu_2_w: 0.0939
Bleu_3: 0.2589
Bleu_3_w: 0.0734
Bleu_4: 0.2095
Bleu_4_w: 0.0594
CIDEr: 1.1329
CIDEr_w: 0.3212
METEOR: 0.2114
METEOR_w: 0.0599
Evaluation datasets:
* wtag/dialog-klg_test | num samples: 21
Updating eval setup: inference_runner_type: None -> stream
Evaluation: wtag-dialog-klg_test_L0_I5/stream/notalk0.5-maxlen_4k
Metrics:
jaccard_index: 0.2215
missing_rate: 0.3536
redundant_rate: 0.3931
semantic_score: 0.6719
time_diff: 1.4128
precision: 0.2950
recall: 0.3142
F1: 0.3043
num_matched: 367.0000
num_mismatched: 388.0000
num_missed: 413.0000
num_redundant: 489.0000
Bleu_1: 0.3966
Bleu_1_w: 0.0879
Bleu_2: 0.2880
Bleu_2_w: 0.0638
Bleu_3: 0.2202
Bleu_3_w: 0.0488
Bleu_4: 0.1728
Bleu_4_w: 0.0383
CIDEr: 1.2909
CIDEr_w: 0.2859
METEOR: 0.2019
METEOR_w: 0.0447
All Finished! Time: 24.86 minutes
Model: /fsx_0/user/imzyc/proact_exps/20240822-L4096-I5-ep4-NOSEP-nr0.1-klgmix-1s-lora-bs384-debug
Runs:
ego4d/dialog-klg_test_L0_I5|stream|4k|0.3|summarize_and_drop
epickitchens/dialog-klg_test_L0_I5|stream|4k|0.2|summarize_and_drop
holoassist/dialog-klg_test_L0_I5|stream|4k|0.3|summarize_and_drop
egoexolearn/dialog-klg_test_L0_I5|stream|4k|0.4|summarize_and_drop
assembly101/dialog-klg_test_L0_I5|stream|4k|0.3|summarize_and_drop
wtag/dialog-klg_test_L0_I5|stream|4k|0.5|summarize_and_drop
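Note: the repeated `UncompletedJobError` failures above all end in `NODE_FAIL` or a failed `sbatch` submission, i.e. cluster-level flakiness rather than an error in the evaluation code itself, and each was resolved by manually resubmitting. A minimal retry wrapper around the submission call could automate that. This is a hypothetical sketch (`submit_with_retries` is not a submitit API; all names here are assumptions), shown with plain exceptions so it stands alone:

```python
def submit_with_retries(submit_fn, max_retries=3, retryable=(RuntimeError,)):
    """Call submit_fn(), retrying up to max_retries times on retryable errors.

    In a real setup, submit_fn might wrap executor.submit(...).results() and
    retryable might include submitit.core.utils.UncompletedJobError (raised on
    NODE_FAIL) and FailedJobError (raised when sbatch itself fails).
    """
    last_err = None
    for _attempt in range(max_retries):
        try:
            return submit_fn()
        except retryable as err:
            last_err = err  # e.g. NODE_FAIL; fall through and resubmit
    raise last_err  # all attempts exhausted; surface the last failure
```

Under these assumptions, the `job.results()  # wait for the job to finish` call sites in `eval.py` could be wrapped so a node failure triggers a fresh `sbatch` submission instead of aborting the whole sweep.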