forked from tylin/coco-caption
Description
I'm currently using the pycocoevalcap package to evaluate my image captioning model, and I've noticed that the CIDEr score is consistently 0 for all of my model's generated captions, while all the other metrics (BLEU, METEOR, ROUGE, and SPICE) look normal.
I have also tried running the evaluation on each image separately, but the result is the same: the CIDEr score is always 0.
I'm not sure what could be causing this, since the other metrics appear to be working correctly. Can anyone help me figure out why the CIDEr score is not being computed correctly?
Thanks in advance for your help!