Commit ad63453

Add example script and update pip installation method

1 parent 14f4a1a

5 files changed: +44 -6 lines changed

README.md

Lines changed: 6 additions & 4 deletions
@@ -7,19 +7,21 @@ Evaluation codes for MS COCO caption generation.
 This repository provides Python 3 support for the caption evaluation metrics used for the MS COCO dataset.
 
 The code is derived from the original repository that supports Python 2.7: https://github.com/tylin/coco-caption.
-Caption evaluation depends on the COCO API that natively supports Python 3 (see Requirements).
+Caption evaluation depends on the COCO API that natively supports Python 3.
 
 ## Requirements ##
 - Java 1.8.0
 - Python 3
-- pycocotools (COCO Python API): https://github.com/cocodataset/cocoapi
 
 ## Installation ##
-To install pycocoevalcap and the pycocotools dependency, run:
+To install pycocoevalcap and the pycocotools dependency (https://github.com/cocodataset/cocoapi), run:
 ```
-pip install git+https://github.com/salaniz/pycocoevalcap
+pip install pycocoevalcap
 ```
 
+## Usage ##
+See the example script: [example/coco_eval_example.py](example/coco_eval_example.py)
+
 ## Files ##
 ./
 - eval.py: The file includes the COCOEvalCap class that can be used to evaluate results on COCO.
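The installation change above swaps the git-based install for a PyPI release. A minimal sketch of both routes, assuming the package name on PyPI matches the diff (the git URL still points at the development source):

```
# new method: install the released package from PyPI
pip install pycocoevalcap

# previous method: install the latest source directly from GitHub
pip install git+https://github.com/salaniz/pycocoevalcap
```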

example/captions_val2014.json

Lines changed: 1 addition & 0 deletions
Large diffs are not rendered by default.

example/captions_val2014_fakecap_results.json

Lines changed: 1 addition & 0 deletions
Large diffs are not rendered by default.

example/coco_eval_example.py

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
+from pycocotools.coco import COCO
+from pycocoevalcap.eval import COCOEvalCap
+
+annotation_file = 'captions_val2014.json'
+results_file = 'captions_val2014_fakecap_results.json'
+
+# create coco object and coco_result object
+coco = COCO(annotation_file)
+coco_result = coco.loadRes(results_file)
+
+# create coco_eval object by taking coco and coco_result
+coco_eval = COCOEvalCap(coco, coco_result)
+
+# evaluate on a subset of images by setting
+# coco_eval.params['image_id'] = coco_result.getImgIds()
+# please remove this line when evaluating the full validation set
+coco_eval.params['image_id'] = coco_result.getImgIds()
+
+# evaluate results
+# SPICE will take a few minutes the first time, but speeds up due to caching
+coco_eval.evaluate()
+
+# print output evaluation scores
+for metric, score in coco_eval.eval.items():
+    print(f'{metric}: {score:.3f}')
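For reference, the script reads the aggregate scores from coco_eval.eval after evaluate() returns. The upstream tylin/coco-caption code this port derives from also records per-image results in an imgToEval dict; assuming that attribute is preserved in this port (an assumption, not confirmed by this diff), a minimal sketch of inspecting it:

```
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

coco = COCO('captions_val2014.json')
coco_result = coco.loadRes('captions_val2014_fakecap_results.json')

coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.params['image_id'] = coco_result.getImgIds()
coco_eval.evaluate()

# imgToEval maps each image id to a dict of per-metric scores; this mirrors
# the upstream implementation and is assumed rather than shown in this commit
for img_id, scores in list(coco_eval.imgToEval.items())[:3]:
    print(img_id, scores)
```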

setup.py

Lines changed: 11 additions & 2 deletions
@@ -3,11 +3,20 @@
 # Prepend pycocoevalcap to package names
 package_names = ['pycocoevalcap.'+p for p in find_namespace_packages()]
 
+with open("README.md", "r") as fh:
+    readme = fh.read()
+
 setup(
     name='pycocoevalcap',
-    version=1.1,
+    version=1.2,
+    maintainer='salaniz',
+    description="MS-COCO Caption Evaluation for Python 3",
+    long_description=readme,
+    long_description_content_type="text/markdown",
+    url="https://github.com/salaniz/pycocoevalcap",
     packages=['pycocoevalcap']+package_names,
     package_dir={'pycocoevalcap': '.'},
     package_data={'': ['*.jar', '*.gz']},
-    install_requires=['pycocotools>=2.0.0']
+    install_requires=['pycocotools>=2.0.2'],
+    python_requires='>=3'
 )
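With the new metadata in place, one possible local sanity check before publishing (standard pip commands, not part of this commit):

```
# from the repository root, install the package as declared by setup.py
pip install .

# confirm that the new version and metadata were picked up
pip show pycocoevalcap

# verify the main entry point imports cleanly
python -c "from pycocoevalcap.eval import COCOEvalCap; print('import ok')"
```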
