@@ -1,12 +1,30 @@
-# OpenPMC-VL
+# Open-PMC
 
 ----------------------------------------------------------------------------------------
 
 [](https://github.com/VectorInstitute/pmc-data-extraction/actions/workflows/code_checks.yml)
 [](https://github.com/VectorInstitute/pmc-data-extraction/actions/workflows/integration_tests.yml)
 [](https://github.com/VectorInstitute/pmc-data-extraction/blob/main/LICENSE.md)
 
-A toolkit to download, augment, and benchmark OpenPMC-VL; a large dataset of image-text pairs extracted from open-access scientific articles on PubMedCentral.
+<div align="center">
+  <img src="https://github.com/VectorInstitute/pmc-data-extraction/blob/0a969136344a07267bb558d01f3fe76b36b93e1a/media/open-pmc-pipeline.png?raw=true"
+       alt="Open-PMC Pipeline"
+       width="1000" />
+</div>
+
+A toolkit to download, augment, and benchmark Open-PMC, a large dataset of image-text pairs extracted from open-access scientific articles on PubMed Central.
+
+For more details, see the following resources:
+- **arXiv Paper:** [http://arxiv.org/abs/2503.14377](http://arxiv.org/abs/2503.14377)
+- **Dataset:** [https://huggingface.co/datasets/vector-institute/open-pmc](https://huggingface.co/datasets/vector-institute/open-pmc)
+- **Model Checkpoint:** [https://huggingface.co/vector-institute/open-pmc-clip](https://huggingface.co/vector-institute/open-pmc-clip)
+
+## Table of Contents
+
+1. [Installing Dependencies](#installing-dependencies)
+2. [Download and Parse Image-Caption Pairs](#download-and-parse-image-caption-pairs-from-pubmed-articles)
+3. [Run Benchmarking Experiments](#run-benchmarking-experiments)
+4. [Citation](#citation)
 
 ## Installing dependencies
 
@@ -133,18 +151,17 @@ mmlearn_run \
   dataloader.test.batch_size=64 \
   resume_from_checkpoint="path/to/model/checkpoint"
 ```
-For more comprehensive examples of shell scripts that run various experiments with OpenPMC-VL, refer to `openpmcvl/experiment/scripts`.
+For more comprehensive examples of shell scripts that run various experiments with Open-PMC, refer to `openpmcvl/experiment/scripts`.
 For more information about `mmlearn`, please refer to the package's [official codebase](https://github.com/VectorInstitute/mmlearn).
 
 
-
-## References
-<a id="1">[1]</a> PMC-OA paper:
-```latex
-@article{lin2023pmc,
-  title={PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents},
-  author={Lin, Weixiong and Zhao, Ziheng and Zhang, Xiaoman and Wu, Chaoyi and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
-  journal={arXiv preprint arXiv:2303.07240},
-  year={2023}
+## Citation
+If you find the code useful for your research, please consider citing:
+```bibtex
+@article{baghbanzadeh2025advancing,
+  title={Advancing Medical Representation Learning Through High-Quality Data},
+  author={Baghbanzadeh, Negin and Fallahpour, Adibvafa and Parhizkar, Yasaman and Ogidi, Franklin and Roy, Shuvendu and Ashkezari, Sajad and Khazaie, Vahid Reza and Colacci, Michael and Etemad, Ali and Afkanpour, Arash and Dolatabadi, Elham},
+  journal={arXiv preprint arXiv:2503.14377},
+  year={2025}
 }
 ```