Update README.md #34

Merged 9 commits on Mar 27, 2025
43 changes: 38 additions & 5 deletions README.md
# Open-PMC

---

[![code checks](https://github.com/VectorInstitute/aieng-template/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/pmc-data-extraction/actions/workflows/code_checks.yml)
[![integration tests](https://github.com/VectorInstitute/aieng-template/actions/workflows/integration_tests.yml/badge.svg)](https://github.com/VectorInstitute/pmc-data-extraction/actions/workflows/integration_tests.yml)
[![license](https://img.shields.io/github/license/VectorInstitute/aieng-template.svg)](https://github.com/VectorInstitute/pmc-data-extraction/blob/main/LICENSE.md)

<div align="center">
<img src="https://github.com/VectorInstitute/pmc-data-extraction/blob/0a969136344a07267bb558d01f3fe76b36b93e1a/media/open-pmc-pipeline.png?raw=true"
alt="Open-PMC Pipeline"
width="1000" />
</div>

A toolkit to download, augment, and benchmark Open-PMC, a large dataset of image-text pairs extracted from open-access scientific articles on PubMed Central.
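Each entry in the dataset pairs an extracted figure with its caption. As a rough illustration, a single record might be shaped like the sketch below; the field names and the validity check are assumptions for exposition, not the dataset's actual schema:

```python
# Illustrative only: a hypothetical image-caption record shaped like the
# pairs Open-PMC extracts from PubMed Central articles. Field names here
# are assumptions, not the dataset's real schema.
record = {
    "pmc_id": "PMC1234567",          # source article identifier
    "image": "PMC1234567_fig2.jpg",  # extracted figure file
    "caption": "Axial CT scan showing a 3 cm lesion in the right lobe.",
    "license": "CC BY",              # open-access license of the article
}

def is_valid_pair(rec):
    """A pair is usable only if both an image and a non-empty caption exist."""
    return bool(rec.get("image")) and bool(rec.get("caption", "").strip())

print(is_valid_pair(record))  # True for the record above
```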

For more details, see the following resources:
- **arXiv Paper:** [Advancing Medical Representation Learning Through High-Quality Data](https://arxiv.org/abs/2503.14377)
- **Dataset on Hugging Face:** [Open_PMC Dataset on Hugging Face](https://huggingface.co/datasets/vector-institute/open-pmc)
- **Model Checkpoint on Hugging Face:** [Open_PMC_CLIP Model Checkpoint on Hugging Face](https://huggingface.co/vector-institute/open-pmc-clip)

## Table of Contents

1. [Hugging Face Dataset and Checkpoint](#hugging-face-dataset-and-checkpoint)
2. [Installing Dependencies](#installing-dependencies)
3. [Download and Parse Image-Caption Pairs](#download-and-parse-image-caption-pairs-from-pubmed-articles)
4. [Run Benchmarking Experiments](#run-benchmarking-experiments)
5. [References](#references)

## Hugging Face Dataset and Checkpoint

- **Dataset:** [Open_PMC Dataset on Hugging Face](https://huggingface.co/datasets/vector-institute/open-pmc)
- **Checkpoint:** [Open_PMC_CLIP Model Checkpoint on Hugging Face](https://huggingface.co/vector-institute/open-pmc-clip)

## Installing Dependencies

```bash
mmlearn_run \
dataloader.test.batch_size=64 \
resume_from_checkpoint="path/to/model/checkpoint"
```
For more comprehensive examples of shell scripts that run various experiments with Open-PMC, refer to `openpmcvl/experiment/scripts`.
For more information about `mmlearn`, please refer to the package's [official codebase](https://github.com/VectorInstitute/mmlearn).


## Citation
If you find the code useful for your research, please consider citing:
```bibtex
@article{baghbanzadeh2025advancing,
title={Advancing Medical Representation Learning Through High-Quality Data},
author={Baghbanzadeh, Negin and Fallahpour, Adibvafa and Parhizkar, Yasaman and Ogidi, Franklin and Roy, Shuvendu and Ashkezari, Sajad and Khazaie, Vahid Reza and Colacci, Michael and Etemad, Ali and Afkanpour, Arash and others},
journal={arXiv preprint arXiv:2503.14377},
year={2025}
}
```

## References
<a id="1">[1]</a> PMC-OA paper:
```bibtex
@article{lin2023pmc,
title={PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents},
author={Lin, Weixiong and Zhao, Ziheng and Zhang, Xiaoman and Wu, Chaoyi and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
journal={arXiv preprint arXiv:2303.07240},
year={2023}
}
```