Parquet files and codebook are available on Zenodo: 10.5281/zenodo.7856523
Install uv.
Clone this repository:
git clone https://github.com/MDverse/mdws.git

Move to the new directory:
cd mdws

Create a virtual environment:
uv sync

Have a look at the notes regarding Zenodo and its API.
Create a token here: https://zenodo.org/account/settings/applications/tokens/new/
and store it in the file .env:
ZENODO_TOKEN=YOUR-ZENODO-TOKEN
This file is automatically ignored by git and won't be published on GitHub.
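To check that the token is picked up correctly, you can query the Zenodo API with a short snippet like the one below (a sketch only, not part of the repository's scripts; it assumes the python-dotenv and requests packages are available):

```python
# Minimal sketch: load ZENODO_TOKEN from .env and query the Zenodo search API.
# Assumes the python-dotenv and requests packages are installed.
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads ZENODO_TOKEN from the .env file into the environment
token = os.environ["ZENODO_TOKEN"]

# Search a few records to verify that the token is accepted by the API.
response = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "molecular dynamics", "size": 3, "access_token": token},
)
response.raise_for_status()
print(response.json()["hits"]["total"], "records found")
```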
Scrape Zenodo for MD-related datasets and files:
uv run scripts/scrap_zenodo.py --query params/query.yml --output data

Scrape Zenodo with a small query, for development or demo purposes:
uv run scripts/scrap_zenodo.py --query params/query_dev.yml --output tmp

The scraping takes some time (about an hour). A mechanism has been set up to avoid overloading the Zenodo API. Be patient.
Eventually, the scraper will produce three files: zenodo_datasets.tsv, zenodo_datasets_text.tsv and zenodo_files.tsv.
Note that "false positives" have been removed in the scraping proccess.
Have a look at the notes regarding Figshare and its API.
Scrape FigShare for MD-related datasets and files:
uv run scripts/scrap_figshare.py --query params/query.yml --output data

Scrape FigShare with a small query, for development or demo purposes:
uv run scripts/scrap_figshare.py --query params/query_dev.yml --output tmp

The scraping takes some time (about 2 hours). Be patient.
Eventually, the scraper will produce three files: figshare_datasets.tsv, figshare_datasets_text.tsv and figshare_files.tsv.
Have a look at the notes regarding OSF and its API.
Create a token here: https://osf.io/settings/tokens (select the osf.full_read scope)
and store it in the file .env:
OSF_TOKEN=<YOUR OSF TOKEN HERE>
This file is ignored by git and won't be published on GitHub.
Scrape OSF for MD-related datasets and files:
uv run scripts/scrap_osf.py --query params/query.yml --output data

Scrape OSF with a small query, for development or demo purposes:
uv run scripts/scrap_osf.py --query params/query_dev.yml --output tmp

The scraping takes some time (~ 30 min). Be patient.
Eventually, the scraper will produce three files: osf_datasets.tsv, osf_datasets_text.tsv and osf_files.tsv.
To download Gromacs mdp and gro files, use the following commands:
uv run scripts/download_files.py --input data/zenodo_files.tsv \
--storage data/downloads/ --type mdp --type gro --withzipfiles

uv run scripts/download_files.py --input data/figshare_files.tsv \
--storage data/downloads/ --type mdp --type gro --withzipfiles

uv run scripts/download_files.py --input data/osf_files.tsv \
--storage data/downloads/ --type mdp --type gro --withzipfiles

The --withzipfiles option also retrieves files packaged in zip archives: the script first downloads the entire zip archive, then extracts the mdp and gro files from it.
This step will take a couple of hours to complete. Depending on the stability of your internet connection and the availability of the data repository servers, the download might fail for a few files. Re-run the previous commands to resume the download: files already retrieved will not be downloaded again.
Expect about 640 GB of data with the --withzipfiles option (roughly 8,800 gro files and 9,500 mdp files). These numbers are indicative only and may vary depending on when you run this command (data repositories tend to get bigger and bigger).
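For reference, the idea behind --withzipfiles boils down to something like the sketch below (illustrative only, not the actual implementation of download_files.py; the helper name and paths are made up):

```python
# Sketch of the --withzipfiles idea: after downloading a zip archive,
# extract only the mdp and gro files it contains. Illustrative only.
import zipfile
from pathlib import Path


def extract_md_files(zip_path: Path, storage: Path, extensions=(".mdp", ".gro")) -> None:
    """Extract only files with the given extensions from a zip archive."""
    with zipfile.ZipFile(zip_path) as archive:
        for member in archive.namelist():
            if member.lower().endswith(extensions):
                archive.extract(member, path=storage)


# Example (hypothetical paths):
# extract_md_files(Path("data/downloads/archive.zip"), Path("data/downloads"))
```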
Parse the downloaded Gromacs mdp files:

uv run scripts/parse_mdp_files.py \
--input data/zenodo_files.tsv --input data/figshare_files.tsv --input data/osf_files.tsv \
--storage data/downloads --output data

This step will take a couple of seconds to run. Results will be saved in data/gromacs_mdp_files_info.tsv.
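Gromacs mdp files are plain-text "parameter = value" listings in which ";" starts a comment, so the parsing essentially amounts to something like this sketch (not the actual implementation of parse_mdp_files.py):

```python
# Sketch of mdp parsing: "parameter = value" lines, ";" starts a comment.
# Not the actual implementation of parse_mdp_files.py.
def parse_mdp(path: str) -> dict[str, str]:
    parameters = {}
    with open(path) as mdp_file:
        for line in mdp_file:
            line = line.split(";", 1)[0].strip()  # drop comments and whitespace
            if "=" in line:
                key, value = line.split("=", 1)
                parameters[key.strip()] = value.strip()
    return parameters


# Example (hypothetical file): parse_mdp("md.mdp").get("dt")
```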
A rough molecular composition is deduced from the file params/residue_names.yml, which contains a partial list of residue names organized into the categories protein, lipid, nucleic, glucid and water & ion.
Parse the downloaded Gromacs gro files:

uv run scripts/parse_gro_files.py \
--input data/zenodo_files.tsv --input data/figshare_files.tsv --input data/osf_files.tsv \
--storage data/downloads --residues params/residue_names.yml --output data

This step will take about 4 hours to run. Results will be saved in data/gromacs_gro_files_info.tsv.
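As an illustration, the categories defined in params/residue_names.yml could be used along these lines to derive a rough composition (a sketch only; the exact layout of the YAML file and the logic of parse_gro_files.py may differ):

```python
# Sketch: classify residue names found in a gro file into the categories
# defined in params/residue_names.yml (protein, lipid, nucleic, glucid,
# water & ion). The YAML layout assumed here is category -> list of residues.
from collections import Counter

import yaml  # provided by the PyYAML package

with open("params/residue_names.yml") as yaml_file:
    categories = yaml.safe_load(yaml_file)

# Map each residue name to its category for quick lookup.
residue_to_category = {
    residue: category
    for category, residues in categories.items()
    for residue in residues
}


def composition(residue_names: list[str]) -> Counter:
    """Count residues per category; unknown residues fall into 'other'."""
    return Counter(residue_to_category.get(name, "other") for name in residue_names)
```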
Parquet is a column-oriented storage format supported by many data analysis tools and efficient for large datasets. Export the data to parquet files:
uv run scripts/export_to_parquet.py

This step will take a couple of seconds to run. Results will be saved in:
data/datasets.parquet
data/files.parquet
data/gromacs_gro_files.parquet
data/gromacs_mdp_files.parquet
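These parquet files (also published on Zenodo, see the DOI above) can be read back with pandas, for instance (assuming pandas and a parquet engine such as pyarrow are installed):

```python
# Read the exported parquet files back for analysis (requires pandas plus
# a parquet engine such as pyarrow).
import pandas as pd

datasets = pd.read_parquet("data/datasets.parquet")
files = pd.read_parquet("data/files.parquet")

print(len(datasets), "datasets,", len(files), "files")
```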
You can run all the commands above with the run_all.sh script:

bash run_all.sh

Warning

Be sure you have sufficient time, bandwidth and disk space to run this command.
For the owner of the Zenodo record only. Zenodo token requires deposit:actions and deposit:write scopes.
Update metadata:
uv run scripts/upload_datasets_to_zenodo.py --record 7856524 --metadata params/zenodo_metadata.json

Update files:
uv run scripts/upload_datasets_to_zenodo.py --record 7856524 \
--file data/datasets.parquet \
--file data/files.parquet \
--file data/gromacs_gro_files.parquet \
--file data/gromacs_mdp_files.parquet \
--file docs/data_model_parquet.md

Note
The latest version of the dataset is available with the DOI 10.5281/zenodo.7856523.
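Under the hood, updating the metadata of a deposition amounts to a call to the Zenodo REST API similar to the sketch below (illustrative only, not the actual code of upload_datasets_to_zenodo.py; it assumes params/zenodo_metadata.json contains the metadata fields themselves):

```python
# Sketch: update the metadata of an existing Zenodo deposition through the
# Zenodo REST API. Not the actual implementation of upload_datasets_to_zenodo.py.
import json
import os

import requests
from dotenv import load_dotenv

load_dotenv()
token = os.environ["ZENODO_TOKEN"]
record_id = 7856524

# Assumption: the JSON file holds the metadata fields themselves.
with open("params/zenodo_metadata.json") as metadata_file:
    metadata = json.load(metadata_file)

# Note: an already published record must first be switched to edit mode
# (POST /api/deposit/depositions/{id}/actions/edit) before metadata updates.
response = requests.put(
    f"https://zenodo.org/api/deposit/depositions/{record_id}",
    params={"access_token": token},
    json={"metadata": metadata},
)
response.raise_for_status()
```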