Merged

Changes from 5 commits
6 changes: 0 additions & 6 deletions .devcontainer/devcontainer.json

This file was deleted.

63 changes: 42 additions & 21 deletions CONTRIBUTING.md
Contributor Author

I realize this CONTRIBUTING doc has a mix of development guidelines for external developers and ones specific to Sage developers, e.g. only Sage Bionetworks developers can use the `run_integration_tests` functionality. I might just migrate all of the Sage-specific content to the README under Sage Bionetworks only and keep this section just for forking and adding changes for external developers.
They can still pull the public images and build images via the Dockerfile to do testing, but they won't be able to run any of our pipeline steps.

Member


May be helpful to get Xindi's feedback on this

Contributor Author

@rxu17 Nov 7, 2025


Sounds good, I added that as an AC for the relevant onboarding tickets - to review and approve the documentation

@@ -23,25 +23,11 @@ This package uses `flake8` - its settings are described in [setup.cfg](setup.cf

### Install development dependencies

This will install all the dependencies of the package including the active branch of `Genie`. We highly recommend that you leverage some form of python version management like [pyenv](https://github.com/pyenv/pyenv) or [anaconda](https://www.anaconda.com/products/individual). There are two ways you can install the dependencies for this package.

#### pip
This is the more traditional way of installing dependencies. Follow instructions [here](https://pip.pypa.io/en/stable/installation/) to learn how to install pip.

```
pip install -r requirements-dev.txt
pip install -r requirements.txt
```

#### pipenv
`pipenv` is a Python package manager. Learn more about [pipenv](https://pipenv.pypa.io/en/latest/) and how to install it.

```
# Coming soon
```
This will install all the dependencies of the package including the active branch of `Genie`. We highly recommend that you leverage some form of python version management like [pyenv](https://github.com/pyenv/pyenv) or [anaconda](https://www.anaconda.com/products/individual). Follow the [dependency installation instructions here](./README.md#running-locally).

### Developing


The GENIE project follows the standard [git flow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) development strategy.
> To ensure the most fluid development, try not to push to your `develop` or `main` branch.

@@ -54,17 +40,17 @@ The GENIE project follows the standard [git flow](https://www.atlassian.com/git/
git pull upstream develop
```

1. Create a feature branch which off the `develop` branch. If there is a GitHub/JIRA issue that you are addressing, name the branch after the issue with some more detail (like `{GH|JIRA}-123-add-some-new-feature`).
1. Create a feature branch off the `develop` branch. If there is a GitHub/JIRA issue that you are addressing, name the branch after the issue with some more detail (like `{GH|GEN}-123-add-some-new-feature`).

```
git checkout develop
git checkout -b JIRA-123-new-feature
git checkout -b GEN-123-new-feature
```
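The branch naming convention above can be sanity-checked with a short script. This is an illustrative sketch only: the `GH`/`GEN` prefixes come from the convention above, but the regex and the `is_valid_branch_name` helper are hypothetical, not part of this repo.

```python
import re

# Hypothetical pattern for the convention above: a GH or GEN prefix,
# an issue number, then a short kebab-case description.
BRANCH_PATTERN = re.compile(r"^(GH|GEN)-\d+(-[a-z0-9]+)+$")

def is_valid_branch_name(name: str) -> bool:
    """Return True if `name` follows the {GH|GEN}-123-short-description convention."""
    return BRANCH_PATTERN.match(name) is not None

print(is_valid_branch_name("GEN-123-new-feature"))  # expected: True
print(is_valid_branch_name("my-cool-branch"))       # expected: False
```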

1. At this point, you have only created the branch locally, you need to push this to your fork on GitHub.
1. At this point, you have only created the branch locally; you need to push it to your remote on GitHub.

```
git push --set-upstream origin JIRA-123-new-feature
git push
```

You should now be able to see the branch on GitHub. Make commits as you deem necessary. It helps to provide useful commit messages - a commit message saying 'Update' is a lot less helpful than saying 'Remove X parameter because it was unused'.
@@ -92,11 +78,46 @@ The GENIE project follows the standard [git flow](https://www.atlassian.com/git/

This package uses [semantic versioning](https://semver.org/) for releasing new versions. The version should be updated on the `develop` branch as changes are reviewed and merged in by a code maintainer. The version for the package is maintained in the [genie/__init__.py](genie/__init__.py) file. A github release should also occur every time `develop` is pushed into `main` and it should match the version for the package.
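As a sketch of what semantic versioning implies for the version string in [genie/__init__.py](genie/__init__.py): each release bumps exactly one of MAJOR, MINOR, or PATCH, resetting the parts to its right. The helper functions below are hypothetical illustrations, not part of the package, and the version strings are made-up examples.

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a MAJOR.MINOR.PATCH string into integer parts."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def is_valid_bump(old: str, new: str) -> bool:
    """True if `new` is exactly one semver step ahead of `old`:
    a patch bump, a minor bump (patch resets), or a major bump
    (minor and patch reset)."""
    o, n = parse_semver(old), parse_semver(new)
    return n in (
        (o[0], o[1], o[2] + 1),  # patch bump
        (o[0], o[1] + 1, 0),     # minor bump
        (o[0] + 1, 0, 0),        # major bump
    )

print(is_valid_bump("13.4.0", "13.4.1"))  # expected: True
print(is_valid_bump("13.4.0", "14.0.0"))  # expected: True
print(is_valid_bump("13.4.0", "13.6.0"))  # expected: False (skips 13.5)
```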

### Developing with Docker

See [using `docker`](./README.md#using-docker-highly-recommended) for setting up the initial docker environment.

A docker build will be created for your feature branch every time you have an open PR on GitHub and add the `run_integration_tests` label to it.

It is recommended to develop with docker. You can either write the code changes locally, push them to your remote and wait for docker to rebuild, OR do the following:

1. Make any code changes. These cannot be dependency changes - those would require a docker rebuild.
1. Create a long-running docker container from the image that you pulled down or built earlier:

```
docker run -d --name <container_name> <docker_image_name> /bin/bash -c "while true; do sleep 1; done"
```

1. Copy your code changes into the running container:

```
docker cp <folder or name of file> <container_name>:/root/Genie/<folder or name of files>
```

1. Open an interactive shell in the container:

```
docker exec -it -e SYNAPSE_AUTH_TOKEN=$YOUR_SYNAPSE_TOKEN <container_name> /bin/bash
```

1. Run any commands or tests you need to.

### Testing

#### Running test pipeline

Make sure to run each of the [pipeline steps here](README.md#developing-locally) on the test pipeline and verify that your pipeline runs as expected. This is __not__ automatically run by Github Actions and have to be manually run.
Currently our GitHub Actions will run each of the [pipeline steps here](README.md#running-locally) on the test pipeline. This is triggered by adding the GitHub label `run_integration_tests` to your open PR.

To trigger `run_integration_tests`:

- Add the `run_integration_tests` label when you first open your PR
- Remove the `run_integration_tests` label and re-add it
- Make any commit and push while the PR is still open

#### Running tests

96 changes: 75 additions & 21 deletions README.md
@@ -90,30 +90,84 @@ Please view [contributing guide](CONTRIBUTING.md) to learn how to contribute to

# Sage Bionetworks Only

## Developing locally
## Running locally

These are instructions on how you would develop and test the pipeline locally.
These are instructions on how you would set up your environment and run the pipeline locally.

1. Make sure you have read through the [GENIE Onboarding Docs](https://sagebionetworks.jira.com/wiki/spaces/APGD/pages/2163344270/Onboarding) and have access to all of the required repositories, resources and synapse projects for Main GENIE.
1. Be sure you are invited to the Synapse GENIE Admin team.
1. Make sure you are a Synapse certified user: [Certified User - Synapse User Account Types](https://help.synapse.org/docs/Synapse-User-Account-Types.2007072795.html#SynapseUserAccountTypes-CertifiedUser)
1. Be sure to clone the cbioportal repo: https://github.com/cBioPortal/cbioportal and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile)
1. Be sure to clone the annotation-tools repo: https://github.com/Sage-Bionetworks/annotation-tools and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile).

### Using `conda`

Follow the [conda installation instructions](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) to install conda on your computer.

Install [`mamba`](https://github.com/mamba-org/mamba) from the `conda-forge` channel:
```
conda install -n base -c conda-forge mamba
```

Install Python and R versions via `mamba`:
```
mamba create -n genie_dev -c conda-forge python=3.10 r-base=4.3
```

### Using `pipenv`

Install via [pipenv](https://pipenv.pypa.io/en/latest/installation.html):

1. Specify a python version that is supported by this repo:

```
pipenv --python <python_version>
```

1. [pipenv install from requirements file](https://docs.pipenv.org/en/latest/advanced.html#importing-from-requirements-txt)

1. Activate your `pipenv`:

```
pipenv shell
```

### Using `docker` (**HIGHLY** Recommended)

This is the most reproducible method even though it will be the most tedious to develop with. See the [CONTRIBUTING docs](/CONTRIBUTING.md) for how to develop locally with docker. The steps below will set up the docker image in your environment.

1. Pull a pre-existing docker image or build one from the Dockerfile.

Pull a pre-existing docker image (you can find the list of images [here](https://github.com/Sage-Bionetworks/Genie/pkgs/container/genie)):
```
docker pull <some_docker_image_name>
```

Build from the Dockerfile:
```
docker build -f Dockerfile -t <some_docker_image_name> .
```

1. Run docker image:
```
docker run --rm -it -e SYNAPSE_AUTH_TOKEN=$YOUR_SYNAPSE_TOKEN <some_docker_image_name>
```

### Setting up

1. Clone this repo and install the package locally.

Install the Python packages with pip. Follow the instructions [here](https://pip.pypa.io/en/stable/installation/) to learn how to install pip.

```
pip install -e .
pip install -r requirements.txt
pip install -r requirements-dev.txt
```

If you are having trouble with the above, try installing via `pipenv` instead (see [Using `pipenv`](#using-pipenv) above).
Install the R packages. Note that the R package setup is the most unpredictable part; you will likely have to manually install specific packages before the rest will install.
```
Rscript R/install_packages.R
```

1. Configure the Synapse client to authenticate to Synapse.
1. Create a Synapse [Personal Access token (PAT)](https://help.synapse.org/docs/Managing-Your-Account.2055405596.html#ManagingYourAccount-PersonalAccessTokens).
@@ -131,45 +185,45 @@
synapse login
```

1. Run the different pipelines on the test project. The `--project_id syn7208886` points to the test project.
1. Run the different steps of the pipeline on the test project. The `--project_id syn7208886` parameter points to the test project. You should always use the test project when developing, testing, and running locally.

1. Validate all the files **excluding vcf files**:

```
python bin/input_to_database.py main --project_id syn7208886 --onlyValidate
python3 bin/input_to_database.py main --project_id syn7208886 --onlyValidate
```

1. Validate **all** the files:

```
python bin/input_to_database.py mutation --project_id syn7208886 --onlyValidate --genie_annotation_pkg ../annotation-tools
python3 bin/input_to_database.py mutation --project_id syn7208886 --onlyValidate --genie_annotation_pkg ../annotation-tools
```

1. Process all the files aside from the mutation (maf, vcf) files. The mutation processing was split out because it takes at least 2 days to process all the production mutation data. Ideally, there would be a parameter to exclude or include file types to process/validate, but that is not implemented.

```
python bin/input_to_database.py main --project_id syn7208886 --deleteOld
python3 bin/input_to_database.py main --project_id syn7208886 --deleteOld
```

1. Process the mutation data. Be sure to clone this repo: https://github.com/Sage-Bionetworks/annotation-tools and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile). This repo houses the code that re-annotates the mutation data with genome nexus. The `--createNewMafDatabase` will create a new mutation tables in the test project. This flag is necessary for production data for two main reasons:
1. Process the mutation data. This command uses the `annotation-tools` repo that you cloned previously, which houses the code that standardizes/merges the mutation (both maf and vcf) files and re-annotates the mutation data with genome nexus. The `--createNewMafDatabase` flag will create new mutation tables in the test project. This flag is necessary for production data for two main reasons:
* During processing, mutation data is appended to the existing table, so without creating an empty table first, duplicated data will be uploaded.
* By design, Synapse Tables were meant to be appended to. When a Synapse Table is updated, it takes time to index the table and return results. This can cause problems for the pipeline when trying to query the mutation table. It is actually faster to create an entirely new table than to update or delete all rows and append new rows when dealing with millions of rows.
* If you run this more than once on the same day, you'll run into an issue with overwriting the narrow maf table as it already exists. Be sure to rename the current narrow maf database under `Tables` in the test synapse project and try again.

```
python bin/input_to_database.py mutation --project_id syn7208886 --deleteOld --genie_annotation_pkg ../annotation-tools --createNewMafDatabase
python3 bin/input_to_database.py mutation --project_id syn7208886 --deleteOld --genie_annotation_pkg ../annotation-tools --createNewMafDatabase
```
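A toy illustration of the append-only problem described above, with a plain Python list standing in for a Synapse Table (this is not the Synapse API; `process_mutations` and the sample records are made up for the sketch):

```python
# A list stands in for an append-only mutation table.
maf_table: list[dict] = []

def process_mutations(table: list[dict], records: list[dict]) -> None:
    """Processing appends records; it never replaces existing rows."""
    table.extend(records)

run_records = [
    {"sample": "S1", "variant": "V600E"},
    {"sample": "S2", "variant": "G12D"},
]

# Re-running processing against the same table duplicates every row...
process_mutations(maf_table, run_records)
process_mutations(maf_table, run_records)
print(len(maf_table))  # expected: 4, not 2

# ...which is why each run should start from a brand-new, empty table
# (the effect of --createNewMafDatabase).
maf_table = []
process_mutations(maf_table, run_records)
print(len(maf_table))  # expected: 2
```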

1. Create a consortium release. Be sure to add the `--test` parameter. Be sure to clone the cbioportal repo: https://github.com/cBioPortal/cbioportal and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile). For consistency, the processingDate specified here should match the one used for TEST pipeline in [nf-genie.](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/main.nf)
1. Create a consortium release. Be sure to add the `--test` parameter. For consistency, the `processingDate` specified here should match the one used for the `TEST` key in the `consortium_map` in [nf-genie](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/main.nf).

```
python bin/database_to_staging.py Jul-2022 ../cbioportal TEST --test
python3 bin/database_to_staging.py <processingDate> ../cbioportal TEST --test
```

1. Create a public release. Be sure to add the `--test` parameter. Be sure to clone the cbioportal repo: https://github.com/cBioPortal/cbioportal and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile). For consistency, the processingDate specified here should match the one used for TEST pipeline in [nf-genie.](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/main.nf)
1. Create a public release. Be sure to add the `--test` parameter. For consistency, the `processingDate` specified here should match the one used for the `TEST` key in the `public_map` in [nf-genie](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/main.nf).

```
python bin/consortium_to_public.py Jul-2022 ../cbioportal TEST --test
python3 bin/consortium_to_public.py <processingDate> ../cbioportal TEST --test
```

## Production