- Download Docker and Docker Compose.
- Build the Docker images:

  ```bash
  docker-compose build
  ```
- Create a superuser and run the migrations for the `api` container:

  ```bash
  docker-compose run --rm api bash -c "python manage.py migrate; python manage.py createsuperuser"
  ```

  This option only lets you use the "testing" data, because Elasticsearch (ES) is not installed.
- Set up the environment. Install Miniconda (Linux setup; check online for macOS and Windows):

  ```bash
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  bash Miniconda3-latest-Linux-x86_64.sh -b
  ```
- Open a new terminal after the installation, then create and activate the conda environment:

  ```bash
  cd qa
  conda env create -f environment.yml
  conda activate covid19-qa
  ```
- Generate the XML article files from the JSON file:

  ```bash
  docker-compose up corpus-creator
  ```
- Upload the data into ES using Logstash:

  ```bash
  docker-compose up logstash
  ```
Start the `api` container in Docker Compose:

```bash
docker-compose up api
```

The Swagger description of the services is at http://localhost:8000/
The `qa` container has a `main.py` script with some commands useful for testing the model.
All the commands accept the `--ignore-es` flag to work with the testing data.
- Go into the `qa` container. If the container is running:

  ```bash
  docker-compose exec qa bash
  ```

  If the container isn't running:

  ```bash
  docker-compose run qa bash
  ```
- Inside the container, activate the conda env:

  ```bash
  conda activate covid19-qa
  ```
- Execute a command. (Check out the help (`./main.py -h`) to see the available options.) Some examples:

  ```bash
  # Execute the `try` command with some optimizations:
  ./main.py --batch-size 672 --device 0 --threads 20

  # Execute the `try` command with some optimizations and without Elasticsearch:
  ./main.py --batch-size 672 --device 0 --threads 20 --ignore-es

  # Execute the interactive mode:
  ./main.py interact

  # Execute the interactive mode without Elasticsearch:
  ./main.py --ignore-es interact
  ```
A useful tool to interact with your Elasticsearch cluster is Kibana.
- Run the Kibana container:

  ```bash
  docker-compose up kibana
  ```
- In your browser, go to http://0.0.0.0:15601/app/kibana
The `qa` image is CUDA-enabled. It needs to run with the NVIDIA runtime to work properly.
- Install nvidia-docker-runtime.
- Run:

  ```bash
  sudo tee /etc/docker/daemon.json <<EOF
  {
    "runtimes": {
      "nvidia": {
        "path": "/usr/bin/nvidia-container-runtime",
        "runtimeArgs": []
      }
    },
    "default-runtime": "nvidia"
  }
  EOF
  sudo systemctl daemon-reload
  sudo systemctl restart docker
  ```
- To test it (in a CUDA-enabled environment):

  ```bash
  docker-compose build qa
  docker-compose run --rm qa python -c "import torch; torch.ones(2, device='cuda')"
  ```
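Before restarting Docker, it can be worth verifying that the file you are about to install is valid JSON, since a malformed /etc/docker/daemon.json prevents the daemon from starting at all. A small sketch of that check, written against a temporary file so it is safe to run anywhere:

```shell
# Sketch: validate a daemon.json candidate with Python's stdlib JSON checker
# before touching /etc/docker/daemon.json. Uses a temp file, not the real path.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
EOF
# json.tool exits non-zero on a syntax error, so this line acts as the check.
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json candidate is valid JSON"
rm -f "$tmp"
```

Against the real file, `python3 -m json.tool /etc/docker/daemon.json` performs the same check.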
By default, Docker Compose will load both docker-compose.yml and docker-compose.override.yml.
In production mode, any docker-compose command must include the flags `-f docker-compose.yml -f docker-compose.prod.yml`, as in:

```bash
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
```

For it to work, you must first create a .env file like .env.example (you can copy it and fill in the values).
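The .env step above can be sketched as a guarded copy, so an existing .env is never overwritten. The snippet runs in a throwaway directory with a stand-in template (the `DOMAIN` value is a placeholder, not from the repo); in the real project you would run only the `cp` and `grep` lines in the project root:

```shell
# Hypothetical sketch: create .env from the template only if it doesn't exist.
workdir=$(mktemp -d)
cd "$workdir"
printf 'DOMAIN=example.com\n' > .env.example   # stand-in for the repo's .env.example
[ -f .env ] || cp .env.example .env            # never clobber an existing .env
grep -q '^DOMAIN=' .env && echo ".env ready"   # sanity-check a required variable
```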
For every service you want exposed through the reverse proxy, add this to the service block in the docker-compose.yml file and change SERVICE_NAME:

```yaml
depends_on:
  - traefik
networks:
  - proxy
labels:
  - traefik.http.routers.whoami.rule=Host(`SERVICE_NAME.${DOMAIN}`)
  - traefik.http.routers.whoami.tls=true
  - traefik.http.routers.whoami.tls.certresolver=le
```

This is an example service:
```yaml
whoami:
  image: "containous/whoami"
  restart: always
  depends_on:
    - traefik
  networks:
    - proxy
  labels:
    - traefik.http.routers.whoami.rule=Host(`whoami.${DOMAIN}`)
    - traefik.http.routers.whoami.tls=true
    - traefik.http.routers.whoami.tls.certresolver=le
```
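Note that Traefik router names must be unique, so the `whoami` segment of the label keys should be changed along with SERVICE_NAME. As an illustration only (this block is not in the repo), exposing the `api` service could look like:

```yaml
api:
  # ...existing api configuration (image, volumes, etc.)...
  depends_on:
    - traefik
  networks:
    - proxy
  labels:
    - traefik.http.routers.api.rule=Host(`api.${DOMAIN}`)
    - traefik.http.routers.api.tls=true
    - traefik.http.routers.api.tls.certresolver=le
```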
- Merge the latest changes from the `master` branch into `prod` and push to the GitHub remote.
- SSH into the server:

  ```bash
  gcloud compute ssh covid19-qa
  ```
- Go to the project folder, pull, and build:

  ```bash
  cd /opt/covid19-qa/
  git pull
  docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build frontend
  ```
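The deploy steps above could be wrapped in a small helper. This is a hypothetical sketch, not a script shipped with the repo; with `DRY_RUN=1` it only prints the commands, so it can be exercised without git or Docker access:

```shell
# Hypothetical deploy helper: runs the steps above, or echoes them in dry-run mode.
deploy_frontend() {
  # run either executes its arguments or prints them, depending on DRY_RUN.
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }
  run cd /opt/covid19-qa/
  run git pull
  run docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build frontend
}

DRY_RUN=1 deploy_frontend   # prints the three commands instead of running them
```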