kubernetes/README.md (4 changes: 2 additions & 2 deletions)
@@ -1,6 +1,6 @@
# Distributed Optimization on Kubernetes

This folder contains two kinds of examples with Kubernetes: one is based on [sklearn_simple.py](../sklearn/sklearn_simple.py) and the other is based on [pytorch_lightning_simple.py](../pytorch/pytorch_lightning_simple.py) with MLflow.
This folder contains two kinds of examples with Kubernetes: one is based on [`sklearn_simple.py`](../sklearn/sklearn_simple.py) and the other is based on [`pytorch_lightning_simple.py`](../pytorch/pytorch_lightning_simple.py) with MLflow.

Currently, both [simple/sklearn_distributed.py](./simple/sklearn_distributed.py) and [mlflow/pytorch_lightning_distributed.py](./mlflow/pytorch_lightning_distributed.py) use POSTGRESQL for their backend of `optuna.Study.optimize` to be parallelized.
Currently, both [`simple/sklearn_distributed.py`](./simple/sklearn_distributed.py) and [`mlflow/pytorch_lightning_distributed.py`](./mlflow/pytorch_lightning_distributed.py) use PostgreSQL as the storage backend so that `optuna.Study.optimize` can be parallelized.
However, we do not use it for the MLflow records. You can also use PostgreSQL as the backend store of MLflow (https://mlflow.org/docs/latest/tracking.html#where-runs-are-recorded), but the current example uses an HTTP server.
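
To illustrate the idea, here is a minimal sketch of how each worker could attach to a shared study backed by PostgreSQL; the connection URL, study name, and toy objective below are placeholders rather than the actual code in the examples:

```python
import optuna


def objective(trial):
    # Toy objective; the real examples tune scikit-learn and PyTorch Lightning models.
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


# Every worker pod creates or joins the same study through the shared PostgreSQL storage
# (a PostgreSQL driver such as psycopg2 must be installed in the worker image).
study = optuna.create_study(
    study_name="k8s-distributed-example",  # hypothetical name; every worker must use the same one
    storage="postgresql://user:password@postgres:5432/optuna_db",  # hypothetical in-cluster URL
    load_if_exists=True,  # joining an existing study is not treated as an error
)
study.optimize(objective, n_trials=20)
```

Because all trials are stored in the database, running the same script in several pods parallelizes the search without any extra coordination.
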
kubernetes/mlflow/README.md (4 changes: 2 additions & 2 deletions)
@@ -2,7 +2,7 @@

This example has only been verified on minikube.

This example's code is based on ../../pytorch/pytorch_lightning_simple.py example with the following changes:
This example's code is based on the [`pytorch_lightning_simple.py`](../../pytorch/pytorch_lightning_simple.py) example with the following changes (a rough sketch of the MLflow logging is given after the list):

1. It gives a name to the study and sets `load_if_exists` to `True` in order to avoid errors when the code is run from multiple workers.
2. It sets the storage address to the postgres pod deployed with the workers.
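
For context, here is a rough sketch of how an objective could report each trial to the MLflow HTTP tracking server mentioned above; the tracking URI, hyperparameter, and metric name are made up for illustration, and the actual example's MLflow wiring may differ:

```python
import mlflow


def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    # ... build and train a model with this learning rate here;
    # the real example uses PyTorch Lightning ...
    score = 0.0  # placeholder for the validation metric

    # Record the trial's hyperparameters and result on the MLflow tracking server.
    mlflow.set_tracking_uri("http://mlflow:5000")  # hypothetical in-cluster service URL
    with mlflow.start_run():
        mlflow.log_params(trial.params)
        mlflow.log_metric("score", score)
    return score
```

The objective is then passed to `study.optimize` on a study created with a name, the PostgreSQL storage, and `load_if_exists=True`, as sketched in the previous section.
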
@@ -18,7 +18,7 @@ First run `run.sh` which takes two arguments `$IsMinikube` and `$IMAGE_NAME`
```bash
$ bash run.sh True optuna-kubernetes-mlflow:example
```

- If you want to run in cloud, please change the `IMAGE_NAME` accordingly in k8s-manifest.yaml and run as follows. Also please make sure that your kubernetes context is set correctly.
- If you want to run in the cloud, please change `IMAGE_NAME` accordingly in `k8s-manifest.yaml` and run as follows. Also, please make sure that your Kubernetes context is set correctly.

```bash
$ bash run.sh False $IMAGE_NAME
```
kubernetes/simple/README.md (6 changes: 3 additions & 3 deletions)
@@ -1,9 +1,9 @@
# Distributed Optimization on Kubernetes

This example's code is mostly the same as the sklearn_simple.py example,
This example's code is mostly the same as the [`sklearn_simple.py`](../../sklearn/sklearn_simple.py) example,
except for two things:

1 - It gives a name to the study and sets load_if_exists to True
1 - It gives a name to the study and sets `load_if_exists` to `True`
in order to avoid errors when the code is run from multiple workers.

2 - It sets the storage address to the postgres pod deployed with the workers.
@@ -18,7 +18,7 @@ Run `run.sh` which takes two arguments `$IsMinikube` and `$IMAGE_NAME`
```bash
$ bash run.sh True optuna-kubernetes:example
```

- If you want to run in cloud, please change the IMAGE_NAME accordingly in k8s-manifest.yaml and run as follows. Also please make sure that you kubernetes context is set correctly.
- If you want to run in the cloud, please change `IMAGE_NAME` accordingly in `k8s-manifest.yaml` and run as follows. Also, please make sure that your Kubernetes context is set correctly.

```bash
$ bash run.sh False $IMAGE_NAME
```