Commit d13f2b7

Update README for v1.0
1 parent 2c75644 commit d13f2b7

1 file changed: +11 −12 lines

Diff for: README.md

@@ -6,16 +6,15 @@ The overall learning process comprises three broad stages: data preparation, tra
 
 > The term `classification` in this project is used as it has traditionally been used in the remote sensing community: a process of assigning land cover classes to pixels. The meaning of the word in the deep learning community differs somewhat: there, classification is simply assigning a label to the whole input image. That usage will always be referred to as a `classification task` in the context of this project. Other uses of the term classification refer to the final phase of the learning process, when a trained model is applied to new images, regardless of whether `semantic segmentation`, ["the process of assigning a label to every pixel in an image"](https://en.wikipedia.org/wiki/Image_segmentation), or a `classification task` is being used.
 
-After installing the required computing environment (see the next section), one needs to replace the boilerplate paths and other items in the `config.yaml` file so that they point to your images and other data. The full sequence of steps is described in the sections below.
+After installing the required computing environment (see the next section), one needs to replace the boilerplate paths and other items in the `config.yaml` file so that they point to your images and other data. The full sequence of steps is described in the sections below.
 
 > This project comprises a set of commands to be run at a shell command prompt. The examples used here are for a bash shell in an Ubuntu GNU/Linux environment.
 
 ## Requirements
 - Python 3.6 with the following libraries:
-  - pytorch 0.4.1
-  - torchvision 0.2.1
-  - numpy
+  - pytorch 1.0.1 # With your choice of CUDA toolkit
+  - torchvision
   - rasterio
   - fiona
   - ruamel_yaml
@@ -27,14 +26,14 @@ After installing the required computing environment (see next section), one need
 
 ## Installation on your workstation
 1. Using conda, you can create and activate your Python environment with the following commands:
-   With GPU:
+   With GPU (with CUDA 9.0; defaults to CUDA 10.0 if `cudatoolkit=9.0` is not specified):
 ```shell
-conda create -p YOUR_PATH python=3.6 pytorch=0.4.0 torchvision cuda80 ruamel_yaml h5py fiona rasterio scikit-image scikit-learn=0.20 -c pytorch
+conda create -p YOUR_PATH python=3.6 pytorch=1.0.1 torchvision cudatoolkit=9.0 ruamel_yaml h5py fiona rasterio scikit-image scikit-learn -c pytorch
 source activate YOUR_ENV
 ```
 CPU only:
 ```shell
-conda create -p YOUR_PATH python=3.6 pytorch-cpu=0.4.0 torchvision ruamel_yaml h5py fiona rasterio scikit-image scikit-learn=0.20 -c pytorch
+conda create -p YOUR_PATH python=3.6 pytorch-cpu=1.0.1 torchvision ruamel_yaml h5py fiona rasterio scikit-image scikit-learn -c pytorch
 source activate YOUR_ENV
 ```
 1. Set your parameters in the `config.yaml` file (see section below)
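For example, setting your parameters means replacing the boilerplate paths in `config.yaml` with your own locations. The paths below are invented for illustration only and are not part of the repository:

```yaml
# Hypothetical edit of the inference section: substitute the placeholder
# paths with the actual locations of your data (illustrative values only).
inference:
  img_csv_file: /home/me/data/images_to_infer.csv
  working_folder: /home/me/data/inference_output
  state_dict_path: /home/me/models/checkpoint.pth.tar
```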
@@ -48,7 +47,7 @@ After installing the required computing environment (see next section), one need
 
 ## config.yaml
 
-The `config.yaml` file is located in the `conf` directory. It stores the values of all parameters needed by the deep learning algorithms for all phases. It is shown below:
+The `config.yaml` file is located in the `conf` directory. It stores the values of all parameters needed by the deep learning algorithms for all phases. It is shown below:
 
 ```yaml
 # Deep learning configuration file ------------------------------------------------
@@ -96,7 +95,7 @@ training:
 # Inference parameters; used in inference.py --------
 
 inference:
-  img_csv_file: /path/to/csv/containing/images/list.csv  # CSV file containing the list of all images to infer on
+  img_csv_file: /path/to/csv/containing/images/list.csv  # CSV file containing the list of all images to infer on
   working_folder: /path/to/folder/with/resulting/images  # Folder where all resulting images will be written
   state_dict_path: /path/to/model/weights/for/inference/checkpoint.pth.tar  # File containing pre-trained weights
@@ -111,7 +110,7 @@ models:
     <<: *unet001
   ternausnet:
     pretrained: ./models/TernausNet.pt  # Mandatory
-  checkpointed_unet:
+  checkpointed_unet:
     <<: *unet001
   inception:
     pretrained: False  # optional
@@ -243,7 +242,7 @@ global:
   model_name: unetsmall  # One of unet, unetsmall, checkpointed_unet, ternausnet, or inception
   bucket_name:  # name of the S3 bucket where data is stored. Leave blank if using local files
 inference:
-  img_csv_file: /path/to/csv/containing/images/list.csv  # CSV file containing the list of all images to infer on
+  img_csv_file: /path/to/csv/containing/images/list.csv  # CSV file containing the list of all images to infer on
   working_folder: /path/to/folder/with/resulting/images  # Folder where all resulting images will be written
   state_dict_path: /path/to/model/weights/for/inference/checkpoint.pth.tar  # File containing pre-trained weights
 ```
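The inference phase reads its image list from `img_csv_file`. This diff does not show the CSV's exact column layout, so as a minimal sketch only, assuming one image path per row, reading such a list with the Python standard library might look like this (the function name `read_image_list` is hypothetical, not part of the project):

```python
import csv
from pathlib import Path

def read_image_list(csv_path):
    """Read image paths from a one-column CSV (assumed layout).

    The first cell of each row is taken as an image path;
    blank rows are skipped.
    """
    paths = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0].strip():
                paths.append(Path(row[0].strip()))
    return paths
```

If the project's real CSV carries extra columns (e.g. labels or metadata), the row indexing above would need to be adjusted accordingly.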
@@ -354,7 +353,7 @@ global:
   bucket_name:  # name of the S3 bucket where data is stored. Leave blank if using local files
   classify: True  # Set to True for classification tasks and False for semantic segmentation
 inference:
-  img_csv_file: /path/to/csv/containing/images/list.csv  # CSV file containing the list of all images to infer on
+  img_csv_file: /path/to/csv/containing/images/list.csv  # CSV file containing the list of all images to infer on
   working_folder: /path/to/folder/with/resulting/images  # Folder where all resulting images will be written
   state_dict_path: /path/to/model/weights/for/inference/checkpoint.pth.tar  # File containing pre-trained weights
 ```
