
Commit 7e26b54

gideonite and oliverwm1 authored
Small changes to README.md (#2)

* Cosmetic changes
* Add some details and links

Co-authored-by: Oliver Watt-Meyer <[email protected]>

1 parent 2c61239 commit 7e26b54


README.md: 14 additions & 12 deletions
@@ -1,22 +1,24 @@
 # ACE: AI2 Climate Emulator
-Inference code accompanying "ACE: A fast, skillful learned global atmospheric
-model for climate prediction" ([arxiv:2310.02074](https://arxiv.org/abs/2310.02074)).
+This repo contains the inference code accompanying "ACE: A fast, skillful learned global atmospheric model for climate prediction" ([arxiv:2310.02074](https://arxiv.org/abs/2310.02074)).
 
 ## DISCLAIMER
-This is rapidly changing research software. No guarantees are made of maintaining
-backwards compatibility.
+This is rapidly changing research software. We make no guarantees of maintaining backwards compatibility.
 
 ## Quickstart
 
-1. Clone this repository. Then assuming conda is available, run
+### 1. Clone this repository and install dependencies
+
+Assuming [conda](https://docs.conda.io/en/latest/) is available, run
 ```
 make create_environment
 ```
 to create a conda environment called `fme` with dependencies and source
 code installed. Alternatively, a Docker image can be built with `make build_docker_image`.
-You may verify installation by running `pytest`.
+You may verify installation by running `pytest fme/`.
+
+### 2. Download data and checkpoint
 
-2. Download data and checkpoint. These are available via a public
+These are available via a public
 [requester pays](https://cloud.google.com/storage/docs/requester-pays)
 Google Cloud Storage bucket. The checkpoint can be downloaded with:
 ```
@@ -28,7 +30,7 @@ but it is required to download enough data to span the desired prediction period
 gsutil -m -u YOUR_GCP_PROJECT cp -r gs://ai2cm-public-requester-pays/2023-11-29-ai2-climate-emulator-v1/data/repeating-climSST-1deg-netCDFs/validation .
 ```
 
-3. Update the paths in the [example config](examples/config-inference.yaml).
+### 3. Update the paths in the [example config](examples/config-inference.yaml).
 Then in the `fme` conda environment, run inference with:
 ```
 python -m fme.fcn_training.inference.inference examples/config-inference.yaml
@@ -37,15 +39,15 @@ python -m fme.fcn_training.inference.inference examples/config-inference.yaml
 ## Configuration options
 See the `InferenceConfig` class in [this file](fme/fme/fcn_training/inference/inference.py) for
 description of configuration options. The [example config](examples/config-inference.yaml)
-shows some useful defaults for performing a 400-step (100-day) simulation.
+shows some useful defaults for performing a 400-step simulation (100 days, with the 6-hour time step).
 
 ## Performance
 While inference can be performed without a GPU, it may be very slow in that case. In addition,
 I/O performance is critical for fast inference due to loading of forcing data and target data
 during inference.
 
 ## Analyzing output
-Various metrics are computed online by the inference code. These can be viewed via
+Various climate performance metrics are computed online by the inference code. These can be viewed via
 [wandb](https://wandb.ai) by setting `logging.log_to_wandb` to true and updating `logging.entity`
 to your wandb entity. Additionally, raw output data is saved to netCDF by the inference code.
 
@@ -56,5 +58,5 @@ are available:
 gs://ai2cm-public-requester-pays/2023-11-29-ai2-climate-emulator-v1/data/repeating-climSST-1deg-zarrs
 gs://ai2cm-public-requester-pays/2023-11-29-ai2-climate-emulator-v1/data/repeating-climSST-1deg-netCDFs
 ```
-The zarr format is convenient for ad-hoc analysis. The netCDF version contains our
-train/validation split, and was used for training and inference.
+The `zarr` format is convenient for ad-hoc analysis. The netCDF version contains our
+train/validation split which was used for training and inference.
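The "Analyzing output" section names two config keys, `logging.log_to_wandb` and `logging.entity`. From those dotted paths, the relevant fragment of `examples/config-inference.yaml` presumably looks something like this (a sketch inferred from the key names; the exact schema is defined by the `InferenceConfig` class):

```yaml
logging:
  log_to_wandb: true         # enable online metric logging to wandb
  entity: your-wandb-entity  # placeholder: replace with your wandb entity
```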
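The commit rewords the simulation length as "400-step simulation (100 days, with the 6-hour time step)". The step-to-days arithmetic behind that wording can be sketched as follows (plain Python, illustrative only; `steps_to_days` is not a function in the ACE codebase):

```python
# ACE advances the atmosphere with a 6-hour time step, so a 400-step
# run covers 400 * 6 hours = 2400 hours = 100 days.
HOURS_PER_STEP = 6
HOURS_PER_DAY = 24

def steps_to_days(steps: int) -> float:
    """Convert a number of model time steps to simulated days."""
    return steps * HOURS_PER_STEP / HOURS_PER_DAY

print(steps_to_days(400))  # -> 100.0
```

The same conversion gives, e.g., 1460 steps for a one-year simulation.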

0 commit comments
