Commit d3fd037

Adds Data Handling section to the README (#78)
1 parent ef4b74d commit d3fd037


1 file changed (+96, -8 lines)

README.md

Lines changed: 96 additions & 8 deletions
@@ -72,9 +72,10 @@ This adds the `keras-remote up`, `keras-remote down`, `keras-remote status`, and

- Python 3.11+
- Google Cloud SDK (`gcloud`)
  - Run `gcloud auth login` and `gcloud auth application-default login`
- A Google Cloud project with billing enabled

Note: The Pulumi CLI is bundled and managed automatically. It will be installed to `~/.keras-remote/pulumi` on first use if not already present.
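The first-use install described in the note can be pictured with a minimal sketch. This is an illustration only, not keras-remote's actual bootstrap code; `ensure_tool` and the injected `install` callback are hypothetical names:

```python
from pathlib import Path
from typing import Callable

def ensure_tool(home: Path, install: Callable[[Path], None]) -> Path:
    """Return the managed tool directory, running `install` only on first use."""
    tool_dir = home / ".keras-remote" / "pulumi"
    if not tool_dir.exists():
        install(tool_dir)  # e.g. download and unpack the pinned Pulumi release
    return tool_dir
```

Because the check is just "does the directory exist", repeated calls after the first install are cheap no-ops.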
## Quick Start

### 1. Configure Google Cloud
@@ -203,15 +204,102 @@ def train():

See [examples/Dockerfile.prebuilt](examples/Dockerfile.prebuilt) for a template.
## Handling Data

Keras Remote provides a declarative, performant Data API that seamlessly makes your local and cloud data available to your remote functions.

The Data API is designed to be read-only: it reliably delivers data to your pods at the start of a job. For saving model outputs or checkpoints, write directly to GCS from within your function.

Under the hood, the Data API optimizes your workflows with two key features:

- **Smart Caching:** Local data is content-hashed and uploaded to a cache bucket only once. Subsequent job runs that use byte-identical data hit the cache and skip the upload entirely, drastically speeding up execution.
- **Automatic Zip Exclusion:** When you reference a data path inside your current working directory, Keras Remote automatically excludes that directory from the project's zipped payload to avoid uploading the same data twice.
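To make the caching idea concrete, here is a minimal sketch of content-hashing a directory. It is not the library's actual implementation; `content_digest` and `upload_if_missing` are hypothetical helpers. Byte-identical trees produce the same digest, so uploads keyed by digest deduplicate naturally:

```python
import hashlib
from pathlib import Path

def content_digest(root: str) -> str:
    """Deterministically hash a directory's relative paths and file bytes."""
    h = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()

def upload_if_missing(root: str, cache: set) -> bool:
    """Upload only when the digest is not already in the cache bucket."""
    digest = content_digest(root)
    if digest in cache:
        return False  # cache hit: skip the upload
    cache.add(digest)
    return True  # first upload for this content
```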
There are three main ways to handle data, depending on your workflow:

### 1. Dynamic Data (The `Data` Class)

The simplest and most Pythonic approach is to pass `Data` objects as regular function arguments. The `Data` class wraps a local file or directory path, or a Google Cloud Storage (GCS) URI.

On the remote pod, these objects are automatically resolved into plain string paths pointing to the downloaded files, so your function code never needs to know about GCS or cloud storage APIs.
```python
import pandas as pd
import keras_remote
from keras_remote import Data

@keras_remote.run(accelerator="v6e-8")
def train(data_dir):
    # data_dir is resolved to a dynamic local path on the remote machine
    df = pd.read_csv(f"{data_dir}/train.csv")
    # ...

# Uploads the local directory to the remote pod automatically
train(Data("./my_dataset/"))

# Cache hit: subsequent runs with the same data skip the upload!
train(Data("./my_dataset/"))
```
**Note on GCS Directories:** When referencing a GCS directory with the `Data` class, you must include a trailing slash (e.g., `Data("gs://my-bucket/dataset/")`). If you omit the trailing slash, the system will treat it as a single file object.

You can also pass multiple `Data` arguments, or nest them inside lists and dictionaries (e.g., `train(datasets=[Data("./d1"), Data("./d2")])`).
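The nesting support can be pictured as a recursive walk that swaps each `Data` leaf for its resolved local path. The sketch below is hypothetical (the real resolution logic and `Data` class live inside Keras Remote); it only illustrates the shape of the transformation:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Data:
    uri: str  # local path or gs:// URI

def resolve(obj: Any, download: Callable[[str], str]) -> Any:
    """Recursively replace Data leaves nested in lists/tuples/dicts."""
    if isinstance(obj, Data):
        return download(obj.uri)  # e.g. fetch to local disk, return the path
    if isinstance(obj, list):
        return [resolve(x, download) for x in obj]
    if isinstance(obj, tuple):
        return tuple(resolve(x, download) for x in obj)
    if isinstance(obj, dict):
        return {k: resolve(v, download) for k, v in obj.items()}
    return obj  # plain values (ints, strings, ...) pass through untouched
```

Under this sketch, a call like `train(datasets=[Data("./d1"), Data("./d2")])` would see `datasets` rewritten to two plain local paths on the pod.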
### 2. Static Data (The `volumes` Parameter)

For established training scripts where data requirements are static, use the `volumes` parameter of the `@keras_remote.run` decorator. It mounts data at fixed absolute filesystem paths, letting you drop `keras_remote` into existing codebases without altering the function signature.
```python
import pandas as pd
import keras_remote
from keras_remote import Data

@keras_remote.run(
    accelerator="v6e-8",
    volumes={
        "/data": Data("./my_dataset/"),
        "/weights": Data("gs://my-bucket/pretrained-weights/"),
    },
)
def train():
    # Data is guaranteed to be available at these absolute paths
    df = pd.read_csv("/data/train.csv")
    model.load_weights("/weights/model.h5")
    # ...

# No data arguments needed!
train()
```
### 3. Direct GCS Streaming (For Large Datasets)

If your dataset is very large (e.g., > 10 GB), downloading it in full to the remote pod's local disk is inefficient. Instead, skip the `Data` wrapper entirely and pass a GCS URI string directly, then use a framework with native GCS streaming support (such as `tf.data` or `grain`) to read the data on the fly.
```python
import grain.python as grain
import keras_remote

@keras_remote.run(accelerator="v6e-8")
def train(data_uri):
    # Native GCS reading, no download overhead
    data_source = grain.ArrayRecordDataSource(data_uri)
    # ...

# Pass as a plain string, no Data() wrapper needed
train("gs://my-bucket/arrayrecords/")
```
## Configuration

### Environment Variables

| Variable               | Required | Default         | Description             |
| ---------------------- | -------- | --------------- | ----------------------- |
| `KERAS_REMOTE_PROJECT` | Yes      |                 | Google Cloud project ID |
| `KERAS_REMOTE_ZONE`    | No       | `us-central1-a` | Default compute zone    |
| `KERAS_REMOTE_CLUSTER` | No       |                 | GKE cluster name        |
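For example, a minimal shell setup might look like this (the project ID is a placeholder):

```shell
# Required: the Google Cloud project to deploy into (placeholder value)
export KERAS_REMOTE_PROJECT="my-gcp-project"

# Optional: shown here set to its documented default
export KERAS_REMOTE_ZONE="us-central1-a"
```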

### Decorator Parameters

@@ -345,10 +433,10 @@ keras-remote down

This removes:

- GKE cluster and accelerator node pools
- Artifact Registry repository and container images
- Cloud Storage buckets (jobs and builds)

Use `--yes` to skip the confirmation prompt.

## Contributing
