The Azure Storage Connector for PyTorch (`azstoragetorch`) is a library that provides seamless, performance-optimized integrations between Azure Storage and PyTorch. Use this library to easily access and store data in Azure Storage while using PyTorch. The library currently offers:
- File-like object for saving and loading PyTorch models (i.e., checkpointing) with Azure Blob Storage
- PyTorch datasets for loading data samples from Azure Blob Storage
For detailed documentation on `azstoragetorch`, we recommend visiting its official documentation. It includes both a user guide and API references for the project. Content in this README is scoped to a high-level overview of the project and its GitHub repository policies.
While the project is major version 0 (i.e., its version is `0.x.y`), public interfaces are not stable. Backwards-incompatible changes may be introduced between minor version bumps (e.g., upgrading from `0.1.0` to `0.2.0`). If backwards compatibility is needed while using the library, we recommend pinning to a minor version of the library (e.g., `azstoragetorch~=0.1.0`).
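For example, a compatible-release pin with pip looks like this (the version shown is illustrative; pin to whichever minor version you are using):

```
pip install "azstoragetorch~=0.1.0"
```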
To use the library, you will need:

- Python 3.9 or later
- An Azure subscription and an Azure storage account
Install the library with pip:

```
pip install azstoragetorch
```
`azstoragetorch` should work without any explicit credential configuration. Its interfaces default to `DefaultAzureCredential`, which automatically retrieves Microsoft Entra ID tokens based on your current environment. For more information on using credentials with `azstoragetorch`, see the user guide.
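As a minimal sketch of configuring credentials explicitly, you can construct the credential yourself and pass it in. This assumes the `credential` keyword argument described in the user guide; confirm the exact signature in the API reference:

```python
from azure.identity import DefaultAzureCredential
from azstoragetorch.io import BlobIO

# Assumption: BlobIO accepts a ``credential`` keyword, per the user guide.
# Construct the credential explicitly instead of relying on the default.
credential = DefaultAzureCredential()

with BlobIO(
    "https://<my-storage-account-name>.blob.core.windows.net/<container>/<blob>",  # Placeholder URL
    "rb",
    credential=credential,
) as f:
    data = f.read()
```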
This section highlights core features of `azstoragetorch`. For more details, see the user guide.
PyTorch supports saving and loading trained models (i.e., checkpointing). The core PyTorch interfaces for saving and loading models are `torch.save()` and `torch.load()` respectively. Both of these functions accept a file-like object to be written to or read from. `azstoragetorch` offers the `azstoragetorch.io.BlobIO` file-like object class to save and load models directly to and from Azure Blob Storage when using `torch.save()` and `torch.load()`:
```python
import torch
import torchvision.models  # Install separately: ``pip install torchvision``
from azstoragetorch.io import BlobIO

# Update URL with your own Azure Storage account and container name
CONTAINER_URL = "https://<my-storage-account-name>.blob.core.windows.net/<my-container-name>"

# Model to save. Replace with your own model.
model = torchvision.models.resnet18(weights="DEFAULT")

# Save trained model to Azure Blob Storage. This saves the model weights
# to a blob named "model_weights.pth" in the container specified by CONTAINER_URL.
with BlobIO(f"{CONTAINER_URL}/model_weights.pth", "wb") as f:
    torch.save(model.state_dict(), f)

# Load trained model from Azure Blob Storage. This loads the model weights
# from the blob named "model_weights.pth" in the container specified by CONTAINER_URL.
with BlobIO(f"{CONTAINER_URL}/model_weights.pth", "rb") as f:
    model.load_state_dict(torch.load(f))
```
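Because `BlobIO` behaves like an ordinary file object, the same pattern extends to richer checkpoints. Here is a minimal sketch that also persists optimizer state, building on the snippet above; the checkpoint layout and blob name are illustrative, not prescribed by the library:

```python
import torch

# Illustrative optimizer; substitute your training loop's optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Illustrative checkpoint layout; adjust to your training loop's needs.
checkpoint = {
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}

# Save the full checkpoint to a blob named "checkpoint.pth".
with BlobIO(f"{CONTAINER_URL}/checkpoint.pth", "wb") as f:
    torch.save(checkpoint, f)

# Resume later by loading the same blob and restoring both states.
with BlobIO(f"{CONTAINER_URL}/checkpoint.pth", "rb") as f:
    checkpoint = torch.load(f)
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
```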
PyTorch offers the Dataset and DataLoader primitives for loading data samples. `azstoragetorch` provides implementations for both types of PyTorch datasets, map-style and iterable-style, to load data samples from Azure Blob Storage:

- `azstoragetorch.datasets.BlobDataset` - Map-style dataset
- `azstoragetorch.datasets.IterableBlobDataset` - Iterable-style dataset

Data samples returned from both datasets map directly one-to-one to blobs in Azure Blob Storage. When instantiating these dataset classes, use one of their class methods:

- `from_container_url()` - Instantiate a dataset by listing blobs from an Azure Storage container
- `from_blob_urls()` - Instantiate a dataset from provided blob URLs
```python
from azstoragetorch.datasets import BlobDataset, IterableBlobDataset

# Update URL with your own Azure Storage account and container name
CONTAINER_URL = "https://<my-storage-account-name>.blob.core.windows.net/<my-container-name>"

# Create an iterable-style dataset by listing blobs in the container specified by CONTAINER_URL.
dataset = IterableBlobDataset.from_container_url(CONTAINER_URL)

# Print the first blob in the dataset. Default output is a dictionary with
# the blob URL and the blob data. Use `transform` keyword argument when
# creating dataset to customize output format.
print(next(iter(dataset)))

# List of blob URLs to create dataset from. Update with your own blob names.
blob_urls = [
    f"{CONTAINER_URL}/<blob-name-1>",
    f"{CONTAINER_URL}/<blob-name-2>",
    f"{CONTAINER_URL}/<blob-name-3>",
]

# Create a map-style dataset from the list of blob URLs
blob_list_dataset = BlobDataset.from_blob_urls(blob_urls)
print(blob_list_dataset[0])  # Print the first blob in the dataset
```
Once instantiated, `azstoragetorch` datasets can be provided directly to a PyTorch `DataLoader` for loading samples:
```python
from torch.utils.data import DataLoader

# Create a DataLoader to load data samples from the dataset in batches of 32
dataloader = DataLoader(dataset, batch_size=32)

for batch in dataloader:
    print(batch["url"])  # Prints blob URLs for each 32-sample batch
```
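The snippets above also mention the `transform` keyword for customizing sample output. Here is a hedged sketch of decoding blobs into PIL images; it assumes, per the user guide, that the transform callable receives a blob object whose `reader()` method returns a file-like object, so confirm the exact interface in the API reference:

```python
from PIL import Image  # Install separately: ``pip install pillow``
from azstoragetorch.datasets import IterableBlobDataset

# Assumption: the transform callable receives a dataset blob object whose
# reader() method returns a file-like object over the blob's contents.
def to_image(blob):
    with blob.reader() as f:
        image = Image.open(f)
        image.load()  # Read the pixel data before the reader closes
    return image

# Samples from this dataset would be PIL images instead of dictionaries.
image_dataset = IterableBlobDataset.from_container_url(
    CONTAINER_URL, transform=to_image
)
```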
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.