
Releases: pytorch/torchrec

v0.2.0

28 Jun 06:37

Changelog

PyPI Installation

The recommended install location is now PyPI. Additionally, TorchRec's binary will no longer contain fbgemm_gpu; instead, fbgemm_gpu will be installed as a dependency. See the README for details.

Planner Improvements

We added new features and fixed several bugs:

  • Variable batch size per feature to support request-only features
  • Better calculations for quant UVM caching
  • Bug fix for shard storage fitting on device
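To make the first item concrete, here is a plain-Python sketch (illustrative only, not the TorchRec `KeyedJaggedTensor` API) of a jagged batch in which each feature carries its own batch size, so a request-only feature can appear once per request rather than once per candidate:

```python
# Illustrative sketch: each feature is stored as flat values plus
# per-row lengths, and features may have different batch sizes.
batch = {
    # Per-candidate feature: 4 candidates, variable-length ID lists.
    "movie_ids": {"lengths": [2, 0, 1, 3], "values": [10, 11, 20, 31, 32, 33]},
    # Request-only feature: a single entry for the whole request.
    "user_country": {"lengths": [1], "values": [7]},
}

def feature_batch_size(feature):
    # The batch size is the number of rows, i.e. the number of lengths.
    return len(feature["lengths"])

def rows(feature):
    """Split the flat values back into one list per batch element."""
    out, start = [], 0
    for n in feature["lengths"]:
        out.append(feature["values"][start:start + n])
        start += n
    return out

assert feature_batch_size(batch["movie_ids"]) == 4
assert feature_batch_size(batch["user_country"]) == 1
assert rows(batch["movie_ids"]) == [[10, 11], [], [20], [31, 32, 33]]
```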

Single process Batched + Fused Embeddings

Previously TorchRec’s abstractions (EmbeddingBagCollection/EmbeddingCollection) over FBGEMM kernels, which provide benefits such as table batching, optimizer fusion, and UVM placement, could only be used in conjunction with DistributedModelParallel. We’ve decoupled these notions from sharding, and introduced the FusedEmbeddingBagCollection, which can be used as a standalone module, with all of the above features, and can also be sharded.
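The table-batching idea behind these fused modules can be sketched in plain Python (a conceptual illustration, not the FBGEMM/TorchRec implementation): several embedding tables live in one flat buffer, and every lookup goes through a single indexing path using per-table offsets.

```python
# Two toy embedding tables, each a list of embedding rows.
tables = {
    "user":  [[0.1, 0.2], [0.3, 0.4]],               # 2 rows
    "movie": [[1.0, 1.1], [1.2, 1.3], [1.4, 1.5]],   # 3 rows
}

# "Fuse": concatenate all rows and remember where each table starts.
fused, offsets, start = [], {}, 0
for name, rows in tables.items():
    offsets[name] = start
    fused.extend(rows)
    start += len(rows)

def fused_lookup(name, idx):
    # One flat buffer serves every table; a real fused kernel batches
    # many such lookups into a single GPU pass.
    return fused[offsets[name] + idx]

assert fused_lookup("user", 1) == [0.3, 0.4]
assert fused_lookup("movie", 0) == [1.0, 1.1]
```

Optimizer fusion follows the same idea: one optimizer step updates the single fused buffer instead of each table separately.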

Sharder

We enabled embedding sharding support for variable batch sizes across GPUs.

Benchmarking and Examples

We introduce:

  • A set of benchmarking tests showing the performance characteristics of TorchRec's base modules and of research models built out of TorchRec.
  • An example demonstrating training of a distributed TwoTower (i.e. User-Item) retrieval model sharded using TorchRec. The projected item embeddings are added to an IVFPQ FAISS index for candidate generation, and the retrieval model and KNN lookup are bundled in a PyTorch model for efficient end-to-end retrieval.
  • An inference example with Torch Deploy for both single- and multi-GPU setups.
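The candidate-generation step of such a retrieval model reduces to a nearest-neighbor search over item embeddings. A toy exact-search version in plain Python (the repository example uses an approximate FAISS IVFPQ index instead):

```python
def top_k_by_dot_product(query, item_embeddings, k):
    """Score every item against the query vector and return the k best ids."""
    scores = []
    for item_id, emb in item_embeddings.items():
        score = sum(q * e for q, e in zip(query, emb))
        scores.append((score, item_id))
    scores.sort(reverse=True)  # highest dot product first
    return [item_id for _, item_id in scores[:k]]

# Toy item tower output and a user tower output.
items = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.7, 0.7]}
user = [1.0, 0.2]

# Scores: a=1.0, b=0.2, c=0.84, so the top-2 candidates are a then c.
assert top_k_by_dot_product(user, items, 2) == ["a", "c"]
```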

Integrations

We demonstrate that TorchRec works out of the box with many components commonly used alongside PyTorch models in production-like systems, such as:

  • Training a TorchRec model on Ray clusters using the TorchX Ray scheduler
  • Preprocessing and data loading with NVTabular on DLRM
  • Training a TorchRec model with on-the-fly preprocessing with TorchArrow, showcasing RecSys-domain UDFs

Scriptable Unsharded Modules

The unsharded embedding modules (EmbeddingBagCollection/EmbeddingCollection and variants) are now TorchScript-scriptable.
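As a minimal illustration of what scriptability buys (a toy module standing in for the TorchRec embedding modules), `torch.jit.script` compiles a module so it can run outside the Python interpreter while producing the same results:

```python
import torch
from torch import nn

class ToyPooling(nn.Module):
    """Toy stand-in for an embedding module: sum-pools over the bag dim."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.sum(dim=1)

module = ToyPooling()
scripted = torch.jit.script(module)  # compile to TorchScript

x = torch.ones(2, 3, 4)
# The scripted module is numerically identical to the eager one.
assert torch.equal(module(x), scripted(x))
```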

EmbeddingCollection Column-Wise Sharding

We now support column-wise sharding for EmbeddingCollection, enabling sequence embeddings to be sharded column-wise.

JaggedTensor

Boosted the performance of the to_padded_dense function by implementing it with FBGEMM.
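For reference, here is what to_padded_dense computes, sketched in plain Python (the real version is a fused FBGEMM kernel operating on tensors): a jagged batch, given as flat values plus per-row lengths, becomes a dense zero-padded matrix.

```python
def to_padded_dense(values, lengths, padding_value=0.0):
    """Turn a jagged batch (flat values + per-row lengths) into a
    dense matrix, padding short rows with padding_value."""
    max_len = max(lengths) if lengths else 0
    dense, start = [], 0
    for n in lengths:
        row = values[start:start + n]
        dense.append(row + [padding_value] * (max_len - n))
        start += n
    return dense

assert to_padded_dense([1.0, 2.0, 3.0], [2, 0, 1]) == [
    [1.0, 2.0],
    [0.0, 0.0],
    [3.0, 0.0],
]
```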

Linting

Added lintrunner so contributors can quickly lint and format their changes, matching our internal formatter.

v0.1.1

17 May 18:12

Changelog

pytorch.org Install

The recommended install location is now download.pytorch.org. See the README for details.

RecMetrics

RecMetrics is a metrics library that collects common utilities and optimizations for Recommendation models.

  • A centralized metrics module that allows users to add new metrics
  • Commonly used metrics, including AUC, Calibration, CTR, MSE/RMSE, NE & Throughput
  • Optimization for metrics related operations to reduce the overhead of metric computation
  • Checkpointing
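As a reference point for one of the listed metrics, here is AUC in plain Python (RecMetrics itself provides optimized, batched versions): AUC is the probability that a randomly chosen positive example scores above a randomly chosen negative one, computed below by pairwise comparison.

```python
def auc(labels, scores):
    """Pairwise AUC: ties between a positive and a negative count 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Every positive outranks every negative -> perfect AUC.
assert auc([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.4]) == 1.0
# The single positive is ranked below the negative -> AUC 0.
assert auc([1, 0], [0.3, 0.7]) == 0.0
```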

TorchRec Inference

Larger models require GPU support for inference, and the feature sets of common training and inference stacks differ. The goal of this library is to reuse features from training to make inference more unified and easier to use.

EmbeddingTower and EmbeddingTowerCollection

A new shardable nn.Module, EmbeddingTower/EmbeddingTowerCollection. This module gives model authors a basic building block for establishing a clear relationship between a set of embedding tables and post-lookup modules.

Examples/tutorials

Inference example

Documentation (installation and example), an updated CMake build, and a gRPC server example.

Bert4rec example

A reproduction of the BERT4Rec paper, showcasing the EmbeddingCollection module (non-pooled).

Sharding Tutorial

An overview of sharding in TorchRec and the five types of sharding: https://pytorch.org/tutorials/advanced/sharding.html
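A toy plain-Python illustration of the partitioning behind some of the sharding types the tutorial covers, applied to one embedding table stored as a list of rows:

```python
# A single 4x4 embedding table: 4 ids, embedding_dim 4.
table = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

def row_wise(table, num_shards):
    """Row-wise: each device holds a contiguous block of embedding rows."""
    per = len(table) // num_shards
    return [table[i * per:(i + 1) * per] for i in range(num_shards)]

def column_wise(table, num_shards):
    """Column-wise: each device holds a slice of every row
    (i.e. a slice of embedding_dim)."""
    per = len(table[0]) // num_shards
    return [[row[i * per:(i + 1) * per] for row in table]
            for i in range(num_shards)]

# Table-wise sharding places whole tables on devices, so a single table is
# one shard; table-wise-row-wise composes table-wise with row-wise.
assert row_wise(table, 2)[1] == [[9, 10, 11, 12], [13, 14, 15, 16]]
assert column_wise(table, 2)[0] == [[1, 2], [5, 6], [9, 10], [13, 14]]
```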

Improved Planner

  • Updated static estimates for perf
  • Models full model parallel path
  • Includes support for sequence embeddings, weighted features, and feature processors
  • Added grid search proposer

Beta release

14 Apr 23:52

We are excited to announce TorchRec, a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production.

  • Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.
  • Optimized RecSys kernels powered by FBGEMM, including support for sparse and quantized operations.
  • A sharder which can partition embedding tables with a variety of strategies, including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.
  • A planner which can automatically generate optimized sharding plans for models.
  • Pipelining to overlap dataloading, device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.
  • GPU inference support.
  • Common modules for RecSys, such as models and public datasets (Criteo & MovieLens).

See our announcement and docs.