Releases: pytorch/vision
v0.2.0: New transforms + a new functional interface
This version introduced a functional interface to the transforms, allowing for joint random transformation of inputs and targets. We also introduced a few breaking changes to some datasets and transforms (see below for more details).
Transforms
We have introduced a functional interface for the torchvision transforms, available under torchvision.transforms.functional. This now makes it possible to do joint random transformations on inputs and targets, which is especially useful in tasks like object detection, segmentation and super-resolution. For example, you can now do the following:
```python
from torchvision import transforms
import torchvision.transforms.functional as F
import random

def my_segmentation_transform(input, target):
    # sample crop parameters once and apply the same crop to input and target
    i, j, h, w = transforms.RandomCrop.get_params(input, (100, 100))
    input = F.crop(input, i, j, h, w)
    target = F.crop(target, i, j, h, w)
    # apply the same random horizontal flip to both
    if random.random() > 0.5:
        input = F.hflip(input)
        target = F.hflip(target)
    input, target = F.to_tensor(input), F.to_tensor(target)
    return input, target
```
The following transforms have also been added:
- F.vflip and RandomVerticalFlip
- FiveCrop and TenCrop
- Various color transformations:
  - ColorJitter
  - F.adjust_brightness
  - F.adjust_contrast
  - F.adjust_saturation
  - F.adjust_hue
- LinearTransformation for applications such as whitening
- Grayscale and RandomGrayscale
- Rotate and RandomRotation
- ToPILImage now supports RGBA images
- ToPILImage now accepts a mode argument so you can specify which colorspace the image should be
- RandomResizedCrop now accepts scale and ratio ranges as input parameters
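For instance, a minimal sketch combining a few of the new transforms (the size, ranges and probabilities below are illustrative choices, not defaults):

```python
from torchvision import transforms

# illustrative augmentation pipeline using several of the new transforms
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0), ratio=(0.75, 1.33)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.RandomVerticalFlip(),
    transforms.RandomGrayscale(p=0.1),
    transforms.ToTensor(),
])
```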
Documentation
Documentation is now auto-generated and published to pytorch.org
Datasets:
- SEMEION dataset of handwritten digits added
- Phototour dataset patches computed via multi-scale Harris corners are now available by setting name to notredame_harris, yosemite_harris or liberty_harris in the Phototour dataset
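A hedged sketch of loading these datasets (the root paths are placeholders):

```python
from torchvision import datasets

# SEMEION handwritten digits
semeion = datasets.SEMEION(root='./data/semeion', download=True)

# PhotoTour patches built from multi-scale Harris corners
notredame = datasets.PhotoTour(root='./data/phototour', name='notredame_harris', download=True)
```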
Bug fixes:
- Pre-trained DenseNet models are now CPU compatible #251
Breaking changes:
This version also introduced some breaking changes:
- The SVHN dataset has now been made consistent with other datasets by making the label for the digit 0 be 0, instead of 10 as it was previously (see #194 for more details)
- The labels for the unlabelled STL10 dataset are now an array filled with -1
- The order of the input args to the deprecated Scale transform has changed from (width, height) to (height, width) to be consistent with other transforms
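For example, code that passes a tuple to the deprecated Scale transform needs to swap its arguments (a hedged sketch; the sizes are arbitrary):

```python
from torchvision import transforms

# previously: transforms.Scale((256, 128)) meant width=256, height=128
# from this release on, the tuple is (height, width), consistent with other transforms
resize = transforms.Scale((128, 256))
```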
More models and some bug fixes
- Ability to switch image backends between PIL and accimage (see the sketch after this list)
- Added more tests
- Various bug fixes and doc improvements
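A minimal sketch of switching the image backend, assuming the torchvision.set_image_backend helper and an accimage installation:

```python
import torchvision

# accimage must be installed separately; otherwise keep the default PIL backend
torchvision.set_image_backend('accimage')
print(torchvision.get_image_backend())
```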
Models
- Fix for inception v3 input transform bug #144
- Added pretrained VGG models with batch norm
Datasets
- Fix indexing bug in LSUN dataset (#177)
- enable ~ to be used in dataset paths
- ImageFolder now returns the same (sorted) file order on different machines (#193)
Transforms
- transforms.Scale now accepts either a tuple as the new size, or a single integer
Utils
- can now pass a pad value to make_grid and save_image
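A minimal sketch of the new pad value (the batch shape and the pad value of 1, i.e. white, are illustrative):

```python
import torch
from torchvision import utils

images = torch.rand(8, 3, 64, 64)  # random batch, for illustration only
grid = utils.make_grid(images, nrow=4, padding=2, pad_value=1)
utils.save_image(images, 'grid.png', nrow=4, padding=2, pad_value=1)
```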
More models and datasets. Some bugfixes
New Features
Models
- SqueezeNet 1.0 and 1.1 models added, along with pre-trained weights
- Add pre-trained weights for VGG models
- Fix location of dropout in VGG
- torchvision.models now exposes num_classes as a constructor argument (see the sketch after this list)
- Add InceptionV3 model and pre-trained weights
- Add DenseNet models and pre-trained weights
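A hedged sketch of the num_classes constructor argument (10 classes chosen arbitrarily; the weights are randomly initialized):

```python
from torchvision import models

resnet = models.resnet18(num_classes=10)
vgg = models.vgg16(num_classes=10)
```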
Datasets
- Add STL10 dataset
- Add SVHN dataset
- Add PhotoTour dataset
Transforms and Utilities
- transforms.Pad now allows fill colors of either number tuples, or named colors like "white" (see the sketch after this list)
- Add normalization options to make_grid and save_image
- ToTensor now supports more input types
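A short sketch of the Pad fill colors and the normalization options (values are illustrative and assume the behaviour described above):

```python
import torch
from torchvision import transforms, utils

pad_white = transforms.Pad(10, fill='white')         # named color
pad_gray = transforms.Pad(10, fill=(128, 128, 128))  # number tuple

images = torch.randn(8, 3, 64, 64)
grid = utils.make_grid(images, normalize=True, scale_each=True)
```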
Performance Improvements
Bug Fixes
- ToPILImage now supports a single image
- Python3 compatibility bug fixes
- ToTensor now copes with all PIL Image types, not just RGB images
- ImageFolder now only scans subdirectories
- Having files like .DS_Store is no longer a blocking hindrance
- Check for non-zero number of images in ImageFolder
- Subdirectories of classes have recursive scans for images
- LSUN test set loads now
Just a version bump
A small release, just needed a version bump because of PyPI.
Add models and modelzoo, some bugfixes
New Features
- Add torchvision.models: definitions and pre-trained models for common vision models (see the sketch after this list)
  - ResNet, AlexNet, VGG models added with downloadable pre-trained weights
- Adding padding to RandomCrop; also add transforms.Pad
- Add MNIST dataset
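A minimal sketch of loading pre-trained models added here (weights are downloaded on first use):

```python
from torchvision import models

alexnet = models.alexnet(pretrained=True)
resnet = models.resnet18(pretrained=True)
```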
Performance Fixes
- Fixing performance of LSUN Dataset
Bug Fixes
- Some Python3 fixes
- Bug fixes in save_image, add single channel support
First release
Introduced Datasets and Transforms.
Added common datasets:
- COCO (Captioning and Detection)
- LSUN Classification
- ImageFolder
- Imagenet-12
- CIFAR10 and CIFAR100

Added utilities for saving images from Tensors.
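As an illustration of the initial datasets and the image-saving utility (the path and transform are placeholders):

```python
from torchvision import datasets, transforms, utils

# ImageFolder expects root/<class_name>/<image files>
dataset = datasets.ImageFolder('path/to/images', transform=transforms.ToTensor())
img, label = dataset[0]

# save a mini-batch of image tensors to disk
utils.save_image(img.unsqueeze(0), 'sample.png')
```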