Releases · apple/coremltools
coremltools 7.0b2
- The default neural network backend is now `mlprogram` for iOS15/macOS12. Previously, calling `coremltools.convert()` without providing the `convert_to` or `minimum_deployment_target` arguments used the lowest deployment target (iOS11/macOS10.13) and the `neuralnetwork` backend. The conversion process now defaults to iOS15/macOS12 and the `mlprogram` backend. You can change this behavior by providing a `minimum_deployment_target` or `convert_to` value (see the sketch after this list).
- Changes the default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when the `mlprogram` backend is used.
- Changes upper input range behavior when the backend is `mlprogram`:
  - If `RangeDim` is used and no upper bound is set (with a positive number), an exception will be raised.
  - If the user does not use the `inputs` parameter but there are undetermined dims in the input shape (for example, TF with "None" in the input placeholder), the shape will be sanitized to a finite number (`default_size + 1`) and a warning will be raised.
- New utility method for getting weight metadata: `coremltools.optimize.coreml.get_weights_metadata`. This information can be used to customize optimization across ops when using the `coremltools.optimize.coreml` APIs.
- Support for new PyTorch ops: `repeat_interleave` and `unflatten`.
- New and updated iOS17/macOS14 ops: `batch_norm`, `conv`, `conv_transpose`, `expand_dims`, `gru`, `instance_norm`, `inverse`, `l2_norm`, `layer_norm`, `linear`, `local_response_norm`, `log`, `lstm`, `matmul`, `reshape_like`, `resample`, `resize`, `reverse`, `reverse_sequence`, `rnn`, `rsqrt`, `slice_by_index`, `slice_by_size`, `sliding_windows`, `squeeze`, `transpose`.
- Various other bug fixes, enhancements, clean ups and optimizations.
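A minimal sketch of the new conversion default, how to opt back into the `neuralnetwork` backend, and a peek at the weight-metadata utility. The toy `torch.nn.Linear` model, the input name `x`, and the `weight_threshold` value are illustrative assumptions, and the exact structure returned by `get_weights_metadata` may differ between betas.

```python
import torch
import coremltools as ct
import coremltools.optimize.coreml as cto_coreml

# Toy model and input; names and shapes are illustrative.
model = torch.nn.Linear(8, 4).eval()
example = torch.rand(1, 8)
traced = torch.jit.trace(model, example)

# With 7.0b2 defaults, this produces an mlprogram targeting iOS15/macOS12.
mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=example.shape)])

# To keep the older behavior, pass convert_to or minimum_deployment_target explicitly.
nn_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    convert_to="neuralnetwork",  # or: minimum_deployment_target=ct.target.iOS14
)

# Weight metadata for the mlprogram model; weight_threshold=1 is an illustrative
# value chosen so the tiny toy weights show up at all.
metadata = cto_coreml.get_weights_metadata(mlmodel, weight_threshold=1)
for name, info in metadata.items():
    print(name, info)
```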
 
Special thanks to our external contributors for this release: @fukatani, @pcuenca, @KWiecko, @comeweber and @sercand
coremltools 7.0b1
- New submodule `coremltools.optimize` for model quantization and compression:
  - `coremltools.optimize.coreml` for compressing Core ML models in a data-free manner. The `coremltools.compression_utils.*` APIs have been moved here (see the sketch after this list).
  - `coremltools.optimize.torch` for compressing torch models with training data and fine-tuning. The fine-tuned torch model can then be converted using `coremltools.convert`.
- Updated MIL ops for iOS17/macOS14/watchOS10/tvOS17.
- A `pass_pipeline` parameter has been added to `coremltools.convert` to allow control over which optimizations are performed.
- Python 3.11 support.
- MLModel batch prediction support.
- Support for converting statically quantized PyTorch models.
- New Torch layer support: `randn`, `randn_like`, `scaled_dot_product_attention`, `stft`, `tile`.
- Faster weight palettization for large tensors.
- `coremltools.models.ml_program.compression_utils` is deprecated.
- Various other bug fixes, enhancements, clean ups and optimizations.
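A minimal, data-free compression sketch using the new `coremltools.optimize.coreml` submodule. The `.mlpackage` path and the 6-bit k-means palettization settings are placeholders, not recommendations.

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load an already-converted mlprogram model; the path is a placeholder.
mlmodel = ct.models.MLModel("MyModel.mlpackage")

# Data-free 6-bit k-means palettization applied to every eligible weight.
op_config = cto.OpPalettizerConfig(mode="kmeans", nbits=6)
config = cto.OptimizationConfig(global_config=op_config)
compressed = cto.palettize_weights(mlmodel, config=config)

compressed.save("MyModel_palettized.mlpackage")
```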
 
Core ML tools 7.0 guide: https://coremltools.readme.io/v7.0/
Special thanks to our external contributors for this release: @fukatani, @pcuenca, @mlaves, @cclauss, @smpanaro, @nikalra, @jszaday
coremltools 6.3
Core ML Tools 6.3 Release Note
- Torch 2.0 Support
 - TensorFlow 2.12.0 Support
 - Remove Python 3.6 support
- Functionality for controlling graph passes/optimizations; see the `pass_pipeline` parameter to `coremltools.convert`.
- A utility function for easily creating pipelines; see `utils.make_pipeline` (a sketch follows this list).
- A debug utility function for extracting submodels; see `converters.mil.debugging_utils.extract_submodel`.
- Various other bug fixes, enhancements, clean ups and optimizations.
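A short sketch of the two new utilities, assuming two already-converted models. The `.mlpackage` paths and the intermediate output name are placeholders.

```python
import coremltools as ct
from coremltools.converters.mil.debugging_utils import extract_submodel

# Chain two previously converted models into one pipeline model.
preprocess = ct.models.MLModel("Preprocess.mlpackage")
classifier = ct.models.MLModel("Classifier.mlpackage")
pipeline = ct.utils.make_pipeline(preprocess, classifier)
pipeline.save("Pipeline.mlpackage")

# Extract the subgraph that ends at an intermediate tensor
# ("intermediate_output" is an assumed name from the model).
submodel = extract_submodel(classifier, outputs=["intermediate_output"])
```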
 
Special thanks to our external contributors for this release: @fukatani, @nikalra and @kevin-keraudren.
coremltools 6.2
Core ML Tools 6.2 Release Note
- Support for new PyTorch versions: `torch==1.13.1` and `torchvision==0.14.1`.
- New ops support:
  - New PyTorch ops support: 1-D and N-D FFT / RFFT / IFFT / IRFFT in `torch.fft`, `torchvision.ops.nms`, `torch.atan2`, `torch.bitwise_and`, `torch.numel` (a conversion sketch follows this list).
  - New TensorFlow ops support: FFT / RFFT / IFFT / IRFFT in `tf.signal`, `tf.tensor_scatter_nd_add`.
- Existing ops improvements:
  - Supports int input for the `clamp` op.
  - Supports dynamic `topk` (k not determined during compile time).
  - Supports `padding='valid'` in PyTorch convolution.
  - Supports PyTorch Adaptive Pooling.
- Supports numpy v1.24.0 (#1718)
- Add int8 affine quantization for the compression_utils.
- Various other bug fixes, optimizations and improvements.
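A hedged sketch of converting a traced PyTorch module that exercises the newly supported `torch.fft` ops. The module, tensor shapes, input name, and the `minimum_deployment_target` choice are illustrative assumptions.

```python
import torch
import coremltools as ct

class FFTRoundTrip(torch.nn.Module):
    # Exercises the newly supported torch.fft.rfft / irfft ops.
    def forward(self, x):
        spectrum = torch.fft.rfft(x)
        return torch.fft.irfft(spectrum, n=x.shape[-1])

example = torch.rand(1, 1024)
traced = torch.jit.trace(FFTRoundTrip().eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="signal", shape=example.shape)],
    # Assumption: FFT lowering may require a newer opset; adjust as needed.
    minimum_deployment_target=ct.target.iOS16,
)
```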
 
Special thanks to our external contributors for this release: @fukatani, @ChinChangYang, @danvargg, @bhushan23 and @cjblocker.
coremltools 6.1
- Support for TensorFlow 2.10.
- New PyTorch ops supported: `baddbmm`, `glu`, `hstack`, `remainder`, `weight_norm`, `hann_window`, `randint`, `cross`, `trace`, and `reshape_as`.
- Avoid the root logger and use the coremltools logger instead.
- Support dynamic input shapes for the PyTorch `repeat` and `expand` ops.
- Enhance translation of the torch `where` op with only one input.
- Add support for the PyTorch einsum equation `'bhcq,bhck->bhqk'`.
- Optimization graph pass improvements:
  - 3D convolution batchnorm fusion
  - Consecutive relu fusion
  - Noop elimination
- Actively catch tensors with rank >= 6 and error out.
- Various other bug fixes, optimizations and improvements.
 
Special thanks to our external contributors for this release: @fukatani, @piraka9011, @giorgiop, @hollance, @SangamSwadiK, @RobertRiachi, @waylybaye, @GaganNarula, and @sunnypurewal.
coremltools 6.0
- MLProgram compression: affine quantization, palettize, sparsify. See `coremltools.compression_utils`.
- Python 3.10 support.
- Support for the latest scikit-learn version (`1.1.2`).
- Support for the latest PyTorch version (`1.12.1`).
- Support for TensorFlow `2.8`.
- Support for options to specify input and output data types, for both images and multiarrays (a conversion sketch follows this list):
  - Update coremltools python bindings to work with the GRAYSCALE_FLOAT16 image datatype of Core ML.
  - New options to set input and output types to multiarray of type float16 or grayscale image of type float16, and to set output types as images, similar to the `coremltools.ImageType` used with inputs.
- New compute unit enum type: `CPU_AND_NE` to restrict the model runtime to the Neural Engine and CPU.
- Support for several new TensorFlow and PyTorch ops.
- Changes to opset (available from iOS16, macOS13):
  - New MIL ops: `full_like`, `resample`, `reshape_like`, `pixel_unshuffle`, `topk`.
  - Existing MIL ops with new functionality: `crop_resize`, `gather`, `gather_nd`, `topk`, `upsample_bilinear`.
- API Breaking Changes:
  - Do not assume the source prediction column is "predictions", fixes #58.
  - Remove the `useCPUOnly` parameter from `coremltools.convert` and `coremltools.models.MLModel`. Use `coremltools.ComputeUnit` instead.
  - Remove ONNX support.
  - Remove multi-backend Keras support.
- Various other bug fixes, optimizations and improvements.
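A conversion sketch exercising the new float16 grayscale image I/O and the `CPU_AND_NE` compute unit. The toy convolution model, the input/output names, and the shapes are illustrative assumptions.

```python
import torch
import coremltools as ct

# Toy single-channel model; model, names, and shapes are illustrative.
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1).eval()
example = torch.rand(1, 1, 256, 256)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=example.shape,
                         color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)],
    outputs=[ct.ImageType(name="prediction",
                          color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)],
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    # Float16 grayscale I/O requires the iOS16/macOS13 opset.
    minimum_deployment_target=ct.target.macOS13,
)
```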
 
coremltools 6.0b2
- Support for new MIL ops added in iOS16/macOS13: `pixel_unshuffle`, `resample`, `topk`.
- Update coremltools python bindings to work with the GRAYSCALE_FLOAT16 image datatype of Core ML.
- New compute unit enum type: `CPU_AND_NE`.
- New PyTorch ops: `AdaptiveAvgPool2d`, `cosine_similarity`, `eq`, `linalg.norm`, `linalg.matrix_norm`, `linalg.vector_norm`, `ne`, `PixelUnshuffle`.
- Support for the `identity_n` TensorFlow op.
- Various other bug fixes, optimizations and improvements.
 
coremltools 6.0b1
- MLProgram compression: affine quantization, palettize, sparsify. See `coremltools.compression_utils`.
- New options to set input and output types to multiarray of type float16 or grayscale image of type float16, and to set output types as images, similar to the `coremltools.ImageType` used with inputs.
- Support for PyTorch 1.11.0.
- Support for TensorFlow 2.8.
- [API Breaking Change] Remove the `useCPUOnly` parameter from `coremltools.convert` and `coremltools.models.MLModel`. Use `coremltools.ComputeUnit` instead.
- Support for many new PyTorch and TensorFlow layers.
- Many bug fixes and enhancements.
 
Known issues
- While conversion and Core ML models with grayscale float16 images should work with the iOS16/macOS13 beta, the coremltools-CoreML python binding has an issue which would cause the `predict` API in coremltools to crash when either the input or the output is of type grayscale float16.
- The new compute unit configuration `MLComputeUnitsCPUAndNeuralEngine` is not available in coremltools yet.
coremltools 5.2
- Support latest version (1.10.2) of PyTorch
 - Support TensorFlow 2.6.2
- Support new PyTorch ops: `bitwise_not`, `dim`, `dot`, `eye`, `fill`, `hardswish`, `linspace`, `mv`, `new_full`, `new_zeros`, `rrelu`, `selu`.
- Support TensorFlow ops: `DivNoNan`, `Log1p`, `SparseSoftmaxCrossEntropyWithLogits`.
 - Various bug fixes, clean ups and optimizations.
 - This is the final coremltools version to support Python 3.5
 
coremltools 5.1
- New supported PyTorch operations: `broadcast_tensors`, `frobenius_norm`, `full`, `norm` and `scatter_add`.
- Automatic support for in-place PyTorch operations if the non-inplace operation is supported.
 - Support PyTorch 1.9.1
 - Various other bug fixes, optimizations and improvements.