coremltools 3.0b beta release
This is the first beta release of coremltools 3, which aligns with the preview of Core ML 3. It includes a new version of the .mlmodel specification, which brings support for:
- Updatable models
- More dynamic and expressive neural networks
- Nearest neighbor classifiers
- Recommenders
- Linked models
- Sound analysis preprocessing
- Runtime adjustable parameters
 
This release also enhances and introduces the following converters and utilities:
- Keras converter
  - Adds support for converting training details using the `respect_trainable` flag (see the Keras sketch after this list)
- Scikit converter
  - Nearest neighbor classifier conversion
- NeuralNetworkBuilder
  - Support for all new layers introduced in Core ML 3
  - Support for adding update details, such as marking layers updatable, specifying a loss function, and providing an optimizer
- KNearestNeighborsClassifierBuilder (new)
  - Newly added to support simple programmatic construction of nearest neighbor classifiers (see the builder sketch after this list)
- TensorFlow (new)
  - A new TensorFlow converter with improved graph transformation capabilities and support for version 4 of the .mlmodel specification
  - This is used by the new tfcoreml beta converter package as well (see the tfcoreml sketch after this list). Try it out with `pip install tfcoreml==0.4.0b1`
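As a minimal sketch of the Keras flow (the toy model, input/output names, and class labels below are placeholders, not part of this release):

```python
import coremltools
from keras.models import Sequential
from keras.layers import Dense

# A tiny stand-in classifier; any compiled Keras model works the same way.
keras_model = Sequential([Dense(2, input_shape=(4,), activation='softmax')])
keras_model.compile(loss='categorical_crossentropy', optimizer='sgd')

mlmodel = coremltools.converters.keras.convert(
    keras_model,
    input_names=['input'],
    output_names=['output'],
    class_labels=['class_a', 'class_b'],   # placeholder labels
    respect_trainable=True,                # carry over the training details
)
mlmodel.save('updatable_classifier.mlmodel')
```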
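A rough sketch of building a nearest neighbor classifier programmatically; the argument values and sample data below are illustrative placeholders, so check the KNearestNeighborsClassifierBuilder API docs for the exact signature:

```python
import coremltools
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder

# Build a classifier over 4-dimensional feature vectors with string labels.
builder = KNearestNeighborsClassifierBuilder(
    input_name='features',
    output_name='label',
    number_of_dimensions=4,
    default_class_label='unknown',
)

# Add a few (feature vector, label) examples; real data would come from your dataset.
builder.add_samples([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]], ['a', 'b'])

coremltools.models.MLModel(builder.spec).save('knn_classifier.mlmodel')
```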
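A sketch of converting a frozen TensorFlow graph through the new path; the paths, tensor names, and shapes are placeholders, and the `minimum_ios_deployment_target` argument name reflects the tfcoreml beta and may change in later releases:

```python
import tfcoreml

# Placeholder paths, tensor names, and input shape for a frozen TensorFlow graph.
mlmodel = tfcoreml.convert(
    tf_model_path='frozen_model.pb',
    mlmodel_path='model.mlmodel',
    input_name_shape_dict={'input': [1, 224, 224, 3]},
    output_feature_names=['Softmax'],
    minimum_ios_deployment_target='13',   # target the new Core ML 3 converter path
)
```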
 
This release also adds Python 3.7 support for coremltools.
Updatable Models
Core ML 3 supports on-device update of models. Version 4 of the .mlmodel specification can encapsulate all the necessary parameters for a model update. Nearest neighbor, neural network, and pipeline models can all be made updatable.
Updatable neural networks support training of convolution and fully connected layer weights (with back-propagation through many other layer types). Categorical cross-entropy and mean squared error losses are available, along with stochastic gradient descent and Adam optimizers.
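As a rough sketch of how a converted model can be made updatable with NeuralNetworkBuilder (the model path, layer name, and output name below are placeholders):

```python
import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

# Load an existing neural network model and mark the last dense layer as updatable.
spec = coremltools.utils.load_spec('classifier.mlmodel')    # placeholder path
builder = NeuralNetworkBuilder(spec=spec)

builder.make_updatable(['dense_1'])                          # placeholder layer name
builder.set_categorical_cross_entropy_loss(name='loss', input='output')  # 'output' = softmax output name
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=32))
builder.set_epochs(10)

coremltools.utils.save_spec(builder.spec, 'updatable_classifier.mlmodel')
```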
See examples of how to convert and create updatable models.
See the MLUpdateTask API reference for how to update a model from within an app.
Neural Networks
- Support for new layers in Core ML 3 added to the NeuralNetworkBuilder (see the sketch after this list)
  - Exact rank mapping of multi-dimensional array inputs
  - Control flow layers (branch, loop, range, etc.)
  - Element-wise unary layers (ceil, floor, sin, cos, gelu, etc.)
  - Element-wise binary layers with broadcasting (addBroadcastable, multiplyBroadcastable, etc.)
  - Tensor manipulation layers (gather, scatter, tile, reverse, etc.)
  - Shape manipulation layers (squeeze, expandDims, getShape, etc.)
  - Tensor creation layers (fillDynamic, randomNormal, etc.)
  - Reduction layers (reduceMean, reduceMax, etc.)
  - Masking / selection layers (whereNonZero, lowerTriangular, etc.)
  - Normalization layers (layerNormalization)
  - For a full list of supported layers in Core ML 3, check out the Core ML specification documentation (NeuralNetwork.proto)
- Support for conversion of recurrent networks from TensorFlow
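A minimal sketch of the builder with exact-rank inputs and one of the new layers; the names, shapes, and the choice of the squeeze layer are illustrative placeholders:

```python
import coremltools
from coremltools.models import datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder

# Rank-3 input; disable_rank5_shape_mapping opts in to the exact-rank
# (specification version 4 / Core ML 3) handling of multi-dimensional arrays.
input_features = [('data', datatypes.Array(1, 3, 4))]
output_features = [('out', None)]

builder = NeuralNetworkBuilder(input_features, output_features,
                               disable_rank5_shape_mapping=True)

# One of the new Core ML 3 layers: squeeze away the leading singleton axis.
builder.add_squeeze(name='squeeze_1', input_name='data',
                    output_name='out', axes=[0])

model = coremltools.models.MLModel(builder.spec)
```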
 
Known Issues
coremltools 3.0b1
- Converting a Keras model that uses mean squared error for the loss function will not create a valid model. A workaround is to set respect_trainable to False (the default) when converting and then manually add the loss function.
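A rough sketch of that workaround, with placeholder model path, layer name, and output shape (convert with `respect_trainable` left at the default, then attach the loss through the builder):

```python
import coremltools
from coremltools.models import datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

# Convert without training details, then add the update configuration by hand.
spec = coremltools.utils.load_spec('regressor.mlmodel')     # placeholder path
builder = NeuralNetworkBuilder(spec=spec)
builder.make_updatable(['dense_1'])                          # placeholder layer name
builder.set_mean_squared_error_loss(name='loss',
                                    input_feature=('output', datatypes.Array(1)))
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=16))
builder.set_epochs(10)
coremltools.utils.save_spec(builder.spec, 'updatable_regressor.mlmodel')
```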
 
Core ML 3 Developer Beta 1
- The default number of epochs encoded in the model is not respected; an update may run for 0 epochs and return immediately without training.
  - Workaround: explicitly supply epochs via MLModelConfiguration updateParameters using MLParameterKey.epochs, even if you want to use the default value encoded in the model.
- Loss returned by the Adam optimizer is not correct.
- Some updatable pipeline models containing a static neural network sub-model can intermittently fail to update with the error: “Attempting to hash an MLFeatureValue that is not an image or multi array”. This error will surface in task.error as part of the MLUpdateContext passed to the provided completion handler.
  - Workaround: retry the model update by creating a new update task with the same training data.
- Some of the new neural network layers may result in an error when the model is run on a non-CPU compute device.
  - Workaround: restrict computation to the CPU with MLModelConfiguration computeUnits.
- Enumerated shape flexibility, when used with neural network inputs with 'exact_rank' mapping (i.e. rank 5 mapping disabled), may result in an error during prediction.
  - Workaround: use range shape flexibility.