Description
At the moment, neural networks must use the same activation function for every layer. This is problematic for autoencoders: when the inputs are standard-scaled (zero mean, unit variance), the output layer should use a linear activation, because a bounded activation such as tanh can never reconstruct values below -1 or above 1.
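
For illustration, here is a minimal sketch of the desired behavior using PyTorch (not this library's API; the shapes and layer sizes are placeholders): tanh on the hidden layers, but a plain linear output layer so reconstructions are unbounded.

```python
import torch
import torch.nn as nn

# Hypothetical autoencoder with per-layer activations:
# tanh in the hidden layers, linear (no activation) on the output.
autoencoder = nn.Sequential(
    nn.Linear(10, 4),   # encoder
    nn.Tanh(),          # hidden activation
    nn.Linear(4, 10),   # decoder output layer: linear, unbounded
)

x = torch.randn(8, 10) * 3   # standard-scaled data routinely exceeds [-1, 1]
recon = autoencoder(x)       # outputs can take any real value

# If the output layer were forced to use tanh as well, recon would be
# clamped to (-1, 1) and could never match inputs outside that range.
```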