
support for different activation functions in different layers of a neural network #8643

Open
@exalate-issue-sync

Description


At the moment, neural networks must use the same activation function for every layer. This is problematic for autoencoders: if StandardScaling is used on the inputs, the output layer should have a linear activation, because a bounded activation such as tanh can never reconstruct values less than -1 or greater than 1.
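For comparison, here is a minimal Keras sketch (for illustration only, not H2O's API) of the desired per-layer behaviour: tanh in the hidden layers and a linear output layer, so the autoencoder can reconstruct standardized inputs that fall outside [-1, 1]. Layer sizes and training settings are arbitrary placeholders.

```python
# Illustrative sketch (Keras, not H2O): autoencoder with tanh hidden layers
# and a linear output layer, so standard-scaled inputs can be reconstructed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20  # placeholder input width

autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(10, activation="tanh"),            # bounded hidden layer
    layers.Dense(5, activation="tanh"),             # bottleneck
    layers.Dense(10, activation="tanh"),
    layers.Dense(n_features, activation="linear"),  # unbounded reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")

# Standard-scaled data (mean 0, std 1) routinely exceeds [-1, 1],
# which a tanh output layer could never reproduce.
x = np.random.randn(1000, n_features)
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)
```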
