Description
A colleague of mine suggested that I should pretrain my models to get more robust models and better accuracy. Now... how can I do that, or what are the possible avenues here?
My models are all based on ECG data (1-dimensional, up to 3 channels, which I basically encode as "RGB").
He hinted that one should first train on the dataset differently: basically use a few convolutional layers to "compress" the signal and then "expand" it back again. The pretraining goal is to "reconstruct" the input signal (am I right here?), so the input is also the target output. (I know this can be done with an old-fashioned 3-layer NN, where the linear case essentially yields PCA; is that the same idea here?)
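To check my understanding, here is a rough sketch of what I think the pretraining stage would look like. I am assuming PyTorch here; the channel counts, kernel sizes, and the segment length of 1024 are just placeholders I made up, not anything from a real model:

```python
import torch
import torch.nn as nn

class ECGAutoencoder(nn.Module):
    """Compress a multi-lead ECG segment with strided 1D convolutions,
    then expand it back to the original length."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Encoder: halves the temporal resolution at every layer.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: mirrors the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, in_channels, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Pretraining step: the input is also the target (reconstruction loss).
model = ECGAutoencoder(in_channels=3)
x = torch.randn(8, 3, 1024)  # batch of 8 segments, 3 leads, 1024 samples each
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
```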
After pretraining, cut off some of the layers (how many is a good number?), add new ones so the real classification task can be performed, and train again on that classification task.
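And this is how I imagine the second stage, continuing the sketch above: keep the pretrained encoder, drop the decoder, and attach a fresh classification head. Again, just my guess at it; `num_classes` and the pooling head are placeholders of my own:

```python
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Reuse the pretrained encoder and add a new classification head."""

    def __init__(self, encoder: nn.Module, num_classes: int):
        super().__init__()
        self.encoder = encoder  # weights carried over from pretraining
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),  # pool over time -> (batch, 64, 1)
            nn.Flatten(),             # -> (batch, 64)
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

# Reuse the encoder from the pretrained autoencoder above.
clf = ECGClassifier(model.encoder, num_classes=5)

# Optionally freeze the encoder at first, then unfreeze it later to fine-tune.
for p in clf.encoder.parameters():
    p.requires_grad = False

logits = clf(x)  # train this with cross-entropy against the real labels
```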
Is this a reasonable approach? And is there maybe an example out there that shows an efficient way to do this?