Under recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb:
The CharRNN design uses a uni-directional LSTM layer, but with a fully connected layer applied on top of the whole output sequence, it seems like information from future time steps is leaked back into the early predictions. The model is then used differently at inference time to make predictions. Is this intended, or am I missing something?
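For reference, here is a minimal sketch of the pattern I am describing, not the notebook's exact code — the class name, layer sizes, and argument names (`n_tokens`, `n_hidden`, `drop_prob`) are my own assumptions:

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Sketch: uni-directional LSTM followed by a fully connected layer
    applied to every time step of the output sequence."""

    def __init__(self, n_tokens, n_hidden=512, n_layers=2, drop_prob=0.5):
        super().__init__()
        self.lstm = nn.LSTM(n_tokens, n_hidden, n_layers,
                            dropout=drop_prob, batch_first=True)
        self.dropout = nn.Dropout(drop_prob)
        self.fc = nn.Linear(n_hidden, n_tokens)

    def forward(self, x, hidden):
        # x: (batch, seq_len, n_tokens) one-hot encoded characters
        out, hidden = self.lstm(x, hidden)            # (batch, seq_len, n_hidden)
        out = self.dropout(out)
        # Flatten so the fully connected layer sees every time step at once
        out = out.contiguous().view(-1, out.size(2))  # (batch * seq_len, n_hidden)
        out = self.fc(out)                            # (batch * seq_len, n_tokens)
        return out, hidden
```

My question is about the step where the outputs of all time steps are fed through the fully connected layer together during training, whereas at sampling time the model is fed one character at a time.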