
Possible error in reimplementation #8

@vvigilante

Description

With reference to the original implementation
https://github.com/kracwarlock/action-recognition-visual-attention/blob/6738a0e2240df45ba79e87d24a174f53adb4f29b/src/actrec.py#L111

it looks to me like they use this structure to decode the LSTM:
Dense(100) -> tanh -> Dense(n_classes) -> softmax
while you implemented in the function _decode_lstm:
Dense(n_classes) -> tanh -> Dense(n_classes) -> softmax

I think the first (hidden) fully connected layer has too few neurons, which makes it different from the original implementation.
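To make the difference concrete, here is a minimal NumPy sketch contrasting the two decoder structures; the names and sizes (hidden_dim, n_classes, the 100-unit layer) are illustrative, taken only from the shapes discussed above, not from either codebase:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

hidden_dim, n_classes = 512, 10   # hypothetical sizes
h = np.random.randn(1, hidden_dim)  # LSTM hidden state (biases omitted for brevity)

# Original implementation (actrec.py#L111):
# Dense(100) -> tanh -> Dense(n_classes) -> softmax
W1 = np.random.randn(hidden_dim, 100)
W2 = np.random.randn(100, n_classes)
probs_original = softmax(np.tanh(h @ W1) @ W2)

# This reimplementation (_decode_lstm):
# Dense(n_classes) -> tanh -> Dense(n_classes) -> softmax
W1_r = np.random.randn(hidden_dim, n_classes)
W2_r = np.random.randn(n_classes, n_classes)
probs_reimpl = softmax(np.tanh(h @ W1_r) @ W2_r)
```

With a small number of classes (e.g. 10), the reimplementation's hidden layer becomes a 10-unit bottleneck instead of the 100-unit layer in the original, which is presumably the concern raised here.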
