Provided checkpoints produce high EER #16

@w-hc

Description

Hi, thank you for providing the code and the checkpoints.

I am running the code on the updated final data release downloaded from https://datashare.is.ed.ac.uk/handle/10283/3336.

I modified the dataset-loading code slightly to account for the changed file names and formats, but otherwise the logic is kept as-is.

```
python model_main.py --eval --eval_output=./cm.txt --model_path=./sample_model/SPEC/spec_logical.pth
```

This checkpoint produces an EER of 43.01%.

```
python model_main.py --eval --eval_output=./cm.txt --model_path=./sample_model/SPEC/spec.pth
```

This checkpoint produces an EER of 49.05%.

```
python model_main.py --eval --eval_output=./cm.txt --model_path=./sample_model/SPEC/spect_finetune.pth
```

This checkpoint produces an EER of 56.51%.
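For reference, here is how I am sanity-checking the EER numbers from the score file. This is a minimal sketch in plain Python; the split of `cm.txt` scores into bona fide and spoof lists is an assumption about the file format, and a real evaluation would use the official ASVspoof scoring script:

```python
def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: the point where false-acceptance and
    false-rejection rates cross, scanning observed scores as thresholds."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    best_gap, eer = 1.0, None
    for t in thresholds:
        # False rejection: bona fide trials scored below the threshold.
        frr = sum(s < t for s in target_scores) / len(target_scores)
        # False acceptance: spoof trials scored at or above the threshold.
        far = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With well-separated scores this returns 0.0, and with fully overlapping score distributions it approaches 0.5, which is why EERs in the 43–56% range look like near-chance performance.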

```
python model_main.py --eval --features mfcc --eval_output=./cm.txt --model_path=./sample_model/MFCC/mfcc_logical.pth
```

This checkpoint fails to run:

```
RuntimeError: size mismatch, m1: [32 x 32], m2: [480 x 128] at /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/THC/generic/THCTensorMathBlas.cu:283
```

The output has shape 32×32 just before fc1, which does not match the 480×128 weight that fc1 expects, hence the shape mismatch.
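To illustrate what the error message is reporting, here is a small sketch of the shape check a linear layer effectively performs (pure Python, no PyTorch; the interpretation that fc1 expects 480 input features is my reading of the reported weight shape):

```python
def check_matmul_shapes(m1, m2):
    """Mimic the matmul shape check behind the RuntimeError:
    an (n, k) input can only multiply a (k, m) weight when k matches."""
    n, k1 = m1
    k2, m = m2
    if k1 != k2:
        raise RuntimeError(f"size mismatch, m1: [{n} x {k1}], m2: [{k2} x {m}]")
    return (n, m)
```

So the checkpoint's fc1 wants 480 input features per example, but my MFCC front-end is handing it 32, which suggests my feature-extraction settings (e.g. number of coefficients or frames) differ from the ones the checkpoint was trained with.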

My question is: are these checkpoints supposed to perform much better than this? If so, the error is likely on my end and I'll double-check.

Thank you!
