
[Q] Training Details #30

@gevmin94


Hi, thanks for the great work on LLaSA!

I had a question regarding the released checkpoint and the training configuration. In the paper, the Training Details section mentions that the model was trained for 1.4M steps, with perceptual losses enabled for the last 200k steps.

However, in the released config, I see:

```yaml
min_steps: 3000000
max_steps: 3000000
```

Additionally, the setup appears to use 40 GPUs (5 nodes × 8 devices) with a per-device batch size of 8 and 6-second clips, i.e. 320 clips per step, which corresponds to an effective batch size of roughly 32 minutes of audio.
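
For reference, this is the batch-size arithmetic I am assuming (a minimal sketch; the node, GPU, batch, and clip-length values are just my reading of the released config, so please correct me if any of them are off):

```python
# Effective batch size, assuming 5 nodes x 8 GPUs, a per-device batch of 8,
# and 6-second audio clips (values as I read them from the released config).
nodes, gpus_per_node = 5, 8
per_device_batch = 8
clip_seconds = 6

clips_per_step = nodes * gpus_per_node * per_device_batch    # 320 clips per optimizer step
audio_minutes_per_step = clips_per_step * clip_seconds / 60  # 32.0 minutes of audio per step
print(f"{clips_per_step} clips/step ~ {audio_minutes_per_step:.1f} min of audio")
```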

Could you please clarify:
1. Was the released checkpoint trained for 1.4M or 3M steps?
2. Does the provided config reflect the exact setup used for training the released checkpoint?
3. Was the effective batch size indeed around 32 minutes of audio?

Thanks in advance!
