Loss Convergence and whether ViT is Trained #18

@SuperStacie

Hi, thanks for the interesting work!

  1. I've been running the updated code and observed that, at the pretraining stage, the loss converges to ~3 (slightly above 3). Does my training show a similar tendency to your official experiment setting? If this looks correct: in the original LLaVA-1.5 pretraining, the loss finally converges to ~2. How should I interpret this difference?

  2. May I know the rough converged loss value of the fine-tuning stage?

  3. According to your paper, Sec. 3.1: "In our experiments, we show that ViT and position embedding parameters can be kept frozen during pretraining, and updating these parameters during the instruction-tuning stage is sufficient for good performance". This means the ViT is fine-tuned, but the author claims in another issue that the ViT is frozen all the time. Can you clarify this point? From my understanding, since the ViT positional embeddings change to adapt to the dynamic aspect ratio (similar to Pix2Struct), the ViT needs to be fine-tuned.
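For reference, the positional-embedding adaptation I have in mind looks roughly like the following. This is only my own minimal NumPy sketch of bilinear resizing of a 2D grid of position embeddings to a new patch grid; `resize_pos_embed` and its signature are hypothetical, not this repo's actual code:

```python
import numpy as np

def resize_pos_embed(pos_embed, old_grid, new_grid):
    """Bilinearly interpolate ViT position embeddings.

    pos_embed: (h*w, dim) array laid out row-major over an (h, w) patch grid.
    old_grid:  (h, w) of the pretrained grid.
    new_grid:  (h2, w2) target grid, e.g. for a different aspect ratio.
    Returns a (h2*w2, dim) array. Hypothetical helper for illustration.
    """
    h, w = old_grid
    h2, w2 = new_grid
    dim = pos_embed.shape[-1]
    grid = pos_embed.reshape(h, w, dim)

    # Sample coordinates in the old grid for each new grid cell.
    ys = np.linspace(0, h - 1, h2)
    xs = np.linspace(0, w - 1, w2)

    out = np.empty((h2, w2, dim), dtype=pos_embed.dtype)
    for i, y in enumerate(ys):
        y0 = int(np.floor(y))
        y1 = min(y0 + 1, h - 1)
        wy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x))
            x1 = min(x0 + 1, w - 1)
            wx = x - x0
            # Blend the four surrounding embeddings.
            top = (1 - wx) * grid[y0, x0] + wx * grid[y0, x1]
            bot = (1 - wx) * grid[y1, x0] + wx * grid[y1, x1]
            out[i, j] = (1 - wy) * top + wy * bot
    return out.reshape(h2 * w2, dim)
```

If the positional embeddings are resized like this per input image, my question is whether they (and the rest of the ViT) still receive gradient updates at any stage.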

Many thanks!
