Replies: 1 comment
I have the same confusion.
Here is the code:
https://github.com/PixArt-alpha/PixArt-sigma/blob/1b9027e84e37b7efb6d02f00cc71826c20a2c0cb/diffusion/model/nets/PixArt_blocks.py#L28
The first dimension of q and k is 1. We are wondering why you fix this dimension to 1 instead of using the batch size B. Won't this mix the information of different videos and images within a batch? Or do you use batch_size=1 for the whole training process?
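For what it's worth, one possible explanation (our guess, not confirmed by the authors): when the batch is folded into the sequence dimension (so the leading dimension becomes 1), a block-diagonal attention mask (e.g. the kind xformers' memory-efficient attention accepts) can prevent tokens from attending across samples, making the result identical to per-sample attention. A minimal pure-Python sketch with hypothetical toy data:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(q, k, v, mask=None):
    """Plain scaled dot-product attention over lists of vectors.
    mask[i][j] == False blocks query i from attending to key j."""
    d = len(q[0])
    out = []
    for i, qi in enumerate(q):
        scores = []
        for j, kj in enumerate(k):
            s = sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d)
            if mask is not None and not mask[i][j]:
                s = float("-inf")  # masked pairs get zero attention weight
            scores.append(s)
        w = softmax(scores)
        out.append([sum(wj * vj[t] for wj, vj in zip(w, v))
                    for t in range(len(v[0]))])
    return out

# Hypothetical toy batch: B = 2 samples, 2 tokens each, dim 2.
qs = [[[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.5], [1.0, 1.0]]]
ks = [[[1.0, 1.0], [0.0, 2.0]], [[2.0, 0.0], [1.0, 0.5]]]
vs = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]

# "Use B" formulation: attend within each sample separately.
per_sample = [attention(q, k, v) for q, k, v in zip(qs, ks, vs)]

# "Leading dim = 1" formulation: fold the batch into one long sequence
# and use a block-diagonal mask so samples cannot attend to each other.
flat_q, flat_k, flat_v = qs[0] + qs[1], ks[0] + ks[1], vs[0] + vs[1]
block_mask = [[(i // 2) == (j // 2) for j in range(4)] for i in range(4)]
flat_out = attention(flat_q, flat_k, flat_v, block_mask)

# Both formulations agree: no information leaks between samples.
for got, want in zip(flat_out, per_sample[0] + per_sample[1]):
    assert all(math.isclose(a, b) for a, b in zip(got, want))
```

So a leading dimension of 1 need not imply batch_size=1; it would be consistent with variable-length cross-attention where sequences are concatenated and masked block-diagonally. Whether that is what the linked code actually does is for the authors to confirm.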