Replies: 1 comment
Hi @yinleon 👋, what's your GPU? It shouldn't raise any memory issues. I was running it with 4K images without issues on a 16GB GPU, because under the hood every image is resized to 1024x1024, the same shape the detection model was trained on. Best regards,
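A quick back-of-the-envelope check of the claim above. This sketch assumes the resized page is stored as a 3-channel float32 tensor, which is typical for model inputs but not confirmed in this thread:

```python
import numpy as np

# A 720x1280 RGB image as read from disk (uint8), like the ones in the question
img = np.zeros((720, 1280, 3), dtype=np.uint8)
input_mb = img.nbytes / 2**20

# Per the reply, every page is resized to 1024x1024 before detection.
# Assuming 3 channels stored as float32 (4 bytes each), one page costs:
resized_mb = (1024 * 1024 * 3 * 4) / 2**20

print(f"input: {input_mb:.1f} MiB, resized model input: {resized_mb:.1f} MiB")
# → input: 2.6 MiB, resized model input: 12.0 MiB
```

At roughly 12 MiB per page, even a modest GPU can hold many pages alongside the model weights, which is consistent with the reply's experience on a 16GB card.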
Apologies if this is obvious.
I am using the ocr_predictor model on a GPU. When I read the image(s), they appear to be numpy arrays. Do I need to convert the images into torch tensors on the same device?
Separately, the images are 720x1280, which I fear may be too large to hold in the GPU's memory.
Thank you!