Can I use `await model.transcribe(...)`? I want to send hundreds of messages without a Kafka queue. The GPU load with a single request is low, and as far as I understand, the time each request waits before being handed to `model.transcribe()` grows with the number of pending requests.
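For context, here is a minimal sketch of the pattern I have in mind: since `model.transcribe()` is (as far as I know) a blocking call, the idea would be to run it in a worker thread via `asyncio.to_thread` and cap concurrency with a semaphore. `transcribe_blocking` below is a placeholder standing in for the real `model.transcribe`, and the semaphore limit of 8 is an arbitrary assumption to tune against GPU load:

```python
import asyncio
import time

# Placeholder for the real blocking model.transcribe call (hypothetical).
def transcribe_blocking(audio_id: str) -> str:
    time.sleep(0.05)  # simulates GPU-bound work
    return f"transcript-{audio_id}"

async def transcribe(audio_id: str, sem: asyncio.Semaphore) -> str:
    # Bound the number of in-flight requests so queueing delay stays controlled.
    async with sem:
        # Run the blocking call in a worker thread so the event loop stays free.
        return await asyncio.to_thread(transcribe_blocking, audio_id)

async def main() -> list:
    sem = asyncio.Semaphore(8)  # max concurrent transcriptions; tune to GPU load
    tasks = [transcribe(f"msg{i}", sem) for i in range(20)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```

Would this approach work, or does the model object itself serialize calls internally?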