SRT streaming to cloudflare: can't see live input? #1843
davidbaraff started this conversation in General
Replies: 2 comments 5 replies
If there is a delay between publish and the first append, a similar issue may occur. Even a short interval, such as two seconds, can cause this. If you are calling AVCaptureSession.startRunning() after publish, you might want to try starting it beforehand.

HaishinKit also provides an option in MediaMixer for using your own AVCaptureSession:

    // Start MediaMixer in manual mode.
    let mixer = MediaMixer(captureSessionMode: .manual)
    await mixer.startRunning()

    // Manually append each CMSampleBuffer.
    await mixer.append(sampleBuffer)
1 reply
There is no delay. My capture session is up and running long before I call publish, so samples would be sent right away. But if I can manually call mixer.append(sampleBuffer), then I will try that.

1. I assume that works for both audio and video samples?
2. If the mixer is set up but I have not yet called publish, I assume calling mixer.append() is low-cost? (I don't want to run expensive conversions before the stream has started, as we run the capture session all the time, even when not streaming.)

Indeed, that would be preferable, because when calling append() directly on the stream instead, I had to write code to convert CMSampleBuffers to AVAudio PCM data, which I would greatly prefer not to write myself.

I'll try that and let you know if it worked.
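To make the manual-mode suggestion concrete: below is a hedged sketch of forwarding CMSampleBuffers from your own AVCaptureSession delegate into a manual-mode MediaMixer. The only HaishinKit call used is `mixer.append(sampleBuffer)`, as shown in the maintainer's reply; whether a single `append` overload covers both audio and video tracks is exactly the open question asked here, so verify it against the HaishinKit version in use. `CaptureForwarder` is a hypothetical helper name, not part of any library.

```swift
import AVFoundation
// import HaishinKit  // assumed dependency providing MediaMixer

final class CaptureForwarder: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    private let mixer: MediaMixer

    init(mixer: MediaMixer) {
        self.mixer = mixer
        super.init()
    }

    // AVFoundation delivers audio and video through this same delegate
    // method; the `output` parameter distinguishes them if needed.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Any custom per-frame processing would happen here, before append.
        Task { await mixer.append(sampleBuffer) }
    }
}
```

Because appending is a lightweight actor call here, the cost before publish should mostly be whatever processing you do yourself before the append, but that is an assumption worth profiling.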
4 replies
This is similar to the problem I had with Castr: the Example application, which uses RP and MediaMixer, works correctly with Cloudflare, but my app (which now works with Castr after the recent update that lets me set the expected media before publish) does not work with Cloudflare.
I can stream RTMP to Cloudflare with my app just fine, see the live preview, and see the VOD when done.
When I stream SRT to Cloudflare, I can see the VOD once the recording is done, but whenever I try to play back the live stream from Cloudflare while streaming, I initially get the current frame of the live stream and then their player just spins. GPT suggests it's an issue with IDR frames?
Anyway, my question is: what is different about the example app, which uses MediaMixer, from what I'm doing? I construct a standard AVCaptureSession and, in the output callbacks, feed audio/video samples as appropriate into the SRTStream.
Given that this doesn't work, if I make my own AVCaptureSession (and I have to be in control of the callback, as I have more processing to do there), is there some way I can make use of MediaMixer to feed it samples after I've processed them? (E.g., I take the CMSampleBuffer from the capture callback and heavily process it before sending it on.) That's why I haven't simply attached audio/video as in the Example app; I have to get in the middle.
But given that the example app allows live playback of SRT on Cloudflare and my app doesn't, there must be something extra that all the MediaMixer code is doing that I don't...
Any clues appreciated!
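One possible shape for this, sketched under stated assumptions: `SRTConnection`/`SRTStream` are taken to be HaishinKit's SRT types, and `addOutput(_:)` is assumed to be how a MediaMixer is attached to a stream (both should be checked against the HaishinKit version in use). The idea is to let the mixer sit between the app's processed buffers and the SRTStream, so the mixer performs whatever timing and keyframe bookkeeping the Example app gets for free.

```swift
import AVFoundation
// import HaishinKit  // assumed dependency providing MediaMixer, SRTConnection, SRTStream

// Hypothetical wiring: a manual-mode MediaMixer feeding an SRTStream,
// while the app keeps ownership of its own AVCaptureSession.
func makeSRTPipeline() async -> (MediaMixer, SRTStream) {
    let connection = SRTConnection()
    let stream = SRTStream(connection: connection)

    let mixer = MediaMixer(captureSessionMode: .manual)
    await mixer.addOutput(stream)   // assumption: MediaMixer exposes addOutput
    await mixer.startRunning()
    return (mixer, stream)
}

// In the capture callback: process first, then hand the result to the
// mixer instead of appending directly to the stream.
func handleCapture(_ sampleBuffer: CMSampleBuffer, mixer: MediaMixer) {
    let processed = sampleBuffer   // placeholder for the app's heavy processing
    Task { await mixer.append(processed) }
}
```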