Replies: 2 comments 2 replies
- Hello, this is an FFmpeg question, not a server question. Anyway, from my experience, the input streams must be put at the beginning of the command, then you have to map them, then you can choose the codecs, and finally you can send out the stream. Example:
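For instance, something like this (a sketch; the mic device `hw:1,0` and the stream paths are assumptions to adapt to your setup):

```sh
# 1. inputs first: the camera stream and the ALSA microphone
# 2. then -map to pick video from input 0 and audio from input 1
# 3. then the codecs: copy the video, encode the audio to Opus
# 4. finally the output: publish back to the server on another path
ffmpeg -i rtsp://localhost:8554/cam -f alsa -i hw:1,0 \
  -map 0:v -map 1:a \
  -c:v copy -c:a libopus \
  -f rtsp rtsp://localhost:8554/cam_with_audio
```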
- By the way, the README has been updated with step-by-step instructions on how to add audio to an existing RPI Camera stream. GStreamer has been chosen over FFmpeg because tests showed that it introduces less latency: https://github.com/bluenviron/mediamtx#from-a-raspberry-pi-camera
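For comparison, a GStreamer pipeline in the same spirit might look like this (a sketch, not the exact README pipeline; the ALSA device and path names are assumptions):

```sh
# Pull the camera's H264 video, capture the USB mic, encode it to Opus,
# and publish both as a single stream back to the server:
gst-launch-1.0 rtspclientsink name=s location=rtsp://localhost:8554/cam_with_audio \
  rtspsrc location=rtsp://localhost:8554/cam latency=0 \
    ! rtph264depay ! h264parse ! s. \
  alsasrc device=hw:1,0 ! audioconvert ! audioresample ! opusenc ! s.
```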
- Hi,
I'm trying to merge a video stream with an audio stream.
The camera is a Raspberry Pi Camera Module V3, which I'm able to stream by itself. The audio source is a USB microphone, which I'm able to record from with `ffmpeg`.
To record from the USB microphone:
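For example (assuming the mic shows up as ALSA card 1; `arecord -l` lists the devices):

```sh
# Capture 10 seconds from the USB microphone into a WAV file
ffmpeg -f alsa -i hw:1,0 -t 10 test.wav
```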
This works properly.
In the rtsp-simple-server configuration, I have this for the camera:
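For example, a minimal path entry of this shape (the path name `cam` is an assumption):

```yaml
paths:
  cam:
    source: rpiCamera   # native Raspberry Pi Camera support
```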
Which also works properly.
Now, based on another question, I have tried to merge the two using `runOnDemand` and `ffmpeg`, but the stream crashes when I request it. Below the section above, I have added this:
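A sketch of that kind of section (the ALSA device, audio codec, and path names are assumptions; `$RTSP_PATH` is the placeholder rtsp-simple-server expands to the requested path):

```yaml
  cam_with_audio:
    runOnDemand: >
      ffmpeg -i rtsp://localhost:8554/cam -f alsa -i hw:1,0
      -map 0:v -map 1:a -c:v copy -c:a libopus
      -f rtsp rtsp://localhost:8554/$RTSP_PATH
    runOnDemandRestart: yes
```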
Any idea what I'm missing here?
If I remove the `-i rtsp://localhost:8554/cam -c:v copy` part, I'm able to stream the audio only, so the problem seems to be with getting the camera stream. Thank you!