Now let's discuss how the architecture of our solution will look.

It will be a little different from the RTMP to HLS architecture. In most cases communication with an RTSP server is split into two phases:

- Negotiation of the stream parameters over RTSP.
- Receiving RTP stream(s) that the client and server have agreed upon.

Both of these phases are handled by the RTSP Source. Let's take a closer look at how each of them unfolds:
## Establishing the connection

When establishing a connection, the source will act as a connection manager, initializing the RTSP session and starting the stream playback.

It communicates with the server using [RTSP requests](https://antmedia.io/rtsp-explained-what-is-rtsp-how-it-works/#RTSP_requests). In fact, we won't need many requests to start playing the stream - take a look at the desired message flow:

First we want to get the details of the video we will be playing, by sending the `DESCRIBE` method.

Then we call the `SETUP` method, defining the transport protocol (RTP) and client port used for receiving the stream.

Now we can start the stream using the `PLAY` method.
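
To make this flow more concrete, here is a rough sketch of the negotiation using Membrane's RTSP client library. The URL, control path, ports, and exact function signatures are assumptions for illustration and may differ between library versions:

```elixir
# Hypothetical RTSP negotiation sketch - values are illustrative.
alias Membrane.RTSP

{:ok, session} = RTSP.start_link("rtsp://localhost:30001/stream")

# DESCRIBE: fetch the stream description (SDP) from the server
{:ok, %RTSP.Response{status: 200}} = RTSP.describe(session)

# SETUP: request RTP over UDP and declare the client ports to receive on
{:ok, _response} =
  RTSP.setup(session, "/control", [
    {"Transport", "RTP/AVP;unicast;client_port=20000-20001"}
  ])

# PLAY: ask the server to start streaming
{:ok, _response} = RTSP.play(session)
```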
## Receiving the stream
The source is a bin containing a few elements, each of them performing a specific media processing task. You can definitely notice some similarities to the pipeline described in the [RTMP architecture](03_RTMP_Architecture.md). However, we will only be processing video, so only the video processing elements will be necessary.

We have already used the `H264 Parser` and `HLS Sink Bin` elements in the RTMP pipeline; take a look at the [RTMP to HLS architecture](03_RTMP_Architecture.md) chapter for details on the purpose of those elements.

Let us briefly describe the purpose of the other components:
### UDP Source
This element is quite simple - it receives UDP packets from the network and sends their payloads to the next element.
### RTP Demuxer
This element is responsible for extracting media packets from the RTP packets they were transported in and routing them according to their [SSRC](https://datatracker.ietf.org/doc/html/rfc3550#section-3). In our case we only receive a single video stream, so only one output will be used.
### RTP H264 Depayloader
When transporting H264 streams over RTP, they need to be split into chunks and have some additional metadata included. This element's role is to unpack the RTP packets it receives from the Demuxer into a pure H264 stream that can be processed further.
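
To get a feel for how these pieces fit together, here is a rough sketch of how such a bin's children could be wired. The module names come from Membrane's UDP, RTP, and H264 plugins, but the options are illustrative and the demuxer's per-stream linking is simplified, so treat this as an assumption-laden sketch rather than the demo's actual code:

```elixir
# Hypothetical children spec for the receiving part of the source bin.
# In practice the demuxer's output pad is linked only after it reports
# which stream (SSRC) has arrived - that step is elided here.
spec = [
  child(:udp_source, %Membrane.UDP.Source{local_port_no: 20000})
  |> child(:rtp_demuxer, Membrane.RTP.Demuxer),

  child(:depayloader, Membrane.RTP.H264.Depayloader)
  |> child(:parser, Membrane.H264.Parser)
]
```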
In the tutorial we won't explain how to implement the solution from the ground up - instead, we will run the existing code from [Membrane demos](https://github.com/membraneframework/membrane_demo).

To run the RTSP to HLS converter, first clone the demos repo:
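
```bash
# Clone the demos repository and enter the RTSP to HLS demo.
# The directory name below is an assumption - check the repo for the actual one.
git clone https://github.com/membraneframework/membrane_demo.git
cd membrane_demo/rtsp_to_hls
mix deps.get  # fetch the project's dependencies
```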
The `output_path` attribute defines the storage directory for HLS files and the `rtp_port` defines the port on which we will expect the RTP stream once the RTSP connection is established.
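
For reference, these could be defined as module attributes at the top of the pipeline module; the values below are illustrative, not the demo's actual ones:

```elixir
@output_path "output"  # directory where HLS playlists and segments will land
@rtp_port 20000        # UDP port on which we expect the RTP stream
```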
The `rtsp_stream_url` attribute contains the address of the stream we will be converting. If you want to receive a stream from some accessible RTSP server, you can pass its URL here. In this demo we'll run our own simple server, using port 30001:
```bash
mix run server.exs
```
Now we can start the application:
```bash
mix run --no-halt
```
The pipeline will start playing; after a couple of seconds the HLS files should appear in the `output_path` directory.

Then we can play the stream using [ffmpeg](https://ffmpeg.org/) by pointing to the location of the manifest file:
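
```bash
# Play the generated HLS playlist with ffplay (part of the ffmpeg suite).
# The manifest name and directory are assumptions - point this at the
# playlist that actually appears in your output directory.
ffplay output/index.m3u8
```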
As explained in the [Architecture chapter](08_RTSP_Architecture.md), the pipeline will consist of an RTSP Source and an HLS Sink Bin. For now we won't connect these elements in any way, since we don't have information about what tracks we'll receive from the RTSP server we're connecting to.
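
Under those constraints, initialization might look roughly like this. The option names of `Membrane.RTSP.Source` and `Membrane.HTTPAdaptiveStream.SinkBin` are assumptions based on their plugins' typical configuration, not necessarily the demo's exact code:

```elixir
# Hypothetical handle_init/2 sketch - spawns both elements without linking them.
@impl true
def handle_init(_ctx, _opts) do
  spec = [
    # The RTSP Source handles both phases: RTSP negotiation and RTP ingest.
    child(:source, %Membrane.RTSP.Source{stream_uri: @rtsp_stream_url}),
    # The HLS Sink Bin writes playlists and segments to disk.
    child(:sink, %Membrane.HTTPAdaptiveStream.SinkBin{
      manifest_module: Membrane.HTTPAdaptiveStream.HLS,
      storage: %Membrane.HTTPAdaptiveStream.Storages.FileStorage{
        directory: @output_path
      }
    })
  ]

  {[spec: spec], %{}}
end
```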
Once we receive the `{:set_up_tracks, tracks}` notification from the source, we know which tracks have been set up during connection establishment and what we should expect. First we filter these tracks so that we have at most one video track and one audio track. Then we can create specs that will connect the output pads of the source with the input pads of the sink appropriately - audio to audio and video to video.
##### lib/pipeline.ex

```elixir
@impl true
def handle_child_notification({:set_up_tracks, tracks}, :source, _ctx, state) do
```
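
The callback is cut off above; here is a hypothetical sketch of how its body could continue, following the description in this chapter (the track fields and pad reference formats are assumptions, not the demo's exact code):

```elixir
# Keep at most one track per media type (:video / :audio).
tracks = Enum.uniq_by(tracks, & &1.type)

# Connect each source output pad to the matching sink input pad.
spec =
  for track <- tracks do
    get_child(:source)
    |> via_out(Pad.ref(:output, track.id))  # pad reference format assumed
    |> via_in(Pad.ref(:input, track.type))
    |> get_child(:sink)
  end

{[spec: spec], state}
```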