Optimizing delivery of LL-HLS with NGINX #1689
Replies: 3 comments 4 replies
-
Are you certain you want your upstream to connect to port 3333? That's typically the WebRTC port, not LL-HLS. Can you explain your thinking with some comments inline? I'd also say your logging seems excessive, but perhaps that's what you want for debugging.
-
I noticed that the session (highlighted in yellow) appears to be unique for each user. With `proxy_cache_key $request_uri;` the cache hit ratio decreases significantly. I'm trying to understand more about this session behavior: what exactly is it, and how is it managed for each user? If I skip the session in the proxy server, the stream doesn't play smoothly. Could anyone provide some guidance on how to handle this efficiently?
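One variation I'm considering (I'm not sure whether OME is happy serving a response cached under one session to a different session, so this is only the nginx side of the question) is keying the cache on the path plus the LL-HLS blocking-reload parameters while still forwarding the full query string, session included, to OME. Ports and names below are placeholders:

```nginx
proxy_cache_path /var/cache/nginx/llhls levels=1:2 keys_zone=llhls:10m
                 max_size=1g inactive=1m use_temp_path=off;

upstream ome_llhls {
    server 127.0.0.1:3333;   # placeholder for the OME LL-HLS publisher port
    keepalive 16;
}

server {
    listen 8080;

    location / {
        proxy_pass http://ome_llhls;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_cache llhls;
        # Only the path and the standard LL-HLS query directives go into the
        # key, so per-user session tokens don't fragment the cache. The full
        # request URI (session included) is still sent upstream.
        proxy_cache_key "$scheme$host$uri$arg__HLS_msn$arg__HLS_part$arg__HLS_skip";
        proxy_cache_valid 200 1s;
        proxy_cache_lock on;     # collapse concurrent misses for the same key
    }
}
```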
-
Another topic here. Recently, we tried to deploy a change that would serve thumbnails from OME, rather than from our bespoke thumbnailing process. We use nginx as a reverse proxy, and we had OME set as an upstream like so (ports and names below are placeholders rather than our exact values):
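```nginx
upstream ome_playback {
    server 127.0.0.1:3333;   # placeholder for the OME HTTP publisher port
    keepalive 32;            # cache up to 32 idle connections per worker
}
```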
Some other relevant config:
Note, at no point do we add the following line, which I believe is a mistake, since the nginx docs say that for keepalive connections you should clear the Connection header:
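That line being the standard pair from the nginx upstream keepalive documentation:

```nginx
proxy_http_version 1.1;          # keepalive to an upstream requires HTTP/1.1
proxy_set_header Connection "";  # clear the Connection header so "close" isn't forwarded
```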
The keepalive is intended to improve performance by reusing connections rather than opening and closing a connection every time. Before the attempted deploy, the only thing using this upstream was the playback endpoint, which worked great. We deployed an update that added another location that proxy-passed requests for images to OME, for the thumbnails. This eventually caused issues for stream playback and resulted in 404s for stream playlists (m3u8 files), which were a huge pain in the ass to track down.

My guess is that either OME or nginx doesn't like requesting different content types over the same connection, or something along those lines, or perhaps it's a header issue (one is served from a location like hostname.tld/ome-screenshots, the other from a subdomain).

To fix this, we've made a separate upstream definition, added the line to clear the Connection header to the playback location (but not the image upstream, which I suppose might cause issues, now that I'm thinking about it?), and more recently I've disabled keepalive entirely for the images upstream (since I assume the performance gains there are minimal). Note that with 32 workers and a keepalive of 32, we'll have up to 32^2 connections to OME, which I'm not sure is ideal.

How have the rest of you sorted out nginx proxying? I'm sure we can improve on this, but I'm not fantastic with nginx. Do we even care about keepalives between nginx and OME on the same physical server (and different Docker containers)? Related: how does OME determine whether a client is a different client or not? It would be nice to have accurate client counts, but filtered through nginx, I'm not sure we could get them. For reference, here's our current upstream set:
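Roughly, the current shape is this (ports and hostnames swapped for placeholders):

```nginx
upstream ome_playback {
    server 127.0.0.1:3333;       # playback (LL-HLS playlists/segments)
    keepalive 32;                # reuse connections per worker
}

upstream ome_thumbnails {
    server 127.0.0.1:20080;      # thumbnails; placeholder port
    # no keepalive here: a fresh connection per image request is fine
}

server {
    listen 80;
    server_name example.tld;     # placeholder

    location /app/ {
        proxy_pass http://ome_playback;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
    }

    location /ome-screenshots/ {
        proxy_pass http://ome_thumbnails;
    }
}
```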
-
My latest question here was about how to use NGINX along with OvenMediaEngine.
Today, I’m sharing my configuration here. It might help others bootstrap their own configurations.
This configuration should use HTTP/2, but enabling it locally is not straightforward; use mkcert to generate local certificates.
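Roughly, it looks like this (paths, ports, and names are placeholders; adjust to your environment):

```nginx
upstream ome {
    server 127.0.0.1:3333;   # placeholder for the OME LL-HLS publisher port
    keepalive 16;
}

server {
    listen 443 ssl;
    http2 on;                # nginx >= 1.25.1; older versions use "listen 443 ssl http2;"
    server_name localhost;

    # Certificates generated locally with mkcert
    ssl_certificate     /etc/nginx/certs/localhost.pem;
    ssl_certificate_key /etc/nginx/certs/localhost-key.pem;

    location / {
        proxy_pass http://ome;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;

        # LL-HLS responses are latency-sensitive; don't buffer them in nginx
        proxy_buffering off;
    }
}
```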
For those who are already familiar with LL-HLS and NGINX, what other settings or approaches would you recommend?