Feat docker refactor #1432
Conversation
So I took a look at the docker refactor branch. Were you thinking of keeping the mediacms containers on bookworm while Redis and Postgres swap to alpine? Redis shouldn't have a problem, but collation would need some effort because of locale/collation differences between the two images. Do you have a task tracker or anything else that lays out direction or priorities?

My only issue is that I'm not using Docker for my deployment; I use the images, but have my own deployment structure for OpenShift (well, really Kubernetes if you subtract the container build process, which makes my life easier). I can test them development-wise since I can emulate x86 on my Mac, but it is mostly ephemeral-esque testing. Most of what I think you're trying to do I have already done, albeit for my own purposes in my own heavily modified version. I suspect we're about 45% of the way to my custom version, which I'll happily drop once I get more feature parity; merging is NOT fun at present.

I also have hardware encoding working for NVENC, Intel and RKMPP (arm64), though I turned it off due to quality (CPU-based is MUCH slower but looks better). I'm also interested in enabling multi-architecture support if possible; is that on the list of things to do? Bento4 doesn't seem to produce Linux binaries for architectures other than x86, so a custom build would likely be required there.

Thoughts?
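(Not from the branch, just to make the collation concern concrete: a rough sketch of the kind of check/reindex that moving a Postgres data directory from a glibc image to a musl one would imply. The container name, user and database below are placeholders.)

```bash
# Record the current database collations so there is something to compare against.
docker exec mediacms_postgres psql -U mediacms -c \
  "SELECT datname, datcollate, datctype FROM pg_database;"

# After the same data directory is mounted into a musl-based (alpine) image,
# text indexes built under glibc sort order can no longer be trusted and need a rebuild:
docker exec mediacms_postgres psql -U mediacms -d mediacms -c "REINDEX DATABASE mediacms;"
```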
Thanks for checking @mrjoshuap. There is no issue tracker for this, but we can open one or keep the discussion here; anything works.

For example, currently the entire mediacms folder is mounted into the running containers. After the refactor this won't be the case: you should not need to clone the repo in order to get a working system up. What is a good migration path here for existing systems? The same question applies to the PostgreSQL volume that is being switched.

Bookworm vs alpine is a detail; if there are issues with Redis/PostgreSQL it can be reverted.

Let's keep the hardware acceleration part in a separate issue; exploring and introducing a working solution is a goal in itself, and I haven't been involved in it at all so far.

On a side note, I think that ffmpeg can do HLS file creation, which is what we use Bento4 for, so perhaps that dependency could be removed (but again, this is unrelated to this refactoring!).
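(Purely as a sketch of one possible migration path for existing systems that currently rely on the bind-mounted checkout, not something taken from the branch; the volume name and directory are illustrative.)

```bash
# Create a named volume and copy the data out of the old bind-mounted checkout,
# so the refactored compose file can run without the cloned repo being present.
docker volume create mediacms_media
docker run --rm \
  -v "$(pwd)/media_files:/from:ro" \
  -v mediacms_media:/to \
  alpine sh -c "cp -a /from/. /to/"
```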
After a little analysis, here are some high-level pending tasks, mostly for your thoughts and quick tracking. As a side note, the current build is failing with an error:

This might simply be a result of working off an older main revision, with merge conflicts not yet fixed.

High Priority Tasks

Medium Priority Tasks

Low Priority Tasks
Yes, FFmpeg can create HLS (HTTP Live Streaming) packages, including playlists and segmented files, which makes it a popular all-in-one tool for transcoding and packaging video for both live and on-demand streaming. It handles adaptive bitrate ladders easily and supports features like segment duration control and encryption. Compared to Bento4, a specialized MP4 packaging toolkit, FFmpeg is simpler and faster for most basic workflows, especially with traditional MPEG-TS segments. However, Bento4 offers better precision and standards compliance for modern fragmented MP4 (fMP4/CMAF) segments, easier dual HLS+DASH output from the same files, and stronger support for advanced DRM and low-latency use cases. Many professional setups use FFmpeg for encoding and Bento4 for packaging to get the best of both worlds, but for straightforward HLS needs, FFmpeg alone is perfectly capable with few significant drawbacks.

The main thing I see as a potential problem is the HEVC part; I'm not familiar enough to know how big of a deal it is. Also, if we go with ffmpeg, we'd want to decide whether HLS creation stays a separate second step or gets combined with the transcoding. I don't think it's a good idea to do both in one pass, but I wanted to note some of this for reference.
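(For reference, a minimal sketch of ffmpeg's HLS muxer packaging an already-encoded file as a separate step; filenames and segment settings are illustrative only, not a proposal for the actual commands.)

```bash
# Repackage an existing H.264 mp4 into an HLS playlist plus MPEG-TS segments
# without re-encoding (-c copy). Encoding and packaging remain two steps here.
ffmpeg -i input.mp4 -c copy -f hls \
  -hls_time 6 \
  -hls_playlist_type vod \
  -hls_segment_filename 'segment_%03d.ts' \
  playlist.m3u8
```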
After reviewing my Dockerfile, which already handles several of these tasks, I discovered that I had at some point made the static files a volume. During container build, I move the default static files (those that should be present) to a
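(A minimal sketch of that pattern with hypothetical paths, not the actual Dockerfile/entrypoint from my branch: the build stashes the collected static files aside, and the entrypoint seeds the mounted volume on first start.)

```bash
# Entrypoint sketch: seed the mounted static volume if it is empty.
# /opt/static_default is assumed to hold the files produced at build time,
# e.g. via `python manage.py collectstatic --noinput` followed by a move.
if [ -z "$(ls -A /app/static 2>/dev/null)" ]; then
    cp -a /opt/static_default/. /app/static/
fi
```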
All this is under the assumption that a couple of the reported issues, along with some of my own experience, are in fact caused by the same or similar collation problems between GLIBC and MUSL. I haven't tried recently; it might no longer be an issue.

I've also stewed on a couple of possibilities and think we might be better served by keeping the base images the same and updating everything else until it stabilizes. Then we can work on moving to slimmer images (if that's the goal), where we'd possibly need to do collation/locale migrations or provide an alternative procedure to migrate existing deployments. I think it will be a safe (if tedious) operation as long as we don't alter the base images too drastically.
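(If the base image does change later, a logical dump/restore is one alternative migration procedure that sidesteps the glibc/musl collation question entirely; container, user and database names below are placeholders.)

```bash
# Dump from the old (glibc-based) container to a file on the host...
docker exec mediacms_postgres pg_dump -U mediacms -Fc mediacms > mediacms.dump

# ...then, after bringing up the new image with a fresh data volume, restore into it:
docker exec -i mediacms_postgres pg_restore -U mediacms -d mediacms < mediacms.dump
```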
Here's my branch for this; it's forked from main and works for 'production', 'full' and 'dev' -- at least on my Mac: https://github.com/mrjoshuap/mediacms/blob/feat/docker-modernization/
@mgogoulos I made some substantial updates to the documentation in my branch to provide more structure and layout. The next set of tasks I was going to look at is the single-server/Linux deployment and possibly guidance on Kubernetes deployments (which is MY preferred deployment type). I personally think it's silly to support the Linux deployment option long term, but I also have no idea of the actual usage.
Description
Very draft version of the needed changes.