
Added distributed transcoding and playback design doc#914

Open
solidDoWant wants to merge 1 commit into zoriya:master from solidDoWant:docs/distributed-transcoding-and-playback-1

Conversation

@solidDoWant (Contributor):


This is lower-level documentation of the distributed transcoder architecture discussed on Discord here. If there are no major issues, I'll go ahead and start the actual implementation.

Signed-off-by: Fred Heinecke <fred.heinecke@yahoo.com>
Signed-off-by: solidDoWant <fred.heinecke@yahoo.com>
@@ -0,0 +1,178 @@
# Distributed transcoding and how playback works

Kyoo provides videos via [HTTP live streaming](https://www.cloudflare.com/learning/video/what-is-http-live-streaming/) (HLS). HLS is comprised of two components: a "playlist" (using the `.m3u8` file extension), and "transport stream segments", or "segments" (using the `.ts` file extension). Playlists contain a set of segments, which are pieces of the video being streamed. When playing a video, the client first requests a playlist of the video, and then requests segments of the video, as needed.
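For readers unfamiliar with the format, a minimal media playlist might look like the following (illustrative segment names and durations, not from the design doc):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment-0.ts
#EXTINF:10.0,
segment-1.ts
#EXT-X-ENDLIST
```

The client fetches this playlist first, then requests `segment-0.ts`, `segment-1.ts`, and so on as playback progresses.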
@zoriya (Owner):

Segments aren't necessarily `.ts` files; some versions of HLS support fMP4 (see #542), and one day another segment format could be adopted.


The transcoding service is designed to be highly available. When multiple transcoding service instances are deployed at once and configured properly, users should not notice when at least one service fails. This holds true even when the failed instance(s) were transcoding a video being actively played. This is because Kyoo supports _distributed, parallel transcoding_. The service can be configured so that a minimum number of transcoder instances will transcode the same parts of the same video. When multiple instances transcode the same parts of the same video at the same time, only one has to succeed for each segment for transcoding to be successful.

Because no two segments are guaranteed to come from the same transcoder instance, it is critical that all segments are entirely independent of each other, and do not overlap. "Parallel segments", or segments covering the same video and same time range that are produced by different instances, must always start with an [I-frame](https://en.wikipedia.org/wiki/Video_compression_picture_types). The start/finish time intervals of parallel segments must also match up exactly, with no extra (or missing) frames. Additionally, for interoperability with direct playback, segments must line up exactly with keyframes in the source video. See [here](https://zoriya.dev/blogs/transcoder/) for more information.
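For illustration, independent instances can derive identical cut points deterministically from the source video's keyframe timestamps, so parallel segments line up exactly. This is a sketch with assumed names and a hypothetical target length, not code from the design doc:

```python
def segment_boundaries(keyframe_times, target_len=6.0):
    """Cut segments only at source keyframes, aiming for roughly
    target_len seconds each. Because every instance sees the same
    keyframe list, every instance computes the same boundaries."""
    boundaries = [keyframe_times[0]]
    for t in keyframe_times[1:]:
        if t - boundaries[-1] >= target_len:
            boundaries.append(t)
    return boundaries

# Keyframe timestamps (seconds) probed from the source video.
kf = [0.0, 2.0, 4.1, 6.5, 8.0, 10.2, 12.9, 14.0]
print(segment_boundaries(kf))  # → [0.0, 6.5, 12.9]
```

Any instance asked to produce the segment starting at 6.5 s would then cut at exactly 6.5–12.9 s, regardless of which instance handles it.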
@zoriya (Owner):

Segments also need to start with an I-frame for better seeking by clients (without that, clients might need to fetch the segment before the seeked time [and create another transcode job just to get that one frame]).

Comment on lines +33 to +35
api ->> api: Generate playlist
api -) db: Create transcoding job for first k segments
api ->> cvp: Return video playlist
@zoriya (Owner) commented May 2, 2025:

Suggested change:

```diff
- api ->> api: Generate playlist
- api -) db: Create transcoding job for first k segments
- api ->> cvp: Return video playlist
+ api ->> api: Generate playlist
+ api ->> cvp: Return video playlist
+ api -) db: Create transcoding job for first k segments
```

We do not wait for the first k segments to be ready before returning the playlist; I think this change would highlight that.

@solidDoWant (Contributor, author):

Just to clarify, the `-)` arrow represents an async action, but yeah, I can reorder this.

Are you sure that you'd prefer "Return video playlist" before "Generate playlist"?

option Segment exists, not pending deletion
db ->> db: Update segment access time<br/>via trigger (for cleanup)
db -->> api: Return URL
option Segment pending deletion
@zoriya (Owner):

What does pending deletion mean here?

@solidDoWant (Contributor, author):

This is a corner case where a segment hasn't been accessed in a while, and is in the "pending deletion" part of segment cleanup here. This ensures that there isn't a race condition between DB and S3 state when requesting a segment that is about to disappear.

@zoriya (Owner):

Then we probably want to handle that exactly like "Segment does not exist", no? (As in, use the option below that will create a transcoding job.)

end

cvp ->> api: Requests video segment
loop Until segment is available
@zoriya (Owner):

In practice we'd wait for an event from pg (using LISTEN); can't we specify that as an arrow instead of a loop?

@solidDoWant (Contributor, author):

Yea I should change it to this. I wrote this before adding the job completion notifications
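The LISTEN-based flow discussed above can be modeled as a toy sketch. This uses a thread event as a stand-in for a real Postgres connection, and all names are hypothetical; the point is only that the API blocks on a notification instead of polling in a loop:

```python
import threading

class SegmentReadyChannel:
    """Toy stand-in for Postgres LISTEN/NOTIFY: the API thread blocks
    until the job worker signals completion, instead of polling the DB."""

    def __init__(self):
        self._ready = threading.Event()

    def notify(self):
        # Job worker side: fire NOTIFY once the segment is uploaded.
        self._ready.set()

    def wait(self, timeout=None):
        # Web API side: LISTEN, then block until notified (or timeout).
        return self._ready.wait(timeout)

chan = SegmentReadyChannel()
threading.Thread(target=chan.notify).start()
print(chan.wait(timeout=5.0))  # True once the worker has notified
```

With a real database, the worker would issue `NOTIFY` after committing the segment row, and the API would `LISTEN` on the same channel.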

Comment on lines +42 to +49
box Transcoder service (1..N)
participant api as Web API
participant jobs as Job worker
end
box Backend (HA)
participant db as Postgres
participant fs as Storage
end
@zoriya (Owner):

fs & jobs are never used in the schema (maybe just remove them?)

@solidDoWant (Contributor, author):

Yea sure I can do this on all the diagrams where they aren't in use. I just had all the participants listed to make it a little easier to scroll back and forth between the diagrams.

@zoriya (Owner):

Yeah, I saw that afterwards; it might be good to keep them, but the document should explain what they are before the schemas.

I have no clue what Job worker, Storage (there's two of them) or worker is

Comment on lines +83 to +95
loop pg_cron: trigger every time duration d
db ->> db: Create segment cleanup job
worker ->> storage: Get all segments
storage -->> worker: Return segments
loop For each segment
critical Cleanup old segments
worker ->> db: Get last accessed time
option No record, segment older than expiration time t, or<br/>Record exists, segment access time older than expiration time t
worker ->> db: Mark segment as "pending deletion"
worker ->> storage: Delete segment
worker ->> db: Delete segment record
end
end
@zoriya (Owner):

This looks super complex for what it is. I don't think we need pg_cron (which is an extension that is a pain to install), nor do we need a specific worker for that.

@solidDoWant (Contributor, author):

Yea you're right. I was on the fence about whether or not this should be triggered by the DB, or if workers should have a long running ticker/goroutine that handles cleanup.

@zoriya (Owner):

I would keep the current goroutine/timer workflow, let's keep it simple
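The goroutine/timer workflow agreed on above amounts to a periodic sweep of segment access times. A minimal sketch (function and variable names are assumptions, not from the doc):

```python
def cleanup_expired(access_times, now, ttl):
    """Return only segments accessed within the last `ttl` seconds;
    the caller deletes the rest from storage and from the DB."""
    return {seg: t for seg, t in access_times.items() if now - t <= ttl}

# A worker would call this from a long-running ticker loop, e.g.:
#   while not stop.wait(interval):
#       survivors = cleanup_expired(access_times, time.time(), ttl)
print(cleanup_expired({"a.ts": 100.0, "b.ts": 950.0}, now=1000.0, ttl=120.0))
# → {'b.ts': 950.0}
```

This keeps cleanup inside the existing worker process, with no DB-side scheduler required.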


Segments can be generated from videos as-is (direct playback), or transcoded. Kyoo supports both options. Transcoding is on the fly, slightly ahead of when a client is expected to request segments. Transcoding may be done one segment at a time, or in batches, which generally results in better transcoding performance. Once segments are transcoded, they are cached in the storage backend (filesystem, S3) for a user-configurable duration, and eventually removed when they have not been recently accessed. Cleanup is handled as a background job, and old segments may not be removed immediately.

The transcoding service is designed to be highly available. When multiple transcoding service instances are deployed at once and configured properly, users should not notice when at least one service fails. This holds true even when the failed instance(s) were transcoding a video being actively played. This is because Kyoo supports _distributed, parallel transcoding_. The service can be configured so that a minimum number of transcoder instances will transcode the same parts of the same video. When multiple instances transcode the same parts of the same video at the same time, only one has to succeed for each segment for transcoding to be successful.
@zoriya (Owner):

I'm not convinced we want an option to configure multiple instances that would transcode the same parts. This makes the code way more complex and aggravates mismatch bugs (if segments are not perfectly cut).

The benefit for this complexity (+ wasted compute) is somewhat debatable; I think we could handle service failures without ALWAYS running the transcode 2+ times.

@solidDoWant (Contributor, author):

The issue I have with this isn't that service failures cannot be handled at all - it's that they cannot be handled fast enough. A transcoding job essentially amounts to:

  1. Receive the job details (function call, postgres lookup, rabbitmq message, etc.)
  2. Run `ffmpeg -i <source video file> <args> <output file>`, waiting for completion
  3. Get all the files that ffmpeg produced and save them in the file backend
  4. Notify that the job is complete (function call, postgres, etc.)

When there is a failure in step 2, 3, or 4, the whole process needs to start over. Starting over adds transcoding latency, increasing the time it takes for the transcoded segments to be available to clients. When the transcoder takes a relatively long time (larger segment sizes, slower hardware, other streams being processed), and is barely faster than the actual client playback, starting the job over results in buffering. When the application is configured to transcode the same content on multiple instances, this latency is entirely mitigated.

IMO the issue of wasted compute shouldn't be considered here. I think that the user should have to explicitly enable/configure parallel transcoding; therefore, they should be the ones to decide if the compute tradeoff is worth it.

As far as segment cutting bugs, this approach should not introduce any additional bugs that would not also be added by supporting transcoding job restarts in the first place. If all transcoding for an entire video is not handled by a single atomic job (so that all segments come from a single ffmpeg call), then if there is a segment cutting bug, it will affect playback.
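As an illustrative back-of-the-envelope for the latency argument above (all numbers assumed, not from the thread): if transcoding one segment takes 9 s and a failure forces a restart, the segment only arrives after roughly 18 s, so a player holding less buffer than that stalls, while a parallel instance that succeeded on its first attempt would hide the failure entirely:

```python
def stall_seconds(transcode_time, buffered, failures=1):
    """Approximate playback stall (seconds) after `failures` serial
    restarts of a single-instance transcode job. Simplified model:
    restarts run back-to-back and the player drains its buffer."""
    arrival = transcode_time * (failures + 1)
    return max(0.0, arrival - buffered)

print(stall_seconds(transcode_time=9.0, buffered=8.0))  # → 10.0
```

The model is deliberately crude, but it shows why restart latency matters most when transcoding is barely faster than real-time playback.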

@zoriya (Owner):

We upload segments & update the pg metadata after every segment. We could just detect if segment transcoding takes more than `segment_time * 1.2` and trigger a new job.
In the worst-case scenario, users would have a few seconds of loading before playback starts again in a new job. With 95% of videos (where `segment_time` is <10s) this will be completely transparent for users.
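The watchdog check proposed above can be sketched in a few lines (function name and the default factor are assumptions; the `segment_time * 1.2` threshold is from the comment):

```python
def needs_retrigger(started_at, now, segment_time, factor=1.2):
    """True if a segment's transcode has run longer than
    segment_time * factor, signalling that a new job should be started."""
    return (now - started_at) > segment_time * factor

# With a 10 s segment, the watchdog fires after 12 s of transcoding.
print(needs_retrigger(started_at=0.0, now=13.0, segment_time=10.0))  # → True
print(needs_retrigger(started_at=0.0, now=11.0, segment_time=10.0))  # → False
```

A worker (or the API) would run this check against the job's start timestamp stored in Postgres.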

Comment on lines +142 to +178
### Job tracker cleanup
```mermaid
sequenceDiagram
participant cvp as Client video player
box Transcoder service (1..N)
participant api as Web API
participant jobs as Job worker
end
box Backend (HA)
participant db as Postgres
participant fs as Storage
end

loop pg_cron: trigger every time duration d
loop For each job type
db ->> db: Delete old jobs (cascade delete processing records)
end
end
```

### Worker startup
```mermaid
sequenceDiagram
participant cvp as Client video player
box Transcoder service (1..N)
participant api as Web API
participant jobs as Job worker
end
box Backend (HA)
participant db as Postgres
participant fs as Storage
end

worker ->> db: LISTEN for job notifications
worker ->> db: Look for available (pending) jobs
worker ->> worker: Process pending jobs (see Jobs section)
```
@zoriya (Owner):

If we don't have 2+ transcoders running on the same parts, we can almost remove all of that (we'd only need some logic to detect when a transcoder died unexpectedly, I think).

@solidDoWant (Contributor, author):

I don't think this is the case. To support multiple instances of the service, with all of them handling API calls, there must be some form of communication between them. Otherwise, only process-level failures can be handled (and handled downstream). The following failures could not be handled (as examples):

  • One worker receives too much client load and cannot transcode additional requests fast enough (this is my biggest concern)
  • The configured hardware transcoder on one machine does not support a specific codec
  • One worker is temporarily unable to access the media, file storage, etc. due to network misconfiguration, incorrect or expired credentials, etc.
  • An upgrade is in progress and either the old or new instance is unable to complete jobs until the upgrade is complete
  • Any other transient failure

@zoriya (Owner) commented May 4, 2025:

I don't understand how the proposed workflow fixes those issues, ngl.

How I see those problems handled:

> One worker receives too much client load and cannot transcode additional requests fast enough

Shouldn't the workload be handled by a load balancer?

> The configured hardware transcoder on one machine does not support a specific codec

This should just fall back to software transcoding; we can't really know in advance if hwaccel will be available.

> One worker is temporarily unable to access the media, file storage, etc. due to network misconfiguration, incorrect or expired credentials, etc.

I don't think we can (or should) handle that well. This should just error out.

> An upgrade is in progress and either the old or new instance is unable to complete jobs until the upgrade is complete

Updates should be transparent and allow both the old and new ones to work together. We store a version of the state in the DB (metadata extracted, thumbnail computed & co), and newer versions will need to be handled by older services.

> Any other transient failure

This should just error out.

@zoriya (Owner) commented May 4, 2025:

Small note, but there's a typo on `worker` vs `Job worker` and `Storage` vs `storage`, so the schemas are harder to read right now (we have the 4 of them specified in a weird order).
