
Commit e0bc85e

Added new beta 10 canvas encoder example and documentation

1 parent b8f48eb commit e0bc85e

File tree

7 files changed: +168 -6 lines changed


README.md

+2 -1

```diff
@@ -5,8 +5,9 @@
 
 <p align="center">
 <img src="https://img.shields.io/badge/Made with-Typescript-blue?color=000000&logo=typescript&logoColor=ffffff" alt="Static Badge">
-<a href="https://vitejs.dev"><img src="https://img.shields.io/badge/Powered%20by-Vite-000000?style=flat" alt="powered by vite"></a>
+<a href="https://vitejs.dev"><img src="https://img.shields.io/badge/Powered%20by-Vite-000000?style=flat&logo=Vite&logoColor=ffffff" alt="powered by vite"></a>
 <a href="https://discord.gg/h5QGXw8m"><img src="https://img.shields.io/discord/1115673443141156924?style=flat&logo=discord&logoColor=fff&color=000000" alt="discord"></a>
+<a href="https://x.com/diffusionstudi0"><img src="https://img.shields.io/badge/Follow for-Updates-blue?color=000000&logo=X&logoColor=ffffff" alt="Static Badge"></a>
 </p>
 <br/>
```

docs/guide/canvas.md

+87

# Canvas Encoder

The `CanvasEncoder` is a powerful tool for creating video recordings directly from a canvas element in the browser, ideal for capturing canvas-based games or animations without the need for our `Composition` object.
## Installation

To use the `CanvasEncoder`, import it into your project:

```typescript
import { CanvasEncoder } from '@diffusionstudio/core';
```
## Basic Usage

Start by creating a canvas element and setting its dimensions to match your desired video resolution:

```typescript
// Make sure to assign video dimensions
const canvas = new OffscreenCanvas(1920, 1080);

const encoder = new CanvasEncoder(canvas);
```
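A visible `<canvas>` element works just as well as an `OffscreenCanvas`; the Pixi.js example further down passes `app.canvas`, a regular `HTMLCanvasElement`. A minimal sketch:

```typescript
// Minimal sketch: recording an on-screen canvas element instead.
// The dimensions still determine the output video resolution.
const canvas = document.createElement('canvas');
canvas.width = 1920;
canvas.height = 1080;
document.body.appendChild(canvas);

const encoder = new CanvasEncoder(canvas);
```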
### Configuration Options

The `CanvasEncoder` constructor accepts an optional second argument to configure the output settings. The defaults are:

```typescript
{
  sampleRate: 44100,    // Audio sample rate in Hz
  numberOfChannels: 2,  // Number of audio channels
  videoBitrate: 10e6,   // Video bitrate in bits per second
  fps: 30,              // Frames per second for the video
}
```
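For example, to record at 60 fps with a higher bitrate, pass a configuration object as the second argument. This is a sketch; that omitted keys fall back to the defaults listed above is an assumption:

```typescript
// Assumption: unspecified options fall back to the defaults above
const encoder = new CanvasEncoder(canvas, {
  fps: 60,            // encode at 60 frames per second
  videoBitrate: 15e6, // 15 Mbit/s video bitrate
});
```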
## Video Encoding

After setting up the encoder, encode individual frames from the canvas to build up your video. The following example creates a 3-second video by encoding 90 frames:

```typescript
const ctx = canvas.getContext("2d")!;

for (let i = 0; i < 90; i++) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // background
  ctx.fillStyle = "blue";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  // animated "Hello world" text
  ctx.fillStyle = "white";
  ctx.font = "50px serif";
  ctx.fillText("Hello world", 10 + i * 20, 10 + i * 12);

  // Encode the current canvas state
  await encoder.encodeVideo();
}
```
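Each `encodeVideo()` call contributes one frame at the configured frame rate, so the frame count for a target duration is simply duration times fps: 90 frames at the default 30 fps yields the 3 seconds above. A tiny, hypothetical helper:

```typescript
// Hypothetical helper: frames needed for a clip of `seconds` length
// at the given frame rate (defaults to the encoder's default 30 fps)
function frameCount(seconds: number, fps = 30): number {
  return Math.round(seconds * fps);
}

frameCount(3); // 90 frames, as in the loop above
```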
## Audio Encoding (Optional)

To add audio to your video, use the Web Audio API. The `CanvasEncoder` supports encoding audio buffers alongside the video:

```typescript
const response = await fetch('https://diffusion-studio-public.s3.eu-central-1.amazonaws.com/audio/sfx/tada.mp3');
const arrayBuffer = await response.arrayBuffer();
// The context is only used for decoding, so a length of 1 sample frame suffices
const context = new OfflineAudioContext(2, 1, 48e3);

// Decode the audio data to get an AudioBuffer
const audioBuffer = await context.decodeAudioData(arrayBuffer);

// Encode the audio buffer
await encoder.encodeAudio(audioBuffer);
```

The audio will be automatically resampled to match the output configuration, so you don't need to worry about sample rate differences.

> Note: Adding this audio extends the resulting video to 6 seconds, the duration of the sound effect. In production you'll want to keep the video and audio durations in sync.
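One way to keep the two in sync, as the note suggests, is to derive the frame count from the decoded buffer (`AudioBuffer.duration` is a standard Web Audio property; the 30 fps value assumes the default configuration):

```typescript
// Sketch: render exactly as many frames as the audio lasts,
// so video and audio durations match in the final output
const fps = 30; // assumes the default encoder config
const totalFrames = Math.ceil(audioBuffer.duration * fps);

for (let i = 0; i < totalFrames; i++) {
  // ... draw the current frame ...
  await encoder.encodeVideo();
}

await encoder.encodeAudio(audioBuffer);
```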
## Exporting the Video

Once you've encoded all the video frames and audio data, finalize the encoding process and export the result as an MP4 file:

```typescript
const blob = await encoder.export();
```

The `export` method returns a `Blob` containing the video with a `video/mp4` MIME type. You can then save or process this blob as needed.
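From here you can hand the blob to the library's `downloadObject` helper (used in the example script below) or save it with plain browser APIs. A minimal sketch:

```typescript
// Minimal sketch: trigger a browser download of the encoded video
const url = URL.createObjectURL(blob);
const anchor = document.createElement('a');
anchor.href = url;
anchor.download = 'recording.mp4'; // hypothetical file name
anchor.click();
URL.revokeObjectURL(url);
```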

docs/guide/models.md

+2

Appends a navigation link at the end of the guide:

**Next:** [Canvas Encoder](/docs/guide/canvas.md)

docs/guide/video.md

+1

```diff
@@ -8,6 +8,7 @@ This guide provides a comprehensive overview of using the Diffusion Studio library
 * [Image Clip](/docs/guide/image.md)
 * [Text Clip](/docs/guide/text.md)
 * [Models](/docs/guide/models.md)
+* [Canvas Encoder](/docs/guide/canvas.md)
 
 ## Setting Up a Composition
```

examples/scripts/canvas-encoder.ts

+71

```typescript
import { Application, buildGeometryFromPath, GraphicsPath, Mesh, Texture } from 'pixi.js';
import { CanvasEncoder, downloadObject } from '@diffusionstudio/core';

// Create a new application
const app = new Application();

// Initialize the application
await app.init({
  backgroundColor: 'brown',
  height: 1080,
  width: 1920,
});

document.body.appendChild(app.canvas);

// A path combining a square with four circles
const path = new GraphicsPath()
  .rect(-50, -50, 100, 100)
  .circle(80, 80, 50)
  .circle(80, -80, 50)
  .circle(-80, 80, 50)
  .circle(-80, -80, 50);

const geometry = buildGeometryFromPath({
  path,
});

const meshes: Mesh[] = [];

// Scatter 200 randomly tinted meshes across the stage
for (let i = 0; i < 200; i++) {
  const x = Math.random() * app.screen.width;
  const y = Math.random() * app.screen.height;

  const mesh = new Mesh({
    geometry,
    texture: Texture.WHITE,
    x,
    y,
    tint: Math.random() * 0xffffff,
  });

  app.stage.addChild(mesh);

  meshes.push(mesh);
}

// create a new encoder (default frame rate: 30 FPS)
const encoder = new CanvasEncoder(app.canvas);

// 180 frames at 30 FPS -> 6 seconds of video
for (let i = 0; i < 180; i++) {
  // render to canvas
  app.render();
  // encode current canvas state
  await encoder.encodeVideo();
  // animate
  meshes.forEach((mesh) => {
    mesh.rotation += 0.02;
  });
}

// optionally create an audio buffer using the Web Audio API
const response = await fetch('https://diffusion-studio-public.s3.eu-central-1.amazonaws.com/audio/sfx/tada.mp3');
const arrayBuffer = await response.arrayBuffer();
const context = new OfflineAudioContext(1, 1, 48e3);
const audioBuffer = await context.decodeAudioData(arrayBuffer);

// encode audio buffer (sample rate will be adapted for you)
await encoder.encodeAudio(audioBuffer);

// finalize encoding/muxing and download the result
downloadObject(await encoder.export(), 'test.mp4');
```

package-lock.json

+4 -4

Some generated files are not rendered by default.

package.json

+1 -1

```diff
@@ -14,6 +14,6 @@
     "vite": "^5.4.0"
   },
   "dependencies": {
-    "@diffusionstudio/core": "^1.0.0-beta.9"
+    "@diffusionstudio/core": "^1.0.0-beta.10"
   }
 }
```
