Releases: diffusionstudio/core
v1.0.0-rc.1
Some breaking changes were introduced with v1.0.0-rc.1. Here is a migration guide:
`appendTrack` has been renamed to `shiftTrack`.

Before:

```js
const track = composition.appendTrack(VideoTrack);
```

After:

```js
const track = composition.shiftTrack(VideoTrack);
```
`appendClip` has been renamed to `add`.

Before:

```js
const clip = await composition.appendClip(new Clip());
// when using tracks
const clip = await track.appendClip(new Clip());
```

After:

```js
const clip = await composition.add(new Clip());
// when using tracks
const clip = await track.add(new Clip());
```
`position` has been renamed to `layer` on the track object.

Before:

```js
const track = composition.appendTrack(VideoTrack).position('bottom');
```

After:

```js
const track = composition.shiftTrack(VideoTrack).layer('bottom');
```
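If you have many call sites, a quick text-based rewrite can handle most of the renames. The helper below is hypothetical (not part of the library), and note that the `position` → `layer` rename only applies to track objects, so review the results before committing:

```javascript
// Hypothetical migration helper, not part of @diffusionstudio/core.
// Rewrites the v1.0.0-rc.1 renames in a source string.
// Caution: `.position(` -> `.layer(` is only correct on track objects;
// inspect matches manually before accepting them.
const RENAMES = [
  [/\.appendTrack\(/g, '.shiftTrack('],
  [/\.appendClip\(/g, '.add('],
  [/\.position\(/g, '.layer('],
];

function migrateSource(code) {
  return RENAMES.reduce(
    (src, [pattern, replacement]) => src.replace(pattern, replacement),
    code,
  );
}
```

For example, `migrateSource("composition.appendTrack(VideoTrack).position('bottom')")` produces `"composition.shiftTrack(VideoTrack).layer('bottom')"`.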
New Features
A new method for creating tracks has been introduced:

```js
const track = composition.createTrack('video');
// equivalent to
const track = composition.shiftTrack(VideoTrack);
```
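A string-keyed factory like `createTrack` can be modeled as a lookup from type names to track constructors. The sketch below is purely illustrative (the class names and the front-insertion behavior are assumptions, not the library's implementation):

```javascript
// Illustrative sketch only, not @diffusionstudio/core internals:
// maps track-type strings to track constructors.
class VideoTrack {}
class AudioTrack {}

const TRACK_TYPES = { video: VideoTrack, audio: AudioTrack };

class CompositionLike {
  constructor() {
    this.tracks = [];
  }
  shiftTrack(TrackClass) {
    const track = new TrackClass();
    this.tracks.unshift(track); // assumption: "shift" inserts at the front
    return track;
  }
  createTrack(type) {
    return this.shiftTrack(TRACK_TYPES[type]);
  }
}
```

The string form avoids importing the track classes at every call site, which is also what makes it usable from modules that must not depend on the track implementations directly.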
This enabled us to add a new method to the `MediaClip` for creating captions, which was previously not possible due to circular dependencies:

```js
const audio = new AudioClip(new File(), { transcript: new Transcript() });
await composition.add(audio);
await audio.generateCaptions();
```

Note that the `MediaClip` needs to be added to the composition for the `generateCaptions` method to be available.
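The attachment requirement can be understood as a simple guard: the clip only gets a reference back to the composition when it is added, and caption generation needs that reference. A hypothetical sketch of the pattern (not the library's actual code):

```javascript
// Hypothetical sketch, not @diffusionstudio/core internals:
// generateCaptions() refuses to run until the clip is attached.
class MediaClipLike {
  constructor() {
    this.composition = null;
  }
  attach(composition) {
    // In the real library this would happen inside composition.add(clip).
    this.composition = composition;
  }
  async generateCaptions() {
    if (!this.composition) {
      throw new Error('MediaClip must be added to a composition first');
    }
    return []; // caption clips would be created and returned here
  }
}
```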
v1.0.0-beta.15
- Reduced caption preset complexity by omitting the init function and serializing a base text clip
v1.0.0-beta.14
Added `VerdantCaptionPreset`.

Example usage:

```js
import * as core from '@diffusionstudio/core';

const composition = new core.Composition();

const transcript = await core.Transcript.from('https://diffusion-studio-public.s3.eu-central-1.amazonaws.com/docs/ai_ft_coding_captions.json');

await composition.appendTrack(core.CaptionTrack)
  .from(new core.MediaClip({ transcript: transcript.optimize() }))
  .create(core.VerdantCaptionPreset);
```
v1.0.0-beta.13
- migrated repository to open-source governed by the Mozilla Public License, v. 2.0
- minor bug fixes
v1.0.0-beta.12
Improved `TextClip` default values and removed confusing properties.
v1.0.0-beta.11
- Improved error handling when loading videos with invalid codecs
- Enforced inline playback on HTML5 Video Element (mobile related)
v1.0.0-beta.10
Added `CanvasEncoder` implementation.

The `CanvasEncoder` is a powerful tool for creating video recordings directly from a canvas element in the browser, ideal for capturing canvas-based games or animations without the need for our `Composition` object.
Basic Example
```ts
import { CanvasEncoder } from '@diffusionstudio/core';

// Make sure to assign video dimensions
const canvas = new OffscreenCanvas(1920, 1080);
const encoder = new CanvasEncoder(canvas);

const ctx = canvas.getContext("2d")!;

for (let i = 0; i < 90; i++) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // background
  ctx.fillStyle = "blue";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  // text
  ctx.fillStyle = "white";
  ctx.font = "50px serif";
  // animated Hello World
  ctx.fillText("Hello world", 10 + i * 20, 10 + i * 12);
  // Encode the current canvas state
  await encoder.encodeVideo();
}

const blob = await encoder.export();
```
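To let users save the result, the exported `Blob` can be turned into an object URL and downloaded with standard Web APIs. A minimal sketch (the filename is an assumption; check what `export()` actually returns for the container format):

```javascript
// Sketch using only standard Web APIs; assumes `blob` is the Blob
// returned by encoder.export() above.
function blobToObjectUrl(blob) {
  return URL.createObjectURL(blob);
}

function triggerDownload(blob, filename) {
  const url = blobToObjectUrl(blob);
  const a = document.createElement('a'); // browser-only
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url); // release the URL once the download has started
}
```

Usage would look like `triggerDownload(blob, 'recording.mp4')`; revoking the object URL afterwards avoids leaking the blob's memory for the lifetime of the page.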
v1.0.0-beta.9
- Added properties to all `Clip` constructors for more efficient initialization
v1.0.0-beta.8
const text = new core.TextClip('Bunny - Our Brave Hero')
.set({ position: 'center', stop: 90, stroke: { color: '#000000' } }); // will trigger 2 update events
v1.0.0-beta.9
const text = new core.TextClip({
text: 'Bunny - Our Brave Hero',
position: 'center',
stop: 90,
stroke: { color: '#000000' },
}); // will trigger a single update event
- Improved `set` method typing by defining dedicated interfaces
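The difference in update events can be modeled with a counter: each `set()` call emits one event regardless of how many properties it carries, so moving all properties into the constructor collapses the old two-step initialization into a single event. An illustrative sketch, not the library's code:

```javascript
// Illustrative sketch only, not @diffusionstudio/core internals:
// counts update events to show why constructor properties are cheaper.
class ClipLike {
  constructor(props = {}) {
    this.updates = 0;
    this.props = {};
    this.set(props); // one update, however many properties are passed
  }
  set(props) {
    Object.assign(this.props, props);
    this.updates += 1; // one event per set() call, not per property
    return this;
  }
}

// Old style: constructor + chained set() => two updates
const oldStyle = new ClipLike({ text: 'Bunny - Our Brave Hero' })
  .set({ position: 'center', stop: 90 });

// New style: everything in the constructor => one update
const newStyle = new ClipLike({ text: 'Bunny - Our Brave Hero', position: 'center', stop: 90 });
```

Fewer update events means fewer redraws of the composition during initialization, which is the efficiency gain the beta.9 notes refer to.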