
feat: a working first mvp of the project #1

Open
subhraneel2005 wants to merge 1 commit into main from prototype-v1

Conversation


subhraneel2005 (Owner) commented Jan 31, 2026

HLD (high-level design) flow of the current prototype:

[vidgen HLD diagram]

Summary by CodeRabbit

Release Notes

  • New Features

    • Added video rendering with dynamic captions, audio, and customizable highlight colors
    • Introduced automatic caption generation directly from audio files
    • Implemented background video support with local caching for improved performance
  • Documentation

    • Updated README with new rendering workflow instructions
  • Chores

    • Added build scripts and updated configuration



coderabbitai bot commented Jan 31, 2026

📝 Walkthrough

Walkthrough

This PR introduces a complete video generation pipeline integrating caption transcription via Whisper, audio processing (MP3 to 16kHz WAV conversion), Cloudinary video caching, and dynamic Remotion composition rendering through a new POST API endpoint. Components are refactored to accept props instead of relying solely on store state.

Changes

Cohort / File(s) Summary
Configuration & Package Setup
.gitignore, package.json, README.md, remotion/scripts/download-whisper.mjs
Added cache directories to .gitignore. Updated npm scripts to bundle Remotion and download Whisper model (tiny.en). Updated README with props-based rendering example.
Audio Processing
app/actions/generate-audio.ts, app/actions/mp3-16k_wav.ts
Swapped audio output from 16 kHz PCM to MP3, then post-processed via ffmpeg into a 16 kHz WAV. New mp3ToWav helper module executes the ffmpeg conversion with specific codec flags.
Caption Generation Pipeline
lib/getCaptions.ts, remotion/scripts/generate-captions.ts, remotion/scripts/whisper-worker.ts, components/screens/generate-captions-screen.tsx
Refactored getCaptions to accept audioPath and persist captions JSON to disk. New whisper-worker script performs Whisper transcription and caption conversion. generateCaptions now exports a callable function. Added GenerateCaptions UI component with loading state and error handling.
Video Caching & Storage
lib/video-cache.ts
New module providing getCachedVideo (MD5-based local caching of Cloudinary videos) and cleanOldCache (pruning stale files by age) utilities.
API & User Interface
app/api/render-video/route.ts, app/page.tsx, components/render-button.tsx
New POST endpoint orchestrates complete render flow: loads captions, caches background videos, constructs composition props, renders to MP4, cleans old cache. Added RenderButton component to trigger rendering. Integrated GenerateCaptions into main page layout.
Remotion Composition & Components
remotion/Composition.tsx, remotion/Root.tsx, remotion/CaptionText.tsx, remotion/RedditOverlay.tsx
Added new public props (audioUrl, hook, highlightColor, videoUrl) to composition. Replaced Html5Audio with conditional OffthreadVideo for background video. Updated CaptionText and RedditOverlay to consume props instead of store. Introduced RenderInputProps interface and default props structure.
State Management & Utilities
store/useVideoStoryStore.ts, remotion/uyils/serializeStoreForRender.ts
Added captionsPath state, setCaptionsPath setter, and getRenderData() method to store. Wrapped store with persist middleware. New serializeStoreForRender utility exports render-ready props from store state.
Data & Assets
props.json, remotion/captions/captions-*.json
Added example props.json with caption data structure. Generated timestamped caption JSON files consumed by rendering pipeline.
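The MD5-keyed lookup in lib/video-cache.ts described above can be sketched as a small helper (a minimal illustration; CACHE_DIR and cachedPathFor are assumed names, not necessarily the module's actual identifiers):

```typescript
import { createHash } from "crypto";
import * as path from "path";

// Hypothetical cache directory; the real module keeps its own constant.
const CACHE_DIR = path.join("public", "video-cache");

// Derive a stable local filename from the remote URL so repeated
// renders of the same background video hit the same cache entry.
export function cachedPathFor(videoUrl: string): string {
  const hash = createHash("md5").update(videoUrl).digest("hex");
  return path.join(CACHE_DIR, `${hash}.mp4`);
}
```

Because the filename is a pure function of the URL, a cache hit is just an existence check on the derived path.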

Sequence Diagram(s)

sequenceDiagram
    participant User as User (Browser)
    participant UI as UI Components
    participant Store as Zustand Store
    participant API as /api/render-video
    participant Whisper as Whisper Engine
    participant Cache as Video Cache
    participant Remotion as Remotion Renderer
    participant Disk as File System

    User->>UI: Click "Generate Final Video"
    UI->>Store: getRenderData()
    Store-->>UI: {audioUrl, hook, highlightColor, videoUrl, captionsPath}
    UI->>API: POST with render data
    
    API->>Disk: Load captions JSON from path
    Disk-->>API: captions array
    
    alt videoUrl is Cloudinary
        API->>Cache: getCachedVideo(cloudinaryUrl)
        Cache->>Disk: Check cache by MD5 hash
        alt Cache miss
            Cache->>Disk: Download from Cloudinary
            Disk-->>Cache: Save MP4
        end
        Cache-->>API: Relative cache path
    end
    
    API->>Remotion: Load composition with bundled assets
    API->>Remotion: Override durationInFrames from last caption
    Remotion->>Remotion: Calculate frames from captions + audio
    
    API->>Remotion: Render composition to MP4
    Remotion->>Disk: Write video file
    Disk-->>Remotion: Complete
    
    API->>Disk: cleanOldCache(7 days)
    Disk->>Disk: Delete stale cache files
    
    API-->>User: {success: true, videoPath: "..."}
    User->>User: Display final video link

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Whispers of captions now dance through the air,
Videos cached with such meticulous care,
Props flow like carrots from store to the frame,
Remotion renders—let rendering acclaim!
With MP3 to WAV and Cloudinary's might,
The pipeline's complete—what a wonderful sight!

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 11.11% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Title check ❓ Inconclusive The title 'feat: a working first mvp of the project' is vague and uses generic phrasing that does not clearly convey the specific technical changes introduced in this comprehensive changeset. Provide a more descriptive title that captures the main feature or primary change (e.g., 'feat: add video rendering pipeline with caption generation and audio-to-video composition').
✅ Passed checks (1 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.





coderabbitai bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
app/actions/generate-audio.ts (2)

5-38: ⚠️ Potential issue | 🟠 Major

Create public/audios directory before writing files.
The directory is gitignored and won't exist on a fresh clone, causing writeFile to throw ENOENT. Create the directory with the recursive flag before attempting to write.

Suggested fix
-import { writeFile } from 'fs/promises';
+import { mkdir, writeFile } from 'fs/promises';
@@
-        const mp3Path = path.join(process.cwd(), "public/audios", `${baseName}.mp3`);
-        const wavPath = path.join(process.cwd(), "public/audios", `${baseName}_16k.wav`);
+        const audioDir = path.join(process.cwd(), "public/audios");
+        await mkdir(audioDir, { recursive: true });
+        const mp3Path = path.join(audioDir, `${baseName}.mp3`);
+        const wavPath = path.join(audioDir, `${baseName}_16k.wav`);

41-48: ⚠️ Potential issue | 🟡 Minor

Keep returned audio metadata consistent with the WAV URL.

The returned base64, format, and mediaType fields represent the MP3 source from the API, while the url points to the WAV file created by mp3ToWav. Although the current consumer only uses url, this inconsistency could mislead developers who might use the base64 or mediaType fields, expecting them to represent the WAV output. Consider either: (1) converting base64/format/mediaType to represent the final WAV output, or (2) clearly separating source and output metadata in the return object.
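Option (2) — separating source and output metadata — could look roughly like this (the GeneratedAudio shape and describeAudio helper are illustrative, not the action's real API):

```typescript
// Illustrative return shape: the MP3 fields from the TTS API stay under
// `source`, while the ffmpeg-produced WAV is described under `output`,
// so neither can be mistaken for the other.
interface GeneratedAudio {
  source: { base64: string; format: "mp3"; mediaType: "audio/mpeg" };
  output: { url: string; format: "wav"; sampleRateHz: number };
}

export function describeAudio(mp3Base64: string, wavUrl: string): GeneratedAudio {
  return {
    source: { base64: mp3Base64, format: "mp3", mediaType: "audio/mpeg" },
    output: { url: wavUrl, format: "wav", sampleRateHz: 16000 },
  };
}
```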

🤖 Fix all issues with AI agents
In `@app/actions/mp3-16k_wav.ts`:
- Around line 6-16: The ffmpeg conversion in mp3ToWav uses execFileAsync
(promisified execFile) without increasing stdout/stderr buffer, so long ffmpeg
stderr output can overflow the default 1MB and abort conversion; update the exec
call in mp3ToWav to pass an options object with a larger maxBuffer (e.g.,
maxBuffer: 10 * 1024 * 1024) to execFileAsync, or alternatively add the
"-nostats" flag to the ffmpeg args array to suppress progress output; modify the
call site that invokes execFileAsync with ("ffmpeg", [...args...], { maxBuffer:
10 * 1024 * 1024 }) (or include "-nostats" in the args) to resolve the buffer
overflow.

In `@app/api/render-video/route.ts`:
- Around line 8-25: The POST handler reads a client-controlled
renderData.captionsPath and uses path.join(process.cwd(),
renderData.captionsPath) in the POST function, allowing path traversal; restrict
loading to a known base directory (e.g., a captions directory) and validate the
resolved path is inside it before reading. Change the logic around
path.join()/fs.readFileSync: compute the absolute resolved path (path.resolve or
equivalent) against your baseDir, verify the resolved path startsWith the
baseDir (or is within an allow-list), reject requests that violate this
constraint with a 400/403, and only then read and JSON.parse the file; keep the
existing error handling for read/parse failures. Ensure references: POST,
renderData.captionsPath, captionsFullPath, process.cwd(), and path.join are
updated accordingly.
- Around line 27-40: The substring check using renderData.videoUrl is unsafe;
replace the naive includes("cloudinary.com") logic by parsing the URL with new
URL(renderData.videoUrl) and validate protocol and hostname before calling
getCachedVideo: ensure url.protocol === "https:" (or allow "http:" only if
justified) and that url.hostname === "cloudinary.com" or
url.hostname.endsWith(".cloudinary.com") (reject crafted userinfo/host
variants), and if validation fails return an error response instead of
attempting the fetch; keep the existing try/catch around getCachedVideo and only
call getCachedVideo when the parsed URL passes these checks.
- Around line 59-75: The code computes durationInFrames using a hardcoded 30fps
and mutates composition.durationInFrames after selectComposition; instead, move
the caption-based duration logic into the composition definition via
calculateMetadata so selectComposition returns correct metadata, or if you
cannot change the composition, compute frames using composition.fps (from the
returned composition) and avoid directly mutating
composition.durationInFrames—use the calculateMetadata hook on the <Composition>
(or a separate metadata calculation before rendering) to derive duration from
captions (use captions.length to handle missing captions and compute
lastCaptionEndMs safely).
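The caption-driven duration logic above can be isolated in a pure, fps-aware helper and wired into Remotion's calculateMetadata hook (a sketch; durationFromCaptions and the 2000 ms fallback are assumptions mirroring the route's current fallback):

```typescript
interface Caption {
  endMs: number;
}

// Compute the composition length from the last caption, using the
// composition's actual fps instead of a hardcoded 30, with a fallback
// for an empty caption list.
export function durationFromCaptions(
  captions: Caption[],
  fps: number,
  fallbackMs = 2000
): number {
  const lastEndMs = captions.length
    ? Math.max(...captions.map((c) => c.endMs))
    : fallbackMs;
  return Math.ceil((lastEndMs / 1000) * fps);
}

// In the <Composition> definition this would be used roughly as:
// calculateMetadata={({ props }) => ({
//   durationInFrames: durationFromCaptions(props.captions, FPS),
// })}
// so selectComposition returns correct metadata without mutation.
```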

In `@app/page.tsx`:
- Around line 12-27: Remove the inline developer/debug comments around the JSX
components (ScriptGenerator, GenerateAudio, GenerateCaptions, ChooseBGVideo,
ChooseHighlightColor); delete the `{/* correct */}` and `{/* incorrect, captions
are getting generated manually r9*/}` markers or replace them with concise,
meaningful JSDoc or TODO comments if persistent context is required, and if the
note about GenerateCaptions indicates an unresolved bug, open/fix the underlying
issue in the GenerateCaptions component (or add a TODO with a link/issue id) so
no ambiguous developer notes remain in the merged code.

In `@components/render-button.tsx`:
- Around line 11-28: The handleRender function lacks error handling and may
leave isRendering true on failures; wrap the fetch/response.json flow in a
try/catch/finally around the existing getRenderData()/fetch call, check
response.ok and throw or handle non-2xx responses before parsing JSON, catch
JSON parse or network errors to log them and surface user feedback (e.g., call a
toast/alert or set an error state), and ensure setIsRendering(false) is always
executed in finally so the button is re-enabled; reference handleRender,
getRenderData, setIsRendering and the '/api/render-video' request when making
these changes.
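A sketch of the error-handling shape described above, with the fetch function injected so it is testable (renderWithFeedback and its callback parameters are illustrative names, not the component's real API):

```typescript
type FetchLike = (
  url: string,
  init?: { method?: string; body?: string }
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

export async function renderWithFeedback(
  fetchFn: FetchLike,
  renderData: unknown,
  setIsRendering: (v: boolean) => void,
  onError: (msg: string) => void
): Promise<unknown | null> {
  setIsRendering(true);
  try {
    const response = await fetchFn("/api/render-video", {
      method: "POST",
      body: JSON.stringify(renderData),
    });
    // Reject non-2xx responses before attempting to parse JSON.
    if (!response.ok) {
      throw new Error(`Render failed with status ${response.status}`);
    }
    return await response.json();
  } catch (err) {
    onError(err instanceof Error ? err.message : "Render failed");
    return null;
  } finally {
    // Always re-enable the button, even on failure.
    setIsRendering(false);
  }
}
```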

In `@components/screens/generate-captions-screen.tsx`:
- Around line 19-27: The code unconditionally prefixes `audioUrl` with "public/"
and swallows exceptions; update the call in the `try` block so it first
normalizes/detects the form of `audioUrl` (e.g. if it already starts with
"http", "/", or "public/") and only prepend "public/" when appropriate, then
call `generateCaptions` with that normalized path; in the `catch` block capture
the thrown error (e.g. `err`) and log it (console.error or process logger)
before calling `toast.error` so failures are observable, keeping the existing
state updates via `setLoading` and `setCaptionsPath` intact.

In `@lib/getCaptions.ts`:
- Around line 7-18: The transcribe() call inside getCaptions can throw and is
currently unhandled; wrap the call to transcribe({ ... }) in a try/catch, catch
any error thrown by transcribe (and also guard against a falsy
whisperCppOutput), log the detailed error server-side (e.g., using your logger)
and return or throw a sanitized/generic error message to the caller so internal
details aren’t exposed; update getCaptions to reference the transcribe call and
whisperCppOutput checks and ensure the function returns a safe error/result path
when transcription fails.

In `@lib/video-cache.ts`:
- Around line 41-55: The cleanOldCache function lacks error handling; update it
so you first check that CACHE_DIR exists (e.g., with fs.existsSync) before
calling fs.readdirSync, wrap the overall directory read in a try-catch to handle
missing dirs, and wrap per-file operations (fs.statSync and fs.unlinkSync) in
their own try-catch so a missing/raced file won't abort the whole loop; use
console.error to log the caught errors with context (include filePath and the
error) so failures in cleanOldCache are handled gracefully.
- Around line 19-34: Replace the blocking existsSync + writeFileSync pattern to
avoid the TOCTOU race and event-loop blocking: stop using
fs.existsSync(relativePath/cachedPath) and fs.writeFileSync; instead use async
APIs (fs.promises) and perform an atomic write by streaming the response to a
temp file (e.g., cachedPath + ".tmp" or use a unique tmp name) with
stream.pipeline or response.body.pipe, then fs.promises.rename to move the temp
file to cachedPath; open the final write with an exclusive/atomic approach (or
attempt fs.promises.rename and if it fails because cachedPath already exists,
discard the temp) to handle concurrent fetchers safely; keep references to
cachedPath, relativePath and cloudinaryUrl and replace response.arrayBuffer()
with streaming to avoid buffering the whole file in memory.
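The temp-file-plus-rename pattern the comment recommends can be sketched with fs.promises (simplified to write a buffer rather than streaming a network response; atomicWrite is an illustrative name):

```typescript
import { promises as fsp } from "fs";
import * as path from "path";

// Write atomically: data lands in a temp file first, and rename() makes
// the final path appear all at once, so a concurrent reader never sees
// a half-downloaded video.
export async function atomicWrite(finalPath: string, data: Buffer): Promise<void> {
  const tmpPath = `${finalPath}.${process.pid}.${Date.now()}.tmp`;
  await fsp.mkdir(path.dirname(finalPath), { recursive: true });
  await fsp.writeFile(tmpPath, data);
  try {
    // On POSIX, rename() atomically replaces an existing file; if it
    // fails (e.g., on Windows with the target open), discard our copy.
    await fsp.rename(tmpPath, finalPath);
  } catch (err) {
    await fsp.unlink(tmpPath).catch(() => {});
    throw err;
  }
}
```

In the real module the writeFile step would be replaced by piping the response body through stream.pipeline into the temp file, avoiding buffering the whole video in memory.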

In `@remotion/Root.tsx`:
- Around line 37-39: The current computation of durationInFrames uses
lastCaption = defaultProps.captions[defaultProps.captions.length - 1] and then
reads lastCaption.endMs, which will throw if captions is empty; update the logic
in Root.tsx to guard against an empty defaultProps.captions by checking its
length (or using optional chaining) and providing a sensible fallback endMs
(e.g., 0 or a default duration in seconds) before computing durationInFrames;
modify the lastCaption/endMs usage in the duration calculation so it never
dereferences undefined (references: defaultProps.captions, lastCaption,
durationInFrames, fps).

In `@remotion/scripts/whisper-worker.ts`:
- Around line 6-10: The code uses process.argv[2] directly in path.join to
create audioPath which allows path traversal; fix by resolving and validating
the runtime path: compute const publicDir = path.resolve(process.cwd(),
"public") and const resolved = path.resolve(publicDir, process.argv[2]); verify
resolved.startsWith(publicDir + path.sep) (or === publicDir for root) and
reject/exit with an error if it does not, also sanitize input by
decoding/stripping null bytes and disallowing absolute paths before calling
fs.existsSync; update references to audioPath, process.argv[2], fs.existsSync,
and path.join accordingly.
- Around line 12-19: The async IIFE calling transcribe() uses model: "tiny.en"
which is inconsistent with other callers (e.g., lib/getCaptions.ts and
remotion/scripts/transcribe.mjs) and it has no error handling; update the model
value in this IIFE to match the project's standard (use "medium.en") and wrap
the transcribe invocation (and any subsequent logic that uses whisperCppOutput)
in a try/catch so errors from transcribe() are caught, logged (include the
error), and the process exits with a non-zero code to avoid unhandled rejections
— refer to the transcribe function call, the whisperCppOutput variable, and the
outer async IIFE when making changes.
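The path-containment check recommended above for whisper-worker.ts can be factored into a small helper (resolveInside is an illustrative name):

```typescript
import * as path from "path";

// Resolve a user-supplied relative path against a base directory and
// reject anything that escapes it: null bytes, absolute paths, or
// ".." segments that climb out of the base.
export function resolveInside(baseDir: string, userPath: string): string | null {
  if (userPath.includes("\0") || path.isAbsolute(userPath)) return null;
  const base = path.resolve(baseDir);
  const resolved = path.resolve(base, userPath);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) return null;
  return resolved;
}
```

The worker would then call this with its public directory and process.argv[2], and exit with an error when the result is null, before any fs.existsSync check.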

In `@remotion/uyils/serializeStoreForRender.ts`:
- Line 1: Rename the directory from "uyils" to "utils" and update any imports
referencing it (e.g., the import of useVideoStoryStore in
serializeStoreForRender.ts) so paths point to "../../utils/useVideoStoryStore"
(and update other files that import from remotion/uyils to remotion/utils
accordingly); ensure CI/build references are updated and run a quick
project-wide search to fix all occurrences of "uyils" to "utils".
- Around line 17-31: The createRenderProps function currently reads captions
from captionsPath but returns hardcoded audioUrl, hook, and highlightColor and
lacks file I/O error handling; update createRenderProps to accept additional
parameters (or a config object) for audioUrl, hook, and highlightColor (or read
them from a provided store/config) instead of hardcoding, and wrap the fs/path
file read and JSON.parse in try/catch to surface and log/throw meaningful
errors; reference the createRenderProps function, the captionsPath parameter,
the captions variable, and the returned audioUrl/hook/highlightColor to locate
and change the implementation accordingly.
🧹 Nitpick comments (8)
lib/video-cache.ts (2)

1-1: Minor: File comment doesn't match actual filename.

The comment says lib/videoCache.ts but the file is named lib/video-cache.ts.

🔧 Proposed fix
-// lib/videoCache.ts
+// lib/video-cache.ts

8-11: Module-level side effect may cause issues in edge runtimes.

Creating directories at module load time can fail in serverless/edge environments where the filesystem may be read-only or ephemeral. Consider lazy initialization inside getCachedVideo instead.

♻️ Proposed refactor
-// Ensure cache directory exists
-if (!fs.existsSync(CACHE_DIR)) {
-  fs.mkdirSync(CACHE_DIR, { recursive: true });
-}
+function ensureCacheDir() {
+  if (!fs.existsSync(CACHE_DIR)) {
+    fs.mkdirSync(CACHE_DIR, { recursive: true });
+  }
+}

Then call ensureCacheDir() at the start of getCachedVideo.

README.md (1)

138-142: Minor: Improve labeling for the new command.

The label "new" on line 138 is unclear. Consider adding a brief description of what the new command does differently.

📝 Suggested improvement
-new
+new approach (with props file):
 `
 node -p "JSON.stringify({captions:require('./remotion/captions/captions-1769673372841.json')})" > props.json && \
 npx remotion render remotion/index.ts MyVideo output.mp4 --props=props.json
 `
remotion/scripts/generate-captions.ts (1)

5-7: Parameter naming is misleading.

The parameter is named audioUrl but getCaptions (from lib/getCaptions.ts) expects a file path (audioPath). This naming inconsistency could cause confusion. The caller in generate-captions-screen.tsx passes public/${audioUrl}, which is a path, not a URL.

♻️ Proposed fix for clarity
-export async function generateCaptions(audioUrl: string) {
-  return await getCaptions(audioUrl); // { captionsPath }
+export async function generateCaptions(audioPath: string) {
+  return await getCaptions(audioPath); // { captionsPath }
 }
remotion/captions/captions-1769865347049.json (1)

1-842: Duplicate generated caption file.

This file appears to contain identical content to remotion/captions/captions-1769673372841.json. Having multiple generated caption files committed suggests these are test artifacts that should be excluded from version control rather than committed.

props.json (1)

1-1: Consider whether this test data file should be committed.

This props.json file appears to be sample/test data for the Remotion rendering workflow. It contains:

  • Hardcoded Cloudinary video URL (may expire)
  • Local audio file path reference
  • Sample caption data

If this is intended as documentation (referenced in README), consider:

  1. Renaming to props.example.json to clarify it's sample data
  2. Adding props.json to .gitignore so user-generated props don't get committed
lib/getCaptions.ts (1)

22-36: Consider async file operations and cleanup strategy.

  1. Using synchronous writeFileSync in an async function blocks the event loop. Consider using fs.promises.writeFile.

  2. Timestamp-based filenames (captions-${Date.now()}.json) will accumulate over time. Consider implementing a cleanup strategy or reusing filenames based on audio file hash.

♻️ Async file write example
+import { promises as fsPromises } from "fs";
+
 // ...
 
-  fs.writeFileSync(captionsPath, JSON.stringify(captions, null, 2));
+  await fsPromises.writeFile(captionsPath, JSON.stringify(captions, null, 2));
remotion/Root.tsx (1)

8-15: Export RenderInputProps for reuse.

This interface is also needed in app/api/render-video/route.ts and potentially other files. Exporting it would ensure type consistency across the render pipeline.

♻️ Export the interface
-interface RenderInputProps {
+export interface RenderInputProps {
   hook: string;
   story: string;
   audioUrl: string;
   videoUrl: string;
   highlightColor: string;
   captions: Caption[];
 }

Comment on lines +6 to +16
const execFileAsync = promisify(execFile);

export async function mp3ToWav(mp3Path: string, wavPath: string) {
  await execFileAsync("ffmpeg", [
    "-y",
    "-i", mp3Path,
    "-ac", "1",
    "-ar", "16000",
    "-c:a", "pcm_s16le",
    wavPath,
  ]);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain



🌐 Web query:

Node.js execFile default maxBuffer size child_process

💡 Result:

In modern Node.js, child_process.execFile()’s options.maxBuffer default is 1024 * 1024 bytes (1 MiB), and it applies to each of stdout and stderr (if either exceeds the limit, the child is terminated and output is truncated). (nodejs.org)

Historically (e.g., Node 0.11 / 8.x), the default was 200 * 1024 bytes (200 KiB). (r2.nodejs.org)



🌐 Web query:

ffmpeg stderr output size progress logs buffer overflow

💡 Result:

When you run ffmpeg with stderr captured (e.g., via subprocess.PIPE) and you don’t continuously read it, the OS pipe buffer can fill up and ffmpeg will block/hang while trying to write more log/progress lines—this is often mistaken for a “buffer overflow.”[2]

Ways to fix/avoid it:

  • Don’t capture logs if you don’t need them: let stderr inherit the parent terminal (no pipe), or redirect it to a file.[2]
  • Continuously drain the pipe(s):
    • If you capture stderr, read it in a dedicated thread/async loop, or merge it into stdout and read one stream.
    • For CLI shells, redirect stderr to stdout (e.g., 2>&1) if you want to pipe/parse it.[3]
  • Reduce/disable ffmpeg’s progress spam on stderr:
    • Disable the default stats/progress line: -nostats (since -stats is on by default).[1]
    • Reduce logging: -loglevel warning (or error). (This also reduces what gets written to stderr.)
    • Slow the update rate: -stats_period 1 (or higher) to emit less frequently.[1]
  • Use “program-friendly” progress instead of parsing stderr:
    • -progress <url> outputs periodic key=value lines designed for parsing, and the update rate is controlled by -stats_period.[1]
    • Common pattern: write progress to a pipe you read (often pipe:1 / stdout) while keeping logs minimal.[1]

If you share your exact command and how you’re capturing stderr (language + snippet), I can suggest a robust read/drain pattern for that runtime.




Increase maxBuffer to prevent ffmpeg stderr from overflowing execFile's 1MB default.
ffmpeg writes progress logs to stderr by default; for longer audio files, this can exceed the buffer limit and silently abort conversions. Increase maxBuffer to 10MB (or higher depending on typical audio length), or alternatively add the -nostats flag to suppress progress output.

💡 Suggested fix (increase maxBuffer)
 export async function mp3ToWav(mp3Path: string, wavPath: string) {
-  await execFileAsync("ffmpeg", [
+  await execFileAsync("ffmpeg", [
     "-y",
     "-i", mp3Path,
     "-ac", "1",
     "-ar", "16000",
     "-c:a", "pcm_s16le",
     wavPath,
-  ]);
+  ], { maxBuffer: 10 * 1024 * 1024 });
 }

Comment on lines +8 to +25
export async function POST(request: Request) {
  const renderData = await request.json();

  // load captions
  let captions: Caption[] = [];
  if (renderData.captionsPath) {
    try {
      const captionsFullPath = path.join(process.cwd(), renderData.captionsPath);
      const captionsContent = fs.readFileSync(captionsFullPath, "utf-8");
      captions = JSON.parse(captionsContent);
    } catch (error) {
      console.error("Error loading captions:", error);
      return Response.json(
        { success: false, error: "Failed to load captions" },
        { status: 500 }
      );
    }
  }

⚠️ Potential issue | 🔴 Critical

Block path traversal when loading captions.
renderData.captionsPath is client-controlled and is joined directly to process.cwd() (Line 15), so absolute paths or .. can escape the intended directory and read arbitrary files. Constrain to an allow‑listed base dir and reject paths outside it.

🔒 Proposed fix
-      const captionsFullPath = path.join(process.cwd(), renderData.captionsPath);
+      const captionsBaseDir = path.join(process.cwd(), "public", "captions");
+      const captionsFullPath = path.resolve(
+        captionsBaseDir,
+        renderData.captionsPath
+      );
+      if (!captionsFullPath.startsWith(captionsBaseDir + path.sep)) {
+        return Response.json(
+          { success: false, error: "Invalid captions path" },
+          { status: 400 }
+        );
+      }

Comment on lines +27 to +40
  // Download and cache background video if it's from Cloudinary
  let videoUrl = renderData.videoUrl;
  if (videoUrl && videoUrl.includes("cloudinary.com")) {
    try {
      videoUrl = await getCachedVideo(videoUrl);
      console.log("Using cached video:", videoUrl);
    } catch (error) {
      console.error("Failed to cache video:", error);
      return Response.json(
        { success: false, error: "Failed to download background video" },
        { status: 500 }
      );
    }
  }

⚠️ Potential issue | 🟠 Major

Validate Cloudinary URLs with URL parsing to avoid SSRF bypasses.
videoUrl.includes("cloudinary.com") (Line 29) is a substring check that can be bypassed via crafted URLs (userinfo/host tricks), enabling arbitrary fetches. Parse with new URL() and enforce protocol/hostname; reject other remote URLs.

🛡️ Proposed fix
-  if (videoUrl && videoUrl.includes("cloudinary.com")) {
-    try {
-      videoUrl = await getCachedVideo(videoUrl);
-      console.log("Using cached video:", videoUrl);
-    } catch (error) {
-      console.error("Failed to cache video:", error);
-      return Response.json(
-        { success: false, error: "Failed to download background video" },
-        { status: 500 }
-      );
-    }
-  }
+  if (videoUrl) {
+    let isCloudinary = false;
+    try {
+      const parsed = new URL(videoUrl);
+      isCloudinary =
+        parsed.protocol === "https:" &&
+        (parsed.hostname === "res.cloudinary.com" ||
+          parsed.hostname.endsWith(".cloudinary.com"));
+    } catch {
+      // Not an absolute URL → treat as local path
+    }
+
+    if (isCloudinary) {
+      try {
+        videoUrl = await getCachedVideo(videoUrl);
+        console.log("Using cached video:", videoUrl);
+      } catch (error) {
+        console.error("Failed to cache video:", error);
+        return Response.json(
+          { success: false, error: "Failed to download background video" },
+          { status: 500 }
+        );
+      }
+    } else if (/^https?:\/\//i.test(videoUrl)) {
+      return Response.json(
+        { success: false, error: "Unsupported remote video URL" },
+        { status: 400 }
+      );
+    }
+  }
🤖 Prompt for AI Agents
In `@app/api/render-video/route.ts` around lines 27 - 40, The substring check
using renderData.videoUrl is unsafe; replace the naive
includes("cloudinary.com") logic by parsing the URL with new
URL(renderData.videoUrl) and validate protocol and hostname before calling
getCachedVideo: ensure url.protocol === "https:" (or allow "http:" only if
justified) and that url.hostname === "cloudinary.com" or
url.hostname.endsWith(".cloudinary.com") (reject crafted userinfo/host
variants), and if validation fails return an error response instead of
attempting the fetch; keep the existing try/catch around getCachedVideo and only
call getCachedVideo when the parsed URL passes these checks.
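The hostname-based validation suggested above can be sketched like this. It assumes only Cloudinary delivery hosts should be fetched; `isCloudinaryUrl` is an illustrative name, not code from this PR:

```typescript
// Return true only for absolute https URLs whose hostname is a Cloudinary
// delivery host. Relative/local paths and other remote URLs return false.
export function isCloudinaryUrl(raw: string): boolean {
  let parsed: URL;
  try {
    parsed = new URL(raw);
  } catch {
    return false; // not an absolute URL — treat as a local path elsewhere
  }
  return (
    parsed.protocol === "https:" &&
    (parsed.hostname === "res.cloudinary.com" ||
      parsed.hostname.endsWith(".cloudinary.com"))
  );
}
```

This rejects the bypasses the substring check allows: `https://cloudinary.com@evil.example/v.mp4` (userinfo trick) and `https://evil.example/cloudinary.com/v.mp4` (path trick) both contain the substring `"cloudinary.com"` but parse to a non-Cloudinary hostname.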

Comment on lines +59 to +75
try {
const composition = await selectComposition({
serveUrl: bundleLocation,
id: "MyVideo",
inputProps: inputProps,
timeoutInMilliseconds: 60000,
});

const lastCaptionEndMs = captions.length
? Math.max(...captions.map(c => c.endMs))
: 2000;

const durationInFrames = Math.ceil((lastCaptionEndMs / 1000) * 30);

// override duration
composition.durationInFrames = durationInFrames;


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n app/api/render-video/route.ts | sed -n '59,75p'

Repository: subhraneel2005/vidgen

Length of output: 609


🌐 Web query:

Remotion composition fps durationInFrames override API documentation

💡 Result:

To “override” a composition’s fps / durationInFrames in Remotion, you generally don’t override them via inputProps directly; instead you either (A) compute dynamic metadata using calculateMetadata, or (B) pass different metadata when using APIs that accept it (like <Player>).

  • Static composition metadata (authoring-time): Set fps and durationInFrames on <Composition /> in Root.tsx. (remotion.dev)
  • Dynamic override (render-time / based on props or fetched data): Use the <Composition calculateMetadata={...} /> callback to return durationInFrames and/or fps (and/or width/height). Returned fields take precedence over the props you passed to <Composition>. (remotion.dev)
  • How the renderer picks up the overridden values: selectComposition() evaluates calculateMetadata() (with your inputProps) and returns the resolved fps / durationInFrames, which you then pass to renderMedia() / renderStill() / renderFrames() as the composition object. (remotion.dev)
  • Player-specific override: The <Player> does not use <Composition>; you pass fps and durationInFrames directly as props to <Player>. (remotion.dev)
  • Notes about “override parameters”: The dynamic-metadata guide documents that certain CLI override parameters (e.g. for dimensions) can take priority over values returned by calculateMetadata(). (remotion.dev)


🏁 Script executed:

# Check if there's a calculateMetadata callback defined in the composition setup
grep -r "calculateMetadata" app/ --include="*.ts" --include="*.tsx"

Repository: subhraneel2005/vidgen

Length of output: 47


🌐 Web query:

Remotion renderMedia composition object durationInFrames how to pass render settings

💡 Result:

In renderMedia(), the composition argument is only a VideoConfig (metadata): {id, width, height, fps, durationInFrames, …}—so durationInFrames comes from that object (usually returned by selectComposition() / getCompositions()), not from “render settings”. Render settings are separate top-level options you pass to renderMedia() (e.g. codec, crf, imageFormat, concurrency, frameRange, etc.).[1]

Also note: Remotion config (remotion.config.ts) and CLI flags do not apply to the Node render APIs—you must pass options explicitly in your renderMedia() call.[2]

Example (get durationInFrames from the composition, pass render settings separately)

import {bundle} from '@remotion/bundler';
import {renderMedia, selectComposition} from '@remotion/renderer';

const serveUrl = await bundle({entryPoint: './src/index.ts'});

const inputProps = { /* your props */ };

// composition.durationInFrames is resolved here
const composition = await selectComposition({
  serveUrl,
  id: 'MyComposition',
  inputProps, // pass same inputProps here and to renderMedia
});

await renderMedia({
  serveUrl,
  composition,                 // includes durationInFrames/fps/width/height/id
  codec: 'h264',               // render setting
  crf: 18,                     // render setting
  concurrency: 4,              // render setting
  frameRange: null,            // or [0, composition.durationInFrames - 1]
  outputLocation: 'out.mp4',   // render setting
  inputProps,
});

If you need durationInFrames to be dynamic

Make it dynamic via calculateMetadata() on the <Composition>—that’s the supported way to compute durationInFrames (and also width/height/fps) from data/props before render.[3]

Sources: Remotion renderMedia() API[1], Remotion renderer API note about config/CLI flags[2], Remotion calculateMetadata() docs[3].


Use composition.fps instead of hardcoded 30 and handle missing captions properly.

The fps is hardcoded to 30 (line 71) when it should use composition.fps from the resolved composition metadata. The arbitrary 2000ms fallback when captions are missing (line 69) doesn't align with the actual composition duration.

However, note that directly mutating composition.durationInFrames after selectComposition() is not the intended Remotion API pattern. According to Remotion's documentation, dynamic composition metadata (like duration based on caption data) should be computed via the calculateMetadata() callback on the <Composition> component definition, not by mutating the composition object post-selection.

If the composition definition supports calculateMetadata(), move the caption-duration logic there so that selectComposition() returns the correct resolved duration. If that's not feasible for your architecture, at minimum fix the hardcoded fps:

- const durationInFrames = Math.ceil((lastCaptionEndMs / 1000) * 30);
- 
- // override duration
- composition.durationInFrames = durationInFrames;
+ const durationInFrames = lastCaptionEndMs
+   ? Math.ceil((lastCaptionEndMs / 1000) * composition.fps)
+   : composition.durationInFrames;
🤖 Prompt for AI Agents
In `@app/api/render-video/route.ts` around lines 59 - 75, The code computes
durationInFrames using a hardcoded 30fps and mutates
composition.durationInFrames after selectComposition; instead, move the
caption-based duration logic into the composition definition via
calculateMetadata so selectComposition returns correct metadata, or if you
cannot change the composition, compute frames using composition.fps (from the
returned composition) and avoid directly mutating
composition.durationInFrames—use the calculateMetadata hook on the <Composition>
(or a separate metadata calculation before rendering) to derive duration from
captions (use captions.length to handle missing captions and compute
lastCaptionEndMs safely).
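The corrected duration math can be sketched as a pure function. Whether it lives in a `calculateMetadata` callback or in the render route, the computation is the same; the names below are illustrative, not from this PR:

```typescript
type Caption = { startMs: number; endMs: number; text: string };

// Derive durationInFrames from the last caption end time using the
// composition's resolved fps (not a hardcoded 30), with an explicit
// fallback when the captions array is empty.
export function durationFromCaptions(
  captions: Caption[],
  fps: number,
  fallbackFrames: number
): number {
  if (captions.length === 0) return fallbackFrames;
  const lastEndMs = Math.max(...captions.map((c) => c.endMs));
  return Math.ceil((lastEndMs / 1000) * fps);
}
```

For example, captions ending at 2500ms yield 75 frames at 30fps but 150 frames at 60fps, which is exactly the discrepancy the hardcoded multiplier would hide.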

Comment on lines +12 to +27

{/* correct */}
<ScriptGenerator />

{/* correct */}
<GenerateAudio />

{/* incorrect, captions are getting generated manually r9*/}
<GenerateCaptions />

{/* correct */}
<ChooseBGVideo />

{/* correct */}
<ChooseHighlightColor />


🛠️ Refactor suggestion | 🟠 Major

Remove debug/internal comments before merging.

The inline comments ({/* correct */}, {/* incorrect, captions are getting generated manually r9*/}) appear to be developer notes rather than documentation. These should be removed or converted to meaningful documentation before merging.

The comment on line 19 (incorrect, captions are getting generated manually r9) suggests there may be an unresolved issue with the captions implementation.

🧹 Proposed cleanup
-
-      {/* correct */}
       <ScriptGenerator />
-
-      {/* correct */}
       <GenerateAudio />
-
-      {/* incorrect, captions are getting generated manually r9*/}
       <GenerateCaptions />
-
-      {/* correct */}
       <ChooseBGVideo />
-
-      {/* correct */}
       <ChooseHighlightColor />
🤖 Prompt for AI Agents
In `@app/page.tsx` around lines 12 - 27, Remove the inline developer/debug
comments around the JSX components (ScriptGenerator, GenerateAudio,
GenerateCaptions, ChooseBGVideo, ChooseHighlightColor); delete the `{/* correct
*/}` and `{/* incorrect, captions are getting generated manually r9*/}` markers
or replace them with concise, meaningful JSDoc or TODO comments if persistent
context is required, and if the note about GenerateCaptions indicates an
unresolved bug, open/fix the underlying issue in the GenerateCaptions component
(or add a TODO with a link/issue id) so no ambiguous developer notes remain in
the merged code.

Comment on lines +37 to +39
// Calculate duration based on last caption
const lastCaption = defaultProps.captions[defaultProps.captions.length - 1];
const durationInFrames = Math.ceil((lastCaption.endMs / 1000) * fps);

⚠️ Potential issue | 🟡 Minor

Guard against empty captions array.

If captions is empty, accessing captions[captions.length - 1] returns undefined, and lastCaption.endMs will throw a runtime error.

🛡️ Proposed fix
   // Calculate duration based on last caption
   const lastCaption = defaultProps.captions[defaultProps.captions.length - 1];
-  const durationInFrames = Math.ceil((lastCaption.endMs / 1000) * fps);
+  const durationInFrames = lastCaption 
+    ? Math.ceil((lastCaption.endMs / 1000) * fps) 
+    : fps * 5; // Default 5 seconds if no captions
🤖 Prompt for AI Agents
In `@remotion/Root.tsx` around lines 37 - 39, The current computation of
durationInFrames uses lastCaption =
defaultProps.captions[defaultProps.captions.length - 1] and then reads
lastCaption.endMs, which will throw if captions is empty; update the logic in
Root.tsx to guard against an empty defaultProps.captions by checking its length
(or using optional chaining) and providing a sensible fallback endMs (e.g., 0 or
a default duration in seconds) before computing durationInFrames; modify the
lastCaption/endMs usage in the duration calculation so it never dereferences
undefined (references: defaultProps.captions, lastCaption, durationInFrames,
fps).

Comment on lines +6 to +10
const audioPath = path.join(process.cwd(), "public", process.argv[2]);
if (!fs.existsSync(audioPath)) {
console.error("File not found:", audioPath);
process.exit(1);
}

⚠️ Potential issue | 🟠 Major

Potential path traversal vulnerability.

process.argv[2] is directly concatenated into the path without sanitization. A malicious input like ../../../etc/passwd could access files outside the public/ directory.

🛡️ Proposed fix to validate the path
-const audioPath = path.join(process.cwd(), "public", process.argv[2]);
-if (!fs.existsSync(audioPath)) {
+const userInput = process.argv[2];
+if (!userInput || userInput.includes('..') || path.isAbsolute(userInput)) {
+  console.error("Invalid audio path provided");
+  process.exit(1);
+}
+
+const audioPath = path.join(process.cwd(), "public", userInput);
+const resolvedPath = path.resolve(audioPath);
+const publicDir = path.resolve(process.cwd(), "public");
+
+if (!resolvedPath.startsWith(publicDir)) {
+  console.error("Audio path must be within public directory");
+  process.exit(1);
+}
+
+if (!fs.existsSync(audioPath)) {
   console.error("File not found:", audioPath);
   process.exit(1);
 }
🤖 Prompt for AI Agents
In `@remotion/scripts/whisper-worker.ts` around lines 6 - 10, The code uses
process.argv[2] directly in path.join to create audioPath which allows path
traversal; fix by resolving and validating the runtime path: compute const
publicDir = path.resolve(process.cwd(), "public") and const resolved =
path.resolve(publicDir, process.argv[2]); verify resolved.startsWith(publicDir +
path.sep) (or === publicDir for root) and reject/exit with an error if it does
not, also sanitize input by decoding/stripping null bytes and disallowing
absolute paths before calling fs.existsSync; update references to audioPath,
process.argv[2], fs.existsSync, and path.join accordingly.

Comment on lines +12 to +19
(async () => {
const whisperCppOutput = await transcribe({
inputPath: path.resolve(audioPath),
whisperPath: path.resolve("whisper.cpp"),
whisperCppVersion: "1.5.5",
model: "tiny.en",
tokenLevelTimestamps: true,
});

⚠️ Potential issue | 🟡 Minor

Model inconsistency and missing error handling.

  1. This script uses model: "tiny.en" but lib/getCaptions.ts and remotion/scripts/transcribe.mjs use model: "medium.en". This could cause inconsistent transcription quality across different code paths.

  2. The async IIFE lacks error handling. If transcribe() fails, the process will crash with an unhandled rejection.

🔧 Proposed fix
 (async () => {
+  try {
     const whisperCppOutput = await transcribe({
       inputPath: path.resolve(audioPath),
       whisperPath: path.resolve("whisper.cpp"),
       whisperCppVersion: "1.5.5",
-      model: "tiny.en",
+      model: "medium.en",
       tokenLevelTimestamps: true,
     });
 
     const { captions } = toCaptions({ whisperCppOutput });
 
     const captionsDir = path.resolve("remotion/captions");
     if (!fs.existsSync(captionsDir)) fs.mkdirSync(captionsDir, { recursive: true });
 
     const captionsFileName = path.basename(audioPath, path.extname(audioPath)) + "-captions.json";
     const captionsPath = path.join(captionsDir, captionsFileName);
 
     fs.writeFileSync(captionsPath, JSON.stringify(captions, null, 2));
     console.log(captionsPath);
+  } catch (error) {
+    console.error("Transcription failed:", error);
+    process.exit(1);
+  }
 })();
🤖 Prompt for AI Agents
In `@remotion/scripts/whisper-worker.ts` around lines 12 - 19, The async IIFE
calling transcribe() uses model: "tiny.en" which is inconsistent with other
callers (e.g., lib/getCaptions.ts and remotion/scripts/transcribe.mjs) and it
has no error handling; update the model value in this IIFE to match the
project's standard (use "medium.en") and wrap the transcribe invocation (and any
subsequent logic that uses whisperCppOutput) in a try/catch so errors from
transcribe() are caught, logged (include the error), and the process exits with
a non-zero code to avoid unhandled rejections — refer to the transcribe function
call, the whisperCppOutput variable, and the outer async IIFE when making
changes.

@@ -0,0 +1,31 @@
import { useVideoStoryStore } from "../../store/useVideoStoryStore";

⚠️ Potential issue | 🟡 Minor

Directory name typo: "uyils" should be "utils".

The directory is named remotion/uyils/ but should likely be remotion/utils/.

🤖 Prompt for AI Agents
In `@remotion/uyils/serializeStoreForRender.ts` at line 1, Rename the directory
from "uyils" to "utils" and update any imports referencing it (e.g., the import
of useVideoStoryStore in serializeStoreForRender.ts) so paths point to
"../../utils/useVideoStoryStore" (and update other files that import from
remotion/uyils to remotion/utils accordingly); ensure CI/build references are
updated and run a quick project-wide search to fix all occurrences of "uyils" to
"utils".

Comment on lines +17 to +31
export const createRenderProps = async (captionsPath: string) => {
const fs = await import("fs");
const path = await import("path");

const absolutePath = path.resolve(process.cwd(), captionsPath);
const captions = JSON.parse(fs.readFileSync(absolutePath, "utf-8"));

// You can either read from a JSON file or pass this data directly
return {
captions,
audioUrl: "/audios/story-1769672701663_16k.wav",
hook: "My sister is getting married in a dress I designed for myself and I don't know what to do",
highlightColor: "#FF4500",
};
};

⚠️ Potential issue | 🟠 Major

Hardcoded values defeat the purpose of this utility.

createRenderProps reads captions from a dynamic path but returns hardcoded values for audioUrl, hook, and highlightColor. This appears incomplete—these should be parameters or read from the store/config.

Also missing error handling for file operations.

🔧 Suggested refactor
-export const createRenderProps = async (captionsPath: string) => {
+export const createRenderProps = async (options: {
+  captionsPath: string;
+  audioUrl: string;
+  hook: string;
+  highlightColor?: string;
+}) => {
   const fs = await import("fs");
   const path = await import("path");
   
-  const absolutePath = path.resolve(process.cwd(), captionsPath);
-  const captions = JSON.parse(fs.readFileSync(absolutePath, "utf-8"));
+  const absolutePath = path.resolve(process.cwd(), options.captionsPath);
   
-  // You can either read from a JSON file or pass this data directly
+  if (!fs.existsSync(absolutePath)) {
+    throw new Error(`Captions file not found: ${absolutePath}`);
+  }
+  
+  const captions = JSON.parse(fs.readFileSync(absolutePath, "utf-8"));
+
   return {
     captions,
-    audioUrl: "/audios/story-1769672701663_16k.wav",
-    hook: "My sister is getting married in a dress I designed for myself and I don't know what to do",
-    highlightColor: "#FF4500",
+    audioUrl: options.audioUrl,
+    hook: options.hook,
+    highlightColor: options.highlightColor || "#FF4500",
   };
 };
🤖 Prompt for AI Agents
In `@remotion/uyils/serializeStoreForRender.ts` around lines 17 - 31, The
createRenderProps function currently reads captions from captionsPath but
returns hardcoded audioUrl, hook, and highlightColor and lacks file I/O error
handling; update createRenderProps to accept additional parameters (or a config
object) for audioUrl, hook, and highlightColor (or read them from a provided
store/config) instead of hardcoding, and wrap the fs/path file read and
JSON.parse in try/catch to surface and log/throw meaningful errors; reference
the createRenderProps function, the captionsPath parameter, the captions
variable, and the returned audioUrl/hook/highlightColor to locate and change the
implementation accordingly.

@subhraneel2005 subhraneel2005 self-assigned this Jan 31, 2026