
Conversation

@LetItRock
Contributor

What changed? Why was the change needed?

AI workflow generation from the user prompt.
For details, check the inline comments on the PR code.

Screenshots

Screenshot 2026-01-20 at 13 57 42
Screen.Recording.2026-01-20.at.13.57.51.mov
Screen.Recording.2026-01-20.at.14.10.29.mov

@LetItRock LetItRock requested a review from scopsy January 20, 2026 13:57
@LetItRock LetItRock self-assigned this Jan 20, 2026
@linear

linear bot commented Jan 20, 2026

@github-actions github-actions bot changed the title from "feat(api-service): ai workflow generation from the user prompt" to "feat(api-service): ai workflow generation from the user prompt fixes NV-7047" Jan 20, 2026
@netlify

netlify bot commented Jan 20, 2026

Deploy preview added

- 🔨 Latest commit: f62d51e
- 🔍 Latest deploy log: https://app.netlify.com/projects/dashboard-v2-novu-staging/deploys/696faa02b2bc0e00086fb53a
- 😎 Deploy Preview: https://deploy-preview-9867.dashboard-v2.novu-staging.co

@coderabbitai
Contributor

coderabbitai bot commented Jan 20, 2026

Warning

Rate limit exceeded

@LetItRock has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 5 minutes and 53 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 2f4a479 and f62d51e.

📒 Files selected for processing (1)
  • .cspell.json

Walkthrough

Adds AI workflow generation capability:
- three new runtime dependencies
- registers AiModule in the application module
- new AI controller exposing POST /ai/generate-workflow and GET /ai/suggestions
- DTOs, prompts, and extensive zod schemas for workflow and step outputs
- an LlmService with OpenAI/Anthropic support, streaming, retries, and timeouts
- use cases to generate workflows and return suggestions (including upsert of generated workflows)
- environment validators for LLM configuration
- new shared enums and a feature-flag key to gate the feature
- minor workflow command nullability and dashboard input defaulting changes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
- Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
- Title check ✅ Passed: The title clearly describes the main feature being added: AI workflow generation from user prompts, with a reference to the associated issue.
- Description check ✅ Passed: The description is related to the changeset, explaining that AI workflow generation from user prompts was implemented, with references to inline code comments.




Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@apps/api/src/app/ai/prompts/step.prompt.ts`:
- Around line 25-257: In STEP_CONTENT_PROMPTS update the HTML Format Guidelines
to remove the contradiction about table cell inline styles: state a single
consistent rule (either allow inline styles on table cells for
spacing/compatibility or prohibit them) and adjust the surrounding guidance
lines (e.g., "Body must be valid HTML with inline styles..." and "Use tables for
layout... Never use inline styles for the table cells.") so they agree;
reference STEP_CONTENT_PROMPTS and the "HTML Format Guidelines" section and make
the rule explicit (e.g., "You may use inline styles on table cells for
padding/colors to ensure email client compatibility" OR "Avoid inline styles on
table cells; apply styles on table elements instead for compatibility") and
update any example notes accordingly.

In `@apps/api/src/app/ai/schemas/steps-control.schema.ts`:
- Around line 105-121: The throttle schemas use nullable discriminators which
makes the union ambiguous; update aiThrottleFixedControlSchema and
aiThrottleDynamicControlSchema to make the type field non-nullable (use
z.literal(ThrottleTypeEnum.FIXED) and z.literal(ThrottleTypeEnum.DYNAMIC)
without .nullable()), and then convert aiThrottleControlSchema to a
discriminated union (z.discriminatedUnion('type', [aiThrottleFixedControlSchema,
aiThrottleDynamicControlSchema])) so the runtime can unambiguously pick the
correct branch by the type field.

In `@apps/api/src/app/ai/schemas/workflow-generation.schema.ts`:
- Around line 50-57: The tags constraint in workflowMetadataSchema is
inconsistent: the z.array(z.string()).max(16).nullable() allows 16 tags but the
description says "max 5"; update the schema so the constraint and description
match—either change the .max(16) call on the tags field to .max(5) if the
intended limit is 5, or update the descriptive string to "max 16" if 16 is
intended; locate this in workflowMetadataSchema and adjust only the max value or
the description text accordingly to keep them consistent.

In `@apps/api/src/app/ai/services/llm.service.ts`:
- Around line 244-270: The streamChat generator currently calls streamText
inside callWithRetries but then iterates result.textStream without retries or
timeout; modify streamChat so the async iteration over result.textStream is also
protected: move the consumer logic into a retryable wrapper (use the same
callWithRetries or a new helper) so any errors during for await (const chunk of
result.textStream) are caught and retried, and enforce a per-chunk or overall
timeout (use AbortController or this.config.timeout) to abort stalled streams;
reference streamChat, callWithRetries, streamText and result.textStream when
making the change.
- Around line 140-146: The log message hardcodes "OpenAI API call failed" even
when Anthropic (or another provider) is used; update the two logger.error calls
(the one using errorObj?.statusCode and the later one around line 153) to
include the actual provider name instead of "OpenAI" by using the service's
provider identifier (e.g., this.provider, this.client?.name, or whatever
providerName field exists) so the log becomes "<provider> API call failed" and
keep the rest of the errorContext/statusCode/url/responseBody payload unchanged.
- Around line 176-204: callWithRetries creates an AbortController/timeout but
never passes the signal into the API call and only clears the timeout on error;
update callWithRetries to accept and forward an AbortSignal to the underlying
request and always clear the timeout (on success, retry paths, and error).
Specifically, change callWithRetries(fn: (opts?: { abortSignal?: AbortSignal })
=> Promise<T>, ...) so you call fn({ abortSignal: abortController.signal }) (or
otherwise pass the signal through), move clearTimeout(timeoutId) into a finally
block so it always runs, and ensure retries (when checking
maxSchemaValidationRetries and calling this.callWithRetries(...)) reuse a fresh
AbortController/timeout or propagate/renew the signal appropriately; keep
existing checks for isConfigured/model/config and existing error handling via
handleAIError.
- Around line 206-223: In generateObject, update the args to use maxOutputTokens
(same as in generateText) instead of maxTokens so the Vercel AI SDK receives the
correct parameter; locate the generateObject function in llm.service.ts and
replace any occurrence of maxTokens in its args object with maxOutputTokens,
ensuring it uses input.maxTokens ?? this.config.maxTokens (or input.maxTokens)
mapped to maxOutputTokens and matches the pattern used by generateText.
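A minimal sketch of the retry shape the `callWithRetries` comments above ask for, assuming hypothetical `requestTimeoutMs`/`maxRetries` parameters rather than the service's real config fields: the abort signal is forwarded into the call, each retry gets a fresh controller via the recursive call, and the timeout is always cleared in `finally`.

```typescript
// Sketch, not the actual service code: forward the AbortSignal into the call
// and clear the timeout on success, retry, and error paths alike.
async function callWithRetries<T>(
  fn: (opts: { abortSignal: AbortSignal }) => Promise<T>,
  requestTimeoutMs: number,
  maxRetries: number,
  retryCount = 0
): Promise<T> {
  const abortController = new AbortController();
  const timeoutId = setTimeout(() => abortController.abort(), requestTimeoutMs);
  try {
    return await fn({ abortSignal: abortController.signal });
  } catch (error) {
    if (retryCount < maxRetries) {
      // The recursive call creates a fresh AbortController/timeout per attempt.
      return callWithRetries(fn, requestTimeoutMs, maxRetries, retryCount + 1);
    }
    throw error;
  } finally {
    clearTimeout(timeoutId); // always runs, fixing the leak the review notes
  }
}
```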

In
`@apps/api/src/app/workflows-v1/usecases/update-workflow/update-workflow.command.ts`:
- Around line 45-54: The description and tags fields currently use `@IsOptional()`
but accept null and therefore still run `@Length` validation; update the DTO by
adding `@ValidateIf((o) => o.description !== null)` above the description property
and `@ValidateIf((o) => o.tags !== null)` above the tags property (matching the
existing pattern used for userPreferences) so validators (`@Length`, `@ArrayUnique`,
`@IsArray`) are skipped when null is explicitly passed.
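The semantics the `@ValidateIf` fix aims for can be illustrated without class-validator (a plain-TypeScript model of the rule, not the actual DTO code; the field name and length limit are only examples): absent values pass because of `@IsOptional`, explicit nulls skip the remaining validators, and only real strings are length-checked.

```typescript
// Models the decorator stack @ValidateIf((o) => o.description !== null)
// + @IsOptional() + @Length(0, maxLength) as a plain function.
function validateDescription(
  description: string | null | undefined,
  maxLength = 256
): string[] {
  if (description === undefined) return []; // @IsOptional: absent is fine
  if (description === null) return []; // @ValidateIf: explicit null skips @Length
  const errors: string[] = [];
  if (description.length > maxLength) {
    errors.push(`description must be at most ${maxLength} characters`);
  }
  return errors;
}
```

Without the `@ValidateIf` guard, the null case would fall through to the length check and reject a legal "clear this field" payload.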

In
`@apps/api/src/app/workflows-v2/usecases/upsert-workflow/upsert-workflow.command.ts`:
- Around line 109-117: Add `@ValidateIf` checks to skip validation when null is
explicitly passed: for the description property add `@ValidateIf((o) =>
o.description !== null)` above the `@IsOptional`/`@IsString` decorators, and for the
tags property add `@ValidateIf((o) => o.tags !== null)` above the
`@IsOptional`/`@IsArray`/`@IsString({ each: true })`/`@ArrayMaxSize` decorators in
upsert-workflow.command.ts so validators only run when the value is not null.
🧹 Nitpick comments (8)
apps/dashboard/src/components/workflow-editor/url-input.tsx (1)

61-66: UI displays SELF when field value is undefined, but form state may remain undefined on submission.

The Select component defensively shows SELF when field.value is undefined, but the actual form state depends on whether redirect.target is included in the step's controls.values. If the backend doesn't set this field, submitting without user interaction would send undefined rather than SELF.

Ensure redirect.target is initialized in the form's defaultValues, either by confirming the backend always provides it in step.controls.values, or by explicitly setting it when the component mounts if undefined:

<Select
  value={field.value ?? RedirectTargetEnum.SELF}
  onValueChange={(value) => {
    field.onChange(value);
    saveForm();
  }}
+  onOpenChange={() => {
+    if (field.value === undefined) {
+      field.onChange(RedirectTargetEnum.SELF);
+    }
+  }}
>

Alternatively, ensure the form's schema or step initialization sets redirect.target to SELF by default.

apps/api/src/app/ai/usecases/get-suggestions/get-suggestions.usecase.ts (1)

6-62: Consider hoisting static suggestions to a module-level constant.
This avoids re-allocating the array on every call and makes reuse/testing easier.

♻️ Suggested refactor
+const WORKFLOW_SUGGESTIONS: WorkflowSuggestionDto[] = [
+  {
+    id: 'welcome-email',
+    type: WorkflowSuggestionType.WELCOME,
+    title: 'Welcome Email',
+    description: 'Send a personalized welcome email to new users when they sign up',
+    icon: 'mail',
+    examplePrompt:
+      'Create a welcome workflow that sends a personalized email to new users with their name and a getting started guide',
+  },
+  {
+    id: 'password-reset',
+    type: WorkflowSuggestionType.PASSWORD_RESET,
+    title: 'Password Reset',
+    description: 'Secure password reset flow with email verification',
+    icon: 'lock',
+    examplePrompt:
+      'Create a password reset workflow that sends a secure reset link via email and confirms when the password is changed',
+  },
+  {
+    id: 'order-confirmation',
+    type: WorkflowSuggestionType.ORDER_CONFIRMATION,
+    title: 'Order Confirmation',
+    description: 'Multi-channel order confirmation with email and in-app notifications',
+    icon: 'shopping-cart',
+    examplePrompt:
+      'Create an order confirmation workflow that sends an email receipt and an in-app notification with order details',
+  },
+  {
+    id: 'marketing-campaign',
+    type: WorkflowSuggestionType.MARKETING,
+    title: 'Marketing Campaign',
+    description: 'Promotional notifications with digest and delay capabilities',
+    icon: 'megaphone',
+    examplePrompt:
+      'Create a marketing workflow with a delay before sending and the ability to digest multiple promotions',
+  },
+  {
+    id: 'real-time-alert',
+    type: WorkflowSuggestionType.REAL_TIME_ALERT,
+    title: 'Real-time Alert',
+    description: 'Urgent notifications via push and SMS for time-sensitive events',
+    icon: 'bell',
+    examplePrompt:
+      'Create an urgent alert workflow that sends push notifications immediately and falls back to SMS if needed',
+  },
+  {
+    id: 'activity-digest',
+    type: WorkflowSuggestionType.DIGEST,
+    title: 'Activity Digest',
+    description: 'Aggregate multiple events into a single summary notification',
+    icon: 'layers',
+    examplePrompt:
+      'Create a daily digest workflow that collects all user activities and sends a summary email at the end of the day',
+  },
+];
+
 @Injectable()
 export class GetSuggestionsUseCase {
   execute(): WorkflowSuggestionDto[] {
-    return [
-      {
-        id: 'welcome-email',
-        type: WorkflowSuggestionType.WELCOME,
-        title: 'Welcome Email',
-        description: 'Send a personalized welcome email to new users when they sign up',
-        icon: 'mail',
-        examplePrompt:
-          'Create a welcome workflow that sends a personalized email to new users with their name and a getting started guide',
-      },
-      {
-        id: 'password-reset',
-        type: WorkflowSuggestionType.PASSWORD_RESET,
-        title: 'Password Reset',
-        description: 'Secure password reset flow with email verification',
-        icon: 'lock',
-        examplePrompt:
-          'Create a password reset workflow that sends a secure reset link via email and confirms when the password is changed',
-      },
-      {
-        id: 'order-confirmation',
-        type: WorkflowSuggestionType.ORDER_CONFIRMATION,
-        title: 'Order Confirmation',
-        description: 'Multi-channel order confirmation with email and in-app notifications',
-        icon: 'shopping-cart',
-        examplePrompt:
-          'Create an order confirmation workflow that sends an email receipt and an in-app notification with order details',
-      },
-      {
-        id: 'marketing-campaign',
-        type: WorkflowSuggestionType.MARKETING,
-        title: 'Marketing Campaign',
-        description: 'Promotional notifications with digest and delay capabilities',
-        icon: 'megaphone',
-        examplePrompt:
-          'Create a marketing workflow with a delay before sending and the ability to digest multiple promotions',
-      },
-      {
-        id: 'real-time-alert',
-        type: WorkflowSuggestionType.REAL_TIME_ALERT,
-        title: 'Real-time Alert',
-        description: 'Urgent notifications via push and SMS for time-sensitive events',
-        icon: 'bell',
-        examplePrompt:
-          'Create an urgent alert workflow that sends push notifications immediately and falls back to SMS if needed',
-      },
-      {
-        id: 'activity-digest',
-        type: WorkflowSuggestionType.DIGEST,
-        title: 'Activity Digest',
-        description: 'Aggregate multiple events into a single summary notification',
-        icon: 'layers',
-        examplePrompt:
-          'Create a daily digest workflow that collects all user activities and sends a summary email at the end of the day',
-      },
-    ];
+    return WORKFLOW_SUGGESTIONS;
   }
 }
apps/api/src/app/ai/schemas/maily.schema.ts (1)

54-260: Avoid nullable content arrays to prevent null propagation.
Line 61 and other content fields are nullable, so content: null can pass validation and force downstream renderers to handle nulls. If the renderer expects arrays, prefer required arrays (empty when needed) or normalize after parsing.

♻️ Suggested change (apply similarly to other `content` fields)
-  content: z.array(mailyInlineContentSchema).nullable(),
+  content: z.array(mailyInlineContentSchema),
apps/api/src/config/env.validators.ts (1)

53-61: Consider conditional validation for AI credentials.
With empty-string defaults, the service can start without a usable key/model and only fail at runtime. Consider failing fast when AI_LLM_PROVIDER is set/enabled.
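The fail-fast idea could look roughly like this (a sketch under the assumption that an empty key/model should abort startup when a provider is configured; the env variable names follow the PR, the function itself is hypothetical):

```typescript
// Hypothetical startup guard: reject an enabled-but-unusable LLM configuration
// at boot instead of failing on the first AI request.
function validateLlmEnv(env: Record<string, string | undefined>): void {
  const provider = env.AI_LLM_PROVIDER;
  if (!provider) return; // feature not enabled, nothing to validate
  if (!env.AI_LLM_API_KEY) {
    throw new Error(`AI_LLM_API_KEY is required when AI_LLM_PROVIDER=${provider}`);
  }
  if (!env.AI_LLM_MODEL) {
    throw new Error(`AI_LLM_MODEL is required when AI_LLM_PROVIDER=${provider}`);
  }
}
```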

apps/api/src/app/ai/prompts/step.prompt.ts (1)

259-284: Use a function declaration for the pure helper.
Prefer function over a const arrow for this pure utility. As per coding guidelines.

♻️ Proposed refactor
-export const buildStepPrompt = ({
+export function buildStepPrompt({
   step,
   workflowMetadata,
   userPrompt,
 }: {
   step: StepMetadata;
   workflowMetadata: WorkflowMetadata;
   userPrompt: string;
-}): string => {
+}): string {
   const { name: workflowName, description, steps, reasoning } = workflowMetadata;

   const stepsOverview = steps.map((s, i) => `${i + 1}. ${s.name} (${s.type})`).join('\n');

   return `Generate the content for step: **${step.name}** (type: ${step.type})

 ## Context of the user's workflow request
 ${userPrompt}

 ## The Generated Workflow Context
 - **Workflow Name**: ${workflowName}
 - **Description**: ${description || 'Not specified'}
 - **Design Rationale**: ${reasoning.summary}

 ## The Generated Workflow Steps Overview
 ${stepsOverview}`;
-};
+}
apps/api/src/app/ai/dtos/generate-workflow.dto.ts (1)

27-75: Confirm class-based DTOs are required.
If these DTOs are only used for typing, consider interfaces to align with backend style; keep classes if decorators are required at runtime. As per coding guidelines.

apps/api/src/app/ai/usecases/generate-workflow/generate-workflow.usecase.ts (2)

99-120: Consider parallelizing step control value generation for improved latency.

Steps are processed sequentially in the for...of loop (Lines 111-117). Since each step's generation is independent, using Promise.all could significantly reduce total latency, especially for workflows with many steps.

Proposed parallel implementation
   private async generateStepControlValues({
     workflowMetadata,
     userPrompt,
   }: {
     workflowMetadata: WorkflowMetadata;
     userPrompt: string;
   }): Promise<StepWithControlValues[]> {
     const { steps } = workflowMetadata;
     this.logger.info(`AI Phase 2: Generating control values for ${steps.length} steps...`);

-    const stepsWithControlValues: StepWithControlValues[] = [];
-
-    for (const step of steps) {
-      const controlValues = await this.generateSingleStepControlValues({ step, workflowMetadata, userPrompt });
-      stepsWithControlValues.push({
-        ...step,
-        controlValues,
-      });
-    }
-
-    return stepsWithControlValues;
+    const stepsWithControlValues = await Promise.all(
+      steps.map(async (step) => {
+        const controlValues = await this.generateSingleStepControlValues({ step, workflowMetadata, userPrompt });
+
+        return { ...step, controlValues };
+      })
+    );
+
+    return stepsWithControlValues;
   }

Note: If rate limiting is a concern, the sequential approach may be intentional. Consider adding a comment to document the choice.


147-159: Redundant return path in email block handling.

Line 155 returns wrappedResult.root for the HTML case, but this is already handled by the fallback at Line 158. The early return at Line 155 is unreachable when editorType === 'block' due to the return at Line 152.

Proposed simplification
     if (stepType === StepTypeEnum.EMAIL) {
       const result = wrappedResult as z.infer<typeof wrappedEmailControlSchema>;
       const { editorType, body, ...rest } = result.root;
       // The Maily JSON body is returned as an object, so we need to stringify it.
       if (editorType === 'block') {
         return { editorType, body: JSON.stringify(body), ...rest };
       }
-
-      return wrappedResult.root as Record<string, unknown>;
     }

     return wrappedResult.root as Record<string, unknown>;

Comment on lines +63 to +75
export class AiConversationDto {
@ApiProperty({ description: 'Conversation messages', type: [AiMessageDto] })
messages: AiMessageDto[];

@ApiProperty({ description: 'Conversation status', enum: AiConversationStatusEnum })
status: AiConversationStatusEnum;

@ApiProperty({ description: 'Generated workflow configuration', type: WorkflowResponseDto })
workflow: WorkflowResponseDto;

@ApiProperty({ description: 'AI reasoning for the workflow design', type: WorkflowReasoningDto })
reasoning: WorkflowReasoningDto;
}
Contributor Author

The AI generate-workflow endpoint response DTO returned to the Dashboard consists of:

  • messages: the user prompt and an assistant message with the reasoning
  • conversation status (may be used in the future)
  • the generated workflow, following the same DTO as the workflows controller
  • reasoning

import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
import { WorkflowSuggestionType } from './generate-workflow.dto';

export class WorkflowSuggestionDto {
Contributor Author

DTO used for the workflow suggestions; I'll polish the endpoint a bit later.

Comment on lines +4 to +5
export const VALID_JSON_OUTPUT_REQUIREMENTS = `- ALWAYS return a valid JSON object directly at the root level.`;
export const STEP_VALID_JSON_ROOT_OUTPUT_REQUIREMENTS = `- ALWAYS return a valid JSON object directly with the key "root" and the value being the JSON object of the step.`;
Contributor Author

JSON output requirements for the workflow and steps.
Steps additionally wrap the main object in a "root" key because of OpenAI's limitations on unions and discriminated unions; more details below.

- Use appropriate formatting and styling only when it is necessary to improve the readability of the content
- Align content with the workflow's purpose and the user's original request
- Keep the content consistent with the other steps in the workflow
- Use appropriate personalization with Liquid templating ({{ subscriber.firstName }}, {{ payload.* }})`;
Contributor Author

One of the next steps would be to supply the variables context and the individual steps' variables context, for example for an email step after a digest.

Comment on lines +38 to +41
- ALWAYS return required properties: subject, editorType, body
- subject: string - Email subject line.
- editorType: "block"
- body: object - Email body in Maily TipTap JSON format
Contributor Author

Forcing these in the system prompt here, as the structured output doesn't always return what is needed :D

};

@Injectable()
export class LlmService implements OnModuleInit {
Contributor Author

A generic LLM service that we could reuse.

throw new ServiceUnavailableException('Failed to generate content. Please try again.');
}

private async callWithRetries<T>(fn: () => Promise<T>, retryCount = 0): Promise<T> {
Contributor Author

Handles retries when validation of the returned object doesn't pass; there is also another internal retry mechanism that handles connection issues with the AI provider.

Comment on lines +44 to +50
// Phase 1: Generate workflow metadata and step structure
const workflowMetadata = await this.generateWorkflowMetadata(userPrompt);
const { reasoning, steps: _steps, ...workflowFields } = workflowMetadata;

// Phase 2: Generate step control values based on step type
// Each prompt instructs the AI to wrap the response in { root: { ... } }
const stepsWithControlValues = await this.generateStepControlValues({ workflowMetadata, userPrompt });
Contributor Author

Workflow generation is split into two phases:

  • first we generate the workflow metadata: name, description, severity, tags, and a steps array with type and name
  • then we use the workflow metadata to generate the steps' content

This is done for a few reasons:

  • structured output in most AI providers has limitations, for example 10 levels of schema nesting, which was exceeded by the Maily editor schema
  • AI output hardly follows a complex output schema
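The two-phase flow described above can be sketched with mocked generators standing in for the LLM calls (the interfaces and function here are illustrative, not the actual use case code):

```typescript
// Sketch of the two-phase design: a small, shallow schema for metadata in
// phase 1, then one focused call per step in phase 2, sidestepping provider
// limits on schema nesting depth.
interface StepMetadata { type: string; name: string; }
interface WorkflowMetadata { name: string; steps: StepMetadata[]; }

async function generateWorkflow(
  generateMetadata: (prompt: string) => Promise<WorkflowMetadata>,
  generateStepContent: (step: StepMetadata, metadata: WorkflowMetadata) => Promise<object>,
  userPrompt: string
) {
  // Phase 1: workflow metadata, including the steps array (type + name only)
  const metadata = await generateMetadata(userPrompt);

  // Phase 2: generate each step's content using the phase-1 metadata as context
  const steps: Array<StepMetadata & { controlValues: object }> = [];
  for (const step of metadata.steps) {
    steps.push({ ...step, controlValues: await generateStepContent(step, metadata) });
  }

  return { ...metadata, steps };
}
```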


@Injectable()
export class GetSuggestionsUseCase {
execute(): WorkflowSuggestionDto[] {
Contributor Author

will be polished and used later

Comment on lines 54 to 61
AI_LLM_PROVIDER: str({ choices: ['openai', 'anthropic'], default: 'openai' }),
AI_LLM_API_KEY: str({ default: '' }),
AI_LLM_MODEL: str({ default: '' }),
AI_LLM_MAX_TOKENS: num({ default: 4096 }),
AI_LLM_TEMPERATURE: num({ default: 0.7 }),
AI_LLM_MAX_RETRIES: num({ default: 3 }),
AI_LLM_SCHEMA_VALIDATION_RETRIES: num({ default: 3 }),
AI_LLM_REQUEST_TIMEOUT_MS: num({ default: 30000 }),
Contributor Author

API config that we will be controlling.

@pkg-pr-new

pkg-pr-new bot commented Jan 20, 2026

Open in StackBlitz

npm i https://pkg.pr.new/novuhq/novu@9867
npm i https://pkg.pr.new/novuhq/novu/@novu/providers@9867
npm i https://pkg.pr.new/novuhq/novu/@novu/shared@9867

commit: f62d51e

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/api/src/app/ai/prompts/step.prompt.ts`:
- Around line 236-250: The throttle prompt currently allows `type: "fixed" |
null` which conflicts with the validation schema that expects `"fixed"` or
`"dynamic"`; update the prompt block (the `throttle:` template where
STEP_CRITICAL_OUTPUT_REQUIREMENTS is used) to require `type: "fixed" |
"dynamic"` and, if needed, add brief schema bullets for the `"dynamic"` variant
(e.g., required fields used by dynamic throttles) so generated AI output matches
the Zod schema exactly.
♻️ Duplicate comments (1)
apps/api/src/app/ai/schemas/steps-control.schema.ts (1)

105-121: Prefer a discriminated union for throttle controls.

The type field is now non-nullable, so you can use z.discriminatedUnion for clearer errors and faster validation.

♻️ Proposed change
-const aiThrottleControlSchema = z.union([aiThrottleFixedControlSchema, aiThrottleDynamicControlSchema]);
+const aiThrottleControlSchema = z.discriminatedUnion('type', [
+  aiThrottleFixedControlSchema,
+  aiThrottleDynamicControlSchema,
+]);
🧹 Nitpick comments (5)
apps/api/src/app/ai/services/llm.service.ts (3)

10-43: Consider using interfaces instead of types.

Per coding guidelines for backend code in apps/api/**/*.ts, interfaces are preferred over types. These exported type aliases could be interfaces.

♻️ Suggested refactor
-export type LlmConfig = {
+export interface LlmConfig {
   provider: LlmProvider;
   apiKey: string;
   model: string;
   maxOutputTokens: number;
   temperature: number;
   maxRetries: number;
-};
+}

-export type GenerateTextInput = {
+export interface GenerateTextInput {
   systemPrompt: string;
   userPrompt: string;
   maxOutputTokens?: number;
   temperature?: number;
-};
+}

-export type GenerateObjectInput<T extends z.ZodType> = {
+export interface GenerateObjectInput<T extends z.ZodType> {
   systemPrompt: string;
   userPrompt: string;
   schema: T;
   maxOutputTokens?: number;
   temperature?: number;
-};
+}

-export type ChatStreamInput = {
+export interface ChatStreamInput {
   systemPrompt: string;
   message: string;
   messageHistory: Array<{
     role: 'user' | 'assistant' | 'system';
     content: string;
   }>;
   maxOutputTokens?: number;
   temperature?: number;
-};
+}

74-79: Missing validation for parsed environment variables.

parseInt and parseFloat return NaN for invalid values, which could cause unexpected behavior. Consider adding validation or using default values more defensively.

♻️ Suggested defensive parsing
+  private parseIntEnv(value: string | undefined, defaultValue: number): number {
+    const parsed = parseInt(value || '', 10);
+    return Number.isNaN(parsed) ? defaultValue : parsed;
+  }
+
+  private parseFloatEnv(value: string | undefined, defaultValue: number): number {
+    const parsed = parseFloat(value || '');
+    return Number.isNaN(parsed) ? defaultValue : parsed;
+  }
+
   private initializeConfig(): void {
     // ... existing code ...
     this.config = {
       provider,
       apiKey,
       model: process.env.AI_LLM_MODEL || this.getDefaultModel(provider),
-      maxOutputTokens: parseInt(process.env.AI_LLM_MAX_OUTPUT_TOKENS || '4096', 10),
-      temperature: parseFloat(process.env.AI_LLM_TEMPERATURE || '0.7'),
-      maxRetries: parseInt(process.env.AI_LLM_MAX_RETRIES || '3', 10),
+      maxOutputTokens: this.parseIntEnv(process.env.AI_LLM_MAX_OUTPUT_TOKENS, 4096),
+      temperature: this.parseFloatEnv(process.env.AI_LLM_TEMPERATURE, 0.7),
+      maxRetries: this.parseIntEnv(process.env.AI_LLM_MAX_RETRIES, 3),
     };
-    this.maxSchemaValidationRetries = parseInt(process.env.AI_LLM_SCHEMA_VALIDATION_RETRIES || '3', 10);
-    this.requestTimeoutMs = parseInt(process.env.AI_LLM_REQUEST_TIMEOUT_MS || '30000', 10);
+    this.maxSchemaValidationRetries = this.parseIntEnv(process.env.AI_LLM_SCHEMA_VALIDATION_RETRIES, 3);
+    this.requestTimeoutMs = this.parseIntEnv(process.env.AI_LLM_REQUEST_TIMEOUT_MS, 30000);

289-293: Add type guard for statusCode comparison.

error?.statusCode >= 500 could behave unexpectedly if statusCode is undefined or not a number (e.g., undefined >= 500 is false, but a string like "500" would coerce). Consider adding explicit type checking.

♻️ Suggested fix
       const isRetryableError =
         error?.name === 'AbortError' ||
         error?.name === 'AI_NoObjectGeneratedError' ||
-        error?.statusCode >= 500 ||
-        error?.statusCode === 429;
+        (typeof error?.statusCode === 'number' && error.statusCode >= 500) ||
+        error?.statusCode === 429;
apps/api/src/app/ai/schemas/steps-control.schema.ts (1)

37-41: Avoid nested nullable for redirect.

aiRedirectSchema is already nullable, so aiRedirectSchema.nullable() creates a redundant nullable wrapper and can bloat the JSON schema.

♻️ Proposed simplification
 const aiActionSchema = z
   .object({
     label: z.string(),
-    redirect: aiRedirectSchema.nullable(),
+    redirect: aiRedirectSchema,
   })
   .nullable();
apps/api/src/app/ai/prompts/step.prompt.ts (1)

259-272: Use a function declaration for the pure helper.

buildStepPrompt is pure and can be declared with function for consistency with backend TS guidelines.

♻️ Suggested change
-export const buildStepPrompt = ({
+export function buildStepPrompt({
   step,
   workflowMetadata,
   userPrompt,
 }: {
   step: StepMetadata;
   workflowMetadata: WorkflowMetadata;
   userPrompt: string;
-}): string => {
+}): string {
   const { name: workflowName, description, steps, reasoning } = workflowMetadata;

   const stepsOverview = steps.map((s, i) => `${i + 1}. ${s.name} (${s.type})`).join('\n');

   return `Generate the content for step: **${step.name}** (type: ${step.type})
@@
 ${stepsOverview}`;
-};
+}
As per coding guidelines, use the `function` keyword for pure functions.

@github-actions
Contributor

LaunchDarkly flag references

🔍 1 flag added or modified

- Name: IS_AI_WORKFLOW_GENERATION_ENABLED
  Key: IS_AI_WORKFLOW_GENERATION_ENABLED
