feat(api-service): ai workflow generation from the user prompt fixes NV-7047 #9867
base: next
Conversation
Walkthrough

Adds AI workflow generation capability: three new runtime dependencies; registers AiModule in the application module; a new AI controller exposing POST /ai/generate-workflow and GET /ai/suggestions; DTOs, prompts, and extensive zod schemas for workflow and step outputs; an LlmService with OpenAI/Anthropic support, streaming, retries, and timeouts; use cases to generate workflows and return suggestions (including upsert of generated workflows); environment validators for LLM configuration; new shared enums and a feature-flag key to gate the feature; and minor workflow command nullability and dashboard input defaulting changes.

Pre-merge checks: 2 passed, 1 failed (1 warning).
Actionable comments posted: 9
In `@apps/api/src/app/ai/prompts/step.prompt.ts`:
- Around line 25-257: In STEP_CONTENT_PROMPTS update the HTML Format Guidelines
to remove the contradiction about table cell inline styles: state a single
consistent rule (either allow inline styles on table cells for
spacing/compatibility or prohibit them) and adjust the surrounding guidance
lines (e.g., "Body must be valid HTML with inline styles..." and "Use tables for
layout... Never use inline styles for the table cells.") so they agree;
reference STEP_CONTENT_PROMPTS and the "HTML Format Guidelines" section and make
the rule explicit (e.g., "You may use inline styles on table cells for
padding/colors to ensure email client compatibility" OR "Avoid inline styles on
table cells; apply styles on table elements instead for compatibility") and
update any example notes accordingly.
In `@apps/api/src/app/ai/schemas/steps-control.schema.ts`:
- Around line 105-121: The throttle schemas use nullable discriminators which
makes the union ambiguous; update aiThrottleFixedControlSchema and
aiThrottleDynamicControlSchema to make the type field non-nullable (use
z.literal(ThrottleTypeEnum.FIXED) and z.literal(ThrottleTypeEnum.DYNAMIC)
without .nullable()), and then convert aiThrottleControlSchema to a
discriminated union (z.discriminatedUnion('type', [aiThrottleFixedControlSchema,
aiThrottleDynamicControlSchema])) so the runtime can unambiguously pick the
correct branch by the type field.
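For readers unfamiliar with discriminated unions, the dispatch behavior this fix relies on can be sketched in plain TypeScript (the field names below are illustrative, not the PR's actual throttle schemas):

```typescript
// Simplified analogue of the suggested zod change: a non-nullable literal
// `type` discriminant lets both the compiler and a runtime dispatcher pick
// the correct branch unambiguously, which is what
// z.discriminatedUnion('type', [...]) relies on.
type FixedThrottle = { type: 'fixed'; limit: number; windowMs: number };
type DynamicThrottle = { type: 'dynamic'; limitKey: string };
type ThrottleControl = FixedThrottle | DynamicThrottle;

function describeThrottle(control: ThrottleControl): string {
  // Narrowing on the `type` literal; a nullable `type` would defeat this.
  if (control.type === 'fixed') {
    return `fixed: ${control.limit} per ${control.windowMs}ms`;
  }

  return `dynamic: keyed by ${control.limitKey}`;
}
```

With a nullable discriminator, `{ type: null, ... }` could match either branch of a plain union, producing confusing validation errors; a required literal removes that ambiguity.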
In `@apps/api/src/app/ai/schemas/workflow-generation.schema.ts`:
- Around line 50-57: The tags constraint in workflowMetadataSchema is
inconsistent: the z.array(z.string()).max(16).nullable() allows 16 tags but the
description says "max 5"; update the schema so the constraint and description
match—either change the .max(16) call on the tags field to .max(5) if the
intended limit is 5, or update the descriptive string to "max 16" if 16 is
intended; locate this in workflowMetadataSchema and adjust only the max value or
the description text accordingly to keep them consistent.
In `@apps/api/src/app/ai/services/llm.service.ts`:
- Around line 244-270: The streamChat generator currently calls streamText
inside callWithRetries but then iterates result.textStream without retries or
timeout; modify streamChat so the async iteration over result.textStream is also
protected: move the consumer logic into a retryable wrapper (use the same
callWithRetries or a new helper) so any errors during for await (const chunk of
result.textStream) are caught and retried, and enforce a per-chunk or overall
timeout (use AbortController or this.config.timeout) to abort stalled streams;
reference streamChat, callWithRetries, streamText and result.textStream when
making the change.
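A minimal sketch of the suggested protection, assuming a string-chunk stream (`slowStream` and `consumeWithTimeout` are hypothetical names, not the service's actual API):

```typescript
// Hypothetical stand-in for result.textStream.
async function* slowStream(): AsyncGenerator<string> {
  yield 'hello ';
  yield 'world';
}

// Race each chunk against a timer so a stalled stream fails fast instead of
// hanging; a failed consume can then be re-run by the same retry wrapper.
async function consumeWithTimeout(stream: AsyncIterable<string>, perChunkTimeoutMs: number): Promise<string> {
  const iterator = stream[Symbol.asyncIterator]();
  let out = '';

  while (true) {
    const next = await new Promise<IteratorResult<string>>((resolve, reject) => {
      const timer = setTimeout(() => reject(new Error('stream chunk timed out')), perChunkTimeoutMs);
      iterator.next().then(
        (result) => {
          clearTimeout(timer);
          resolve(result);
        },
        (error) => {
          clearTimeout(timer);
          reject(error);
        }
      );
    });

    if (next.done) return out;
    out += next.value;
  }
}
```

The timer is cleared on every settled chunk so no stray timeout fires after the stream completes; an overall deadline via AbortController would be a complementary layer on top of this per-chunk guard.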
- Around line 140-146: The log message hardcodes "OpenAI API call failed" even
when Anthropic (or another provider) is used; update the two logger.error calls
(the one using errorObj?.statusCode and the later one around line 153) to
include the actual provider name instead of "OpenAI" by using the service's
provider identifier (e.g., this.provider, this.client?.name, or whatever
providerName field exists) so the log becomes "<provider> API call failed" and
keep the rest of the errorContext/statusCode/url/responseBody payload unchanged.
- Around line 176-204: callWithRetries creates an AbortController/timeout but
never passes the signal into the API call and only clears the timeout on error;
update callWithRetries to accept and forward an AbortSignal to the underlying
request and always clear the timeout (on success, retry paths, and error).
Specifically, change callWithRetries(fn: (opts?: { abortSignal?: AbortSignal })
=> Promise<T>, ...) so you call fn({ abortSignal: abortController.signal }) (or
otherwise pass the signal through), move clearTimeout(timeoutId) into a finally
block so it always runs, and ensure retries (when checking
maxSchemaValidationRetries and calling this.callWithRetries(...)) reuse a fresh
AbortController/timeout or propagate/renew the signal appropriately; keep
existing checks for isConfigured/model/config and existing error handling via
handleAIError.
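The shape this comment asks for could look roughly like the following sketch (the real method additionally distinguishes schema-validation retries and routes errors through handleAIError; the retry policy here is illustrative):

```typescript
// Sketch: forward an AbortSignal into each attempt and always clear the
// timeout in `finally`, so success, error, and retry paths all clean up.
async function callWithRetries<T>(
  fn: (opts: { abortSignal: AbortSignal }) => Promise<T>,
  maxRetries: number,
  timeoutMs: number
): Promise<T> {
  for (let attempt = 0; ; attempt += 1) {
    // A fresh controller per attempt: a retry must not start already aborted.
    const abortController = new AbortController();
    const timeoutId = setTimeout(() => abortController.abort(), timeoutMs);

    try {
      return await fn({ abortSignal: abortController.signal });
    } catch (error) {
      if (attempt >= maxRetries) throw error;
    } finally {
      clearTimeout(timeoutId); // runs on success, error, and retry alike
    }
  }
}
```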
- Around line 206-223: In generateObject, update the args to use maxOutputTokens
(same as in generateText) instead of maxTokens so the Vercel AI SDK receives the
correct parameter; locate the generateObject function in llm.service.ts and
replace any occurrence of maxTokens in its args object with maxOutputTokens,
ensuring it uses input.maxTokens ?? this.config.maxTokens (or input.maxTokens)
mapped to maxOutputTokens and matches the pattern used by generateText.
In
`@apps/api/src/app/workflows-v1/usecases/update-workflow/update-workflow.command.ts`:
- Around line 45-54: The description and tags fields currently use `@IsOptional()`
but accept null and therefore still run `@Length` validation; update the DTO by
adding `@ValidateIf((o) => o.description !== null)` above the description property
and `@ValidateIf((o) => o.tags !== null)` above the tags property (matching the
existing pattern used for userPreferences) so validators (`Length`, `ArrayUnique`,
`IsArray`) are skipped when null is explicitly passed.
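The requested semantics can be illustrated without class-validator (a decorator-free sketch; `validateField` and its validators are hypothetical helpers, not part of the PR):

```typescript
type Validator = (value: unknown) => string | null;

// Mirrors the decorator stack: @IsOptional skips undefined,
// @ValidateIf((o) => o.field !== null) additionally skips explicit null,
// and only then do the remaining validators (Length, IsArray, ...) run.
function validateField(value: unknown, skipWhenNull: boolean, validators: Validator[]): string[] {
  if (value === undefined) return []; // @IsOptional semantics
  if (skipWhenNull && value === null) return []; // @ValidateIf semantics

  return validators.map((validate) => validate(value)).filter((message): message is string => message !== null);
}
```

Without the `@ValidateIf` guard (`skipWhenNull = false` here), an explicit `null` falls through to the validators and fails, which is the bug being flagged.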
In
`@apps/api/src/app/workflows-v2/usecases/upsert-workflow/upsert-workflow.command.ts`:
- Around line 109-117: Add `@ValidateIf` checks to skip validation when null is
explicitly passed: for the description property add `@ValidateIf((o) =>
o.description !== null)` above the `@IsOptional`/`@IsString` decorators, and for the
tags property add `@ValidateIf((o) => o.tags !== null)` above the
`@IsOptional`/`@IsArray`/`@IsString({ each: true })`/`@ArrayMaxSize` decorators in
upsert-workflow.command.ts so validators only run when the value is not null.
🧹 Nitpick comments (8)
apps/dashboard/src/components/workflow-editor/url-input.tsx (1)
61-66: UI displays `SELF` when the field value is undefined, but form state may remain undefined on submission.

The Select component defensively shows `SELF` when `field.value` is undefined, but the actual form state depends on whether `redirect.target` is included in the step's `controls.values`. If the backend doesn't set this field, submitting without user interaction would send `undefined` rather than `SELF`.

Ensure `redirect.target` is initialized in the form's defaultValues, either by confirming the backend always provides it in `step.controls.values`, or by explicitly setting it when the component mounts if undefined:

```diff
 <Select
   value={field.value ?? RedirectTargetEnum.SELF}
   onValueChange={(value) => {
     field.onChange(value);
     saveForm();
   }}
+  onOpenChange={() => {
+    if (field.value === undefined) {
+      field.onChange(RedirectTargetEnum.SELF);
+    }
+  }}
 >
```

Alternatively, ensure the form's schema or step initialization sets `redirect.target` to `SELF` by default.

apps/api/src/app/ai/usecases/get-suggestions/get-suggestions.usecase.ts (1)
6-62: Consider hoisting static suggestions to a module-level constant.
This avoids re-allocating the array on every call and makes reuse/testing easier.

♻️ Suggested refactor: hoist the array into a module-level `WORKFLOW_SUGGESTIONS` constant and return it from `execute()`:

```ts
const WORKFLOW_SUGGESTIONS: WorkflowSuggestionDto[] = [
  {
    id: 'welcome-email',
    type: WorkflowSuggestionType.WELCOME,
    title: 'Welcome Email',
    description: 'Send a personalized welcome email to new users when they sign up',
    icon: 'mail',
    examplePrompt:
      'Create a welcome workflow that sends a personalized email to new users with their name and a getting started guide',
  },
  {
    id: 'password-reset',
    type: WorkflowSuggestionType.PASSWORD_RESET,
    title: 'Password Reset',
    description: 'Secure password reset flow with email verification',
    icon: 'lock',
    examplePrompt:
      'Create a password reset workflow that sends a secure reset link via email and confirms when the password is changed',
  },
  {
    id: 'order-confirmation',
    type: WorkflowSuggestionType.ORDER_CONFIRMATION,
    title: 'Order Confirmation',
    description: 'Multi-channel order confirmation with email and in-app notifications',
    icon: 'shopping-cart',
    examplePrompt:
      'Create an order confirmation workflow that sends an email receipt and an in-app notification with order details',
  },
  {
    id: 'marketing-campaign',
    type: WorkflowSuggestionType.MARKETING,
    title: 'Marketing Campaign',
    description: 'Promotional notifications with digest and delay capabilities',
    icon: 'megaphone',
    examplePrompt:
      'Create a marketing workflow with a delay before sending and the ability to digest multiple promotions',
  },
  {
    id: 'real-time-alert',
    type: WorkflowSuggestionType.REAL_TIME_ALERT,
    title: 'Real-time Alert',
    description: 'Urgent notifications via push and SMS for time-sensitive events',
    icon: 'bell',
    examplePrompt:
      'Create an urgent alert workflow that sends push notifications immediately and falls back to SMS if needed',
  },
  {
    id: 'activity-digest',
    type: WorkflowSuggestionType.DIGEST,
    title: 'Activity Digest',
    description: 'Aggregate multiple events into a single summary notification',
    icon: 'layers',
    examplePrompt:
      'Create a daily digest workflow that collects all user activities and sends a summary email at the end of the day',
  },
];

@Injectable()
export class GetSuggestionsUseCase {
  execute(): WorkflowSuggestionDto[] {
    return WORKFLOW_SUGGESTIONS;
  }
}
```

apps/api/src/app/ai/schemas/maily.schema.ts (1)
54-260: Avoid nullable `content` arrays to prevent null propagation.

Line 61 and other `content` fields are nullable, so `content: null` can pass validation and force downstream renderers to handle nulls. If the renderer expects arrays, prefer required arrays (empty when needed) or normalize after parsing.

♻️ Suggested change (apply similarly to other `content` fields):

```diff
-  content: z.array(mailyInlineContentSchema).nullable(),
+  content: z.array(mailyInlineContentSchema),
```

apps/api/src/config/env.validators.ts (1)
53-61: Consider conditional validation for AI credentials.

With empty-string defaults, the service can start without a usable key/model and only fail at runtime. Consider failing fast when `AI_LLM_PROVIDER` is set/enabled.
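A fail-fast check could be as small as the following sketch (the env var names come from this PR; the guard itself is an assumption, since the envalid setup could express the same rule differently):

```typescript
// Throw at startup when the AI provider is enabled but no API key is set,
// instead of deferring the failure to the first LLM request.
function validateAiConfig(env: Record<string, string | undefined>): void {
  const provider = env.AI_LLM_PROVIDER;

  if (provider && !env.AI_LLM_API_KEY) {
    throw new Error(`AI_LLM_API_KEY is required when AI_LLM_PROVIDER is "${provider}"`);
  }
}
```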
apps/api/src/app/ai/prompts/step.prompt.ts (1)

259-284: Use a function declaration for the pure helper.

Prefer `function` over a const arrow for this pure utility. As per coding guidelines.

♻️ Proposed refactor:

```diff
-export const buildStepPrompt = ({
+export function buildStepPrompt({
   step,
   workflowMetadata,
   userPrompt,
 }: {
   step: StepMetadata;
   workflowMetadata: WorkflowMetadata;
   userPrompt: string;
-}): string => {
+}): string {
   const { name: workflowName, description, steps, reasoning } = workflowMetadata;
   const stepsOverview = steps.map((s, i) => `${i + 1}. ${s.name} (${s.type})`).join('\n');

   return `Generate the content for step: **${step.name}** (type: ${step.type})

 ## Context of the user's workflow request
 ${userPrompt}

 ## The Generated Workflow Context
 - **Workflow Name**: ${workflowName}
 - **Description**: ${description || 'Not specified'}
 - **Design Rationale**: ${reasoning.summary}

 ## The Generated Workflow Steps Overview
 ${stepsOverview}`;
-};
+}
```
apps/api/src/app/ai/dtos/generate-workflow.dto.ts (1)

27-75: Confirm class-based DTOs are required.

If these DTOs are only used for typing, consider interfaces to align with backend style; keep classes if decorators are required at runtime. As per coding guidelines.

apps/api/src/app/ai/usecases/generate-workflow/generate-workflow.usecase.ts (2)
99-120: Consider parallelizing step control value generation for improved latency.

Steps are processed sequentially in the `for...of` loop (Lines 111-117). Since each step's generation is independent, using `Promise.all` could significantly reduce total latency, especially for workflows with many steps.

Proposed parallel implementation:

```diff
 private async generateStepControlValues({
   workflowMetadata,
   userPrompt,
 }: {
   workflowMetadata: WorkflowMetadata;
   userPrompt: string;
 }): Promise<StepWithControlValues[]> {
   const { steps } = workflowMetadata;
   this.logger.info(`AI Phase 2: Generating control values for ${steps.length} steps...`);
-  const stepsWithControlValues: StepWithControlValues[] = [];
-
-  for (const step of steps) {
-    const controlValues = await this.generateSingleStepControlValues({ step, workflowMetadata, userPrompt });
-    stepsWithControlValues.push({
-      ...step,
-      controlValues,
-    });
-  }
-
-  return stepsWithControlValues;
+  const stepsWithControlValues = await Promise.all(
+    steps.map(async (step) => {
+      const controlValues = await this.generateSingleStepControlValues({ step, workflowMetadata, userPrompt });
+
+      return { ...step, controlValues };
+    })
+  );
+
+  return stepsWithControlValues;
 }
```

Note: If rate limiting is a concern, the sequential approach may be intentional. Consider adding a comment to document the choice.
147-159: Redundant return path in email block handling.

Line 155 returns `wrappedResult.root` for the HTML case, but this is already handled by the fallback at Line 158. The early return at Line 155 is unreachable when `editorType === 'block'` due to the return at Line 152.

Proposed simplification:

```diff
 if (stepType === StepTypeEnum.EMAIL) {
   const result = wrappedResult as z.infer<typeof wrappedEmailControlSchema>;
   const { editorType, body, ...rest } = result.root;

   // The Maily JSON body is returned as an object, so we need to stringify it.
   if (editorType === 'block') {
     return { editorType, body: JSON.stringify(body), ...rest };
   }
-
-  return wrappedResult.root as Record<string, unknown>;
 }

 return wrappedResult.root as Record<string, unknown>;
```
Resolved review threads:
- apps/api/src/app/workflows-v1/usecases/update-workflow/update-workflow.command.ts
- apps/api/src/app/workflows-v2/usecases/upsert-workflow/upsert-workflow.command.ts
```ts
export class AiConversationDto {
  @ApiProperty({ description: 'Conversation messages', type: [AiMessageDto] })
  messages: AiMessageDto[];

  @ApiProperty({ description: 'Conversation status', enum: AiConversationStatusEnum })
  status: AiConversationStatusEnum;

  @ApiProperty({ description: 'Generated workflow configuration', type: WorkflowResponseDto })
  workflow: WorkflowResponseDto;

  @ApiProperty({ description: 'AI reasoning for the workflow design', type: WorkflowReasoningDto })
  reasoning: WorkflowReasoningDto;
}
```
The AI generate-workflow endpoint response DTO returned to the Dashboard consists of:
- messages: the user prompt and an assistant message with the reasoning message
- conversation status (may be used in the future)
- the generated workflow, following the same DTO as on the workflows controller
- reasoning
```ts
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
import { WorkflowSuggestionType } from './generate-workflow.dto';

export class WorkflowSuggestionDto {
```
dto used for the workflow suggestions; I'll polish the endpoint a bit later
```ts
export const VALID_JSON_OUTPUT_REQUIREMENTS = `- ALWAYS return a valid JSON object directly at the root level.`;
export const STEP_VALID_JSON_ROOT_OUTPUT_REQUIREMENTS = `- ALWAYS return a valid JSON object directly with the key "root" and the value being the JSON object of the step.`;
```
JSON output requirements for the workflow and steps;
steps additionally wrap the output in a "root" key at the main object because of OpenAI limitations on unions and discriminated unions, more details below
```
- Use appropriate formatting and styling only when it is necessary to improve the readability of the content
- Align content with the workflow's purpose and the user's original request
- Keep the content consistent with the other steps in the workflow
- Use appropriate personalization with Liquid templating ({{ subscriber.firstName }}, {{ payload.* }})
```
one of the next steps would be to supply the variables context and the individual steps' variables context, for example an email after a digest
```
- ALWAYS return required properties: subject, editorType, body
- subject: string - Email subject line.
- editorType: "block"
- body: object - Email body in Maily TipTap JSON format
```
forcing these in the system prompt here as the structured output doesn't always return what is needed :D
```ts
};

@Injectable()
export class LlmService implements OnModuleInit {
```
generic llm service that we could reuse
```ts
  throw new ServiceUnavailableException('Failed to generate content. Please try again.');
}

private async callWithRetries<T>(fn: () => Promise<T>, retryCount = 0): Promise<T> {
```
handles retries when the returned object fails validation; there is also a separate internal retry mechanism that handles connection issues to the AI provider
```ts
// Phase 1: Generate workflow metadata and step structure
const workflowMetadata = await this.generateWorkflowMetadata(userPrompt);
const { reasoning, steps: _steps, ...workflowFields } = workflowMetadata;

// Phase 2: Generate step control values based on step type
// Each prompt instructs the AI to wrap the response in { root: { ... } }
const stepsWithControlValues = await this.generateStepControlValues({ workflowMetadata, userPrompt });
```
Workflow generation is split into two phases:
- first we generate the workflow metadata: name, description, severity, tags, and a steps array with type and name
- then we use the workflow metadata to generate the step contents

This is done for a few reasons:
- the structured output of most AI providers has limitations, for example 10 levels of schema nesting, which the Maily editor schema exceeded
- AI output hardly follows a complex output schema
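The two phases described above can be sketched as a small orchestration (all names here are hypothetical stand-ins for the actual use case and LLM calls):

```typescript
interface StepMetadata {
  name: string;
  type: string;
}

interface WorkflowMetadata {
  name: string;
  steps: StepMetadata[];
}

// Phase 1 returns a shallow metadata object; phase 2 makes one focused call
// per step, so each structured-output schema stays small and shallow.
async function generateWorkflow(
  generateMetadata: (prompt: string) => Promise<WorkflowMetadata>,
  generateStepContent: (step: StepMetadata, meta: WorkflowMetadata) => Promise<Record<string, unknown>>,
  userPrompt: string
) {
  const meta = await generateMetadata(userPrompt);
  const steps: Array<StepMetadata & { controlValues: Record<string, unknown> }> = [];

  for (const step of meta.steps) {
    steps.push({ ...step, controlValues: await generateStepContent(step, meta) });
  }

  return { ...meta, steps };
}
```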
```ts
@Injectable()
export class GetSuggestionsUseCase {
  execute(): WorkflowSuggestionDto[] {
```
will be polished and used later
```ts
AI_LLM_PROVIDER: str({ choices: ['openai', 'anthropic'], default: 'openai' }),
AI_LLM_API_KEY: str({ default: '' }),
AI_LLM_MODEL: str({ default: '' }),
AI_LLM_MAX_TOKENS: num({ default: 4096 }),
AI_LLM_TEMPERATURE: num({ default: 0.7 }),
AI_LLM_MAX_RETRIES: num({ default: 3 }),
AI_LLM_SCHEMA_VALIDATION_RETRIES: num({ default: 3 }),
AI_LLM_REQUEST_TIMEOUT_MS: num({ default: 30000 }),
```
api config that we will be controlling
Actionable comments posted: 1
In `@apps/api/src/app/ai/prompts/step.prompt.ts`:
- Around line 236-250: The throttle prompt currently allows `type: "fixed" |
null` which conflicts with the validation schema that expects `"fixed"` or
`"dynamic"`; update the prompt block (the `throttle:` template where
STEP_CRITICAL_OUTPUT_REQUIREMENTS is used) to require `type: "fixed" |
"dynamic"` and, if needed, add brief schema bullets for the `"dynamic"` variant
(e.g., required fields used by dynamic throttles) so generated AI output matches
the Zod schema exactly.
♻️ Duplicate comments (1)
apps/api/src/app/ai/schemas/steps-control.schema.ts (1)
105-121: Prefer a discriminated union for throttle controls.

The `type` field is now non-nullable, so you can use `z.discriminatedUnion` for clearer errors and faster validation.

♻️ Proposed change:

```diff
-const aiThrottleControlSchema = z.union([aiThrottleFixedControlSchema, aiThrottleDynamicControlSchema]);
+const aiThrottleControlSchema = z.discriminatedUnion('type', [
+  aiThrottleFixedControlSchema,
+  aiThrottleDynamicControlSchema,
+]);
```
🧹 Nitpick comments (5)
apps/api/src/app/ai/services/llm.service.ts (3)
10-43: Consider using interfaces instead of types.

Per coding guidelines for backend code in `apps/api/**/*.ts`, interfaces are preferred over types. These exported type aliases could be interfaces.

♻️ Suggested refactor:

```diff
-export type LlmConfig = {
+export interface LlmConfig {
   provider: LlmProvider;
   apiKey: string;
   model: string;
   maxOutputTokens: number;
   temperature: number;
   maxRetries: number;
-};
+}

-export type GenerateTextInput = {
+export interface GenerateTextInput {
   systemPrompt: string;
   userPrompt: string;
   maxOutputTokens?: number;
   temperature?: number;
-};
+}

-export type GenerateObjectInput<T extends z.ZodType> = {
+export interface GenerateObjectInput<T extends z.ZodType> {
   systemPrompt: string;
   userPrompt: string;
   schema: T;
   maxOutputTokens?: number;
   temperature?: number;
-};
+}

-export type ChatStreamInput = {
+export interface ChatStreamInput {
   systemPrompt: string;
   message: string;
   messageHistory: Array<{
     role: 'user' | 'assistant' | 'system';
     content: string;
   }>;
   maxOutputTokens?: number;
   temperature?: number;
-};
+}
```
74-79: Missing validation for parsed environment variables.

`parseInt` and `parseFloat` return `NaN` for invalid values, which could cause unexpected behavior. Consider adding validation or using default values more defensively.

♻️ Suggested defensive parsing:

```diff
+  private parseIntEnv(value: string | undefined, defaultValue: number): number {
+    const parsed = parseInt(value || '', 10);
+    return Number.isNaN(parsed) ? defaultValue : parsed;
+  }
+
+  private parseFloatEnv(value: string | undefined, defaultValue: number): number {
+    const parsed = parseFloat(value || '');
+    return Number.isNaN(parsed) ? defaultValue : parsed;
+  }
+
   private initializeConfig(): void {
     // ... existing code ...
     this.config = {
       provider,
       apiKey,
       model: process.env.AI_LLM_MODEL || this.getDefaultModel(provider),
-      maxOutputTokens: parseInt(process.env.AI_LLM_MAX_OUTPUT_TOKENS || '4096', 10),
-      temperature: parseFloat(process.env.AI_LLM_TEMPERATURE || '0.7'),
-      maxRetries: parseInt(process.env.AI_LLM_MAX_RETRIES || '3', 10),
+      maxOutputTokens: this.parseIntEnv(process.env.AI_LLM_MAX_OUTPUT_TOKENS, 4096),
+      temperature: this.parseFloatEnv(process.env.AI_LLM_TEMPERATURE, 0.7),
+      maxRetries: this.parseIntEnv(process.env.AI_LLM_MAX_RETRIES, 3),
     };
-    this.maxSchemaValidationRetries = parseInt(process.env.AI_LLM_SCHEMA_VALIDATION_RETRIES || '3', 10);
-    this.requestTimeoutMs = parseInt(process.env.AI_LLM_REQUEST_TIMEOUT_MS || '30000', 10);
+    this.maxSchemaValidationRetries = this.parseIntEnv(process.env.AI_LLM_SCHEMA_VALIDATION_RETRIES, 3);
+    this.requestTimeoutMs = this.parseIntEnv(process.env.AI_LLM_REQUEST_TIMEOUT_MS, 30000);
```
289-293: Add a type guard for the `statusCode` comparison.

`error?.statusCode >= 500` could behave unexpectedly if `statusCode` is undefined or not a number (e.g., `undefined >= 500` is `false`, but a string like `"500"` would coerce). Consider adding explicit type checking.

♻️ Suggested fix:

```diff
 const isRetryableError =
   error?.name === 'AbortError' ||
   error?.name === 'AI_NoObjectGeneratedError' ||
-  error?.statusCode >= 500 ||
-  error?.statusCode === 429;
+  (typeof error?.statusCode === 'number' && error.statusCode >= 500) ||
+  error?.statusCode === 429;
```

apps/api/src/app/ai/schemas/steps-control.schema.ts (1)
37-41: Avoid nested nullable for `redirect`.

`aiRedirectSchema` is already nullable, so `aiRedirectSchema.nullable()` creates a redundant nullable wrapper and can bloat the JSON schema.

♻️ Proposed simplification:

```diff
 const aiActionSchema = z
   .object({
     label: z.string(),
-    redirect: aiRedirectSchema.nullable(),
+    redirect: aiRedirectSchema,
   })
   .nullable();
```

apps/api/src/app/ai/prompts/step.prompt.ts (1)
259-272: Use a `function` declaration for the pure helper.

`buildStepPrompt` is pure and can be declared with `function` for consistency with backend TS guidelines. As per coding guidelines, use the `function` keyword for pure functions.

♻️ Suggested change:

```diff
-export const buildStepPrompt = ({
+export function buildStepPrompt({
   step,
   workflowMetadata,
   userPrompt,
 }: {
   step: StepMetadata;
   workflowMetadata: WorkflowMetadata;
   userPrompt: string;
-}): string => {
+}): string {
   const { name: workflowName, description, steps, reasoning } = workflowMetadata;
   const stepsOverview = steps.map((s, i) => `${i + 1}. ${s.name} (${s.type})`).join('\n');

   return `Generate the content for step: **${step.name}** (type: ${step.type})
@@
 ${stepsOverview}`;
-};
+}
```
LaunchDarkly flag references: 1 flag added or modified.
What changed? Why was the change needed?
AI Workflow Generation from the user prompt.
For the details, check the comments on the PR code.
Screenshots
Screen.Recording.2026-01-20.at.13.57.51.mov
Screen.Recording.2026-01-20.at.14.10.29.mov