
feat(ai): upgrade to OpenAI Responses API and enhance template generation #183

Open
izak-fisher wants to merge 2 commits into master from backend/ai/upgrade_openai_responses_api_and_template_generation

Conversation


@izak-fisher commented Mar 20, 2026

Problem

Users could only enter a single line of text when describing their business process for AI template generation, making it hard to provide enough detail for complex workflows. The generated templates lacked structured forms — they produced only task names and descriptions, without input fields, output fields, or the ability to pass data between workflow steps. The system was also limited to older AI models with no way for administrators to choose newer, more capable ones.

Fix / Solution

  • Upgraded the AI integration to use OpenAI's latest Responses API, replacing the legacy Chat Completions API
  • Enhanced template generation to produce full structured templates with kickoff forms, task output fields, and data references between steps
  • Added latest AI models (GPT-4.1 family, GPT-5, o3, o4-mini) with gpt-4.1-mini as the recommended default
  • Changed the description input to a multi-line text area so users can write detailed process descriptions
  • Added a built-in system prompt as fallback, with the option to customize via Django Admin
  • Made the OpenAI organization setting optional (only the API key is required)
  • Added 87 unit tests covering all generation paths and error scenarios

Release notes

AI template generation now produces richer workflow templates with kickoff forms, task output fields, and data flow between steps. Users can enter multi-line descriptions, and administrators can choose from the latest AI models. The system uses OpenAI's Responses API for improved performance and reliability.

Changes

API

  • OpenAiModel: added gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5, gpt-5-mini, o3, o4-mini
  • OpenAIPromptTarget: added GET_TEMPLATE target for structured JSON generation
  • OpenAiPromptQueryset: added target_template() filter method
  • Migration 0004: updated model choices and target field
  • BaseAiService: new methods for Responses API calls, JSON parsing, field normalization
  • OpenAiService._get_template_data: supports JSON template path (primary) and legacy text path (fallback)
  • AnonOpenAiService.get_short_template_data: supports JSON template path with minimal output
  • Default system prompt fallback when no admin-configured prompt exists
  • OPENAI_API_KEY and OPENAI_API_ORG env vars added to all docker-compose files
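As a rough illustration of the request shape the new `BaseAiService` methods build for the Responses API (a minimal sketch only — the endpoint and field names follow OpenAI's public Responses API, but the function name and default values here are hypothetical, not this repository's actual code):

```python
# Hedged sketch: building a Responses API payload as described above.
# `build_payload` and its defaults are illustrative stand-ins.

RESPONSES_API_URL = 'https://api.openai.com/v1/responses'

def build_payload(user_input, model='gpt-4.1-mini', instructions=None,
                  temperature=0.7, top_p=1):
    payload = {
        'model': model,
        'input': user_input,
        'temperature': temperature,
        'top_p': top_p,
    }
    if instructions:
        # The Responses API takes the system prompt via 'instructions',
        # replacing the Chat Completions "system" message role.
        payload['instructions'] = instructions
    return payload

payload = build_payload(
    'Describe a honey harvesting process',
    instructions='You are a workflow template designer.',
)
```

This is the same shape the diffs further down in this PR construct before posting to the API.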

Web-client

  • Template editor page > AI generation modal: replaced single-line input with multi-line textarea (4 rows, resizable), button moved below the textarea

Test cases

API

OpenAI API integration

| Authorization | Test case | Expected result |
|---|---|---|
| N/A | Build request headers with API key only | Headers contain Authorization and Content-Type |
| N/A | Build request headers with API key and org | Headers include OpenAI-Organization |
| API key set | Successful API response with output text | Returns extracted text |
| API key set | Network error during API call | Raises service unavailable error |
| API key set | Empty output in API response | Raises service failed error |
| API key set | HTTP error status from API | Raises service unavailable error |

Template generation (authenticated user)

| Authorization | Test case | Expected result |
|---|---|---|
| No API key | Generate template without API key configured | Returns test/demo template |
| Authenticated | Generate with admin-configured prompt | Uses configured model and instructions |
| Authenticated | Generate without admin prompt | Uses default model (gpt-4.1-mini) and built-in instructions |
| Authenticated | Generate with prompt that has inactive messages | Falls back to built-in instructions |
| Authenticated | API error with admin prompt configured | Error logged to Sentry and re-raised |
| Authenticated | API error without admin prompt | Error re-raised, no logging |
| Authenticated | Generation limit exceeded | Raises limit exceeded error |
| Authenticated | Valid JSON response with tasks and fields | Template parsed with normalized fields, conditions chained |
| Authenticated | Invalid JSON in AI response (with admin prompt) | Error logged and template generation error raised |
| Authenticated | Invalid JSON in AI response (no admin prompt) | Template generation error raised, no logging |
| Authenticated | AI returns empty tasks list (with admin prompt) | "Tasks not found" logged and error raised |
| Authenticated | AI returns empty tasks list (no admin prompt) | Error raised, no logging |
| Authenticated | AI returns template with empty name | Name defaults to user description (truncated) |
| Authenticated | Legacy text-based path with steps prompt | Falls back to pipe-delimited parsing |
| Authenticated | Legacy path returns empty steps | "Steps not found" logged and error raised |
| Authenticated | Successful generation end-to-end | Template data filled, generation count incremented, analytics tracked with success=True |
| Authenticated | Generation fails | Analytics tracked with success=False, error re-raised |

Template generation (anonymous user)

| Authorization | Test case | Expected result |
|---|---|---|
| Anonymous | Generate with GET_TEMPLATE prompt | Returns minimal task data (name, description, conditions only) |
| Anonymous | Generate without admin prompt | Uses default instructions |
| Anonymous | AI returns empty tasks (with prompt) | "Tasks not found" logged and error raised |
| Anonymous | AI returns empty tasks (no prompt) | Error raised, no logging |
| Anonymous | AI response parse error | Template generation error raised |
| Anonymous | Legacy text path, valid response | Returns tasks with conditions |
| Anonymous | Legacy text path, no steps found | "Steps not found" logged and error raised |
| Anonymous | Empty name in response | Defaults to user description (truncated) |
| Anonymous | Tasks in response | Output stripped to minimal fields only |

Field normalization

| Authorization | Test case | Expected result |
|---|---|---|
| N/A | Valid field with all attributes | All keys preserved with correct values |
| N/A | Field with invalid type | Type falls back to "string" |
| N/A | Dropdown field with selections | Selections included with generated identifiers |
| N/A | Field with empty selection values | Empty values skipped |
| N/A | Non-selection field type with selections provided | Selections ignored |
| N/A | Field without identifier | Identifier auto-generated |
| N/A | Field with name over 50 characters | Name truncated |
| N/A | Field with missing keys | Default values applied |
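The normalization behavior these cases exercise can be sketched roughly as follows (illustrative only: the 50-character limit and the fallback to "string" come from the table above, while the function name, the set of valid types, and the identifier format are assumptions, not the service's actual code):

```python
# Hedged sketch of field normalization, assuming a hypothetical set of
# valid field types and a hypothetical identifier format.
import uuid

VALID_TYPES = {'string', 'text', 'number', 'date', 'url',
               'file', 'user', 'radio', 'checkbox', 'dropdown'}

def normalize_field(field_data):
    field_type = field_data.get('type', 'string')
    if field_type not in VALID_TYPES:
        field_type = 'string'  # invalid type falls back to "string"
    return {
        'order': field_data.get('order', 1),
        # `or ''` guards against the AI returning explicit nulls
        'name': (field_data.get('name') or '')[:50],  # truncated to 50 chars
        'type': field_type,
        'is_required': bool(field_data.get('is_required', False)),
        'description': field_data.get('description') or '',
        # identifier auto-generated when missing
        'api_name': field_data.get('api_name') or f'field-{uuid.uuid4().hex[:8]}',
    }
```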

JSON extraction and parsing

| Authorization | Test case | Expected result |
|---|---|---|
| N/A | Plain JSON string | Returned as-is |
| N/A | JSON wrapped in code fences | Extracted from fences |
| N/A | Invalid JSON | JSONDecodeError raised |
| N/A | JSON array instead of object | TypeError raised |
| N/A | Template name over limit | Truncated |
| N/A | Task without output fields | Fields key omitted |
| N/A | Multiple tasks | Conditions chain correctly between tasks |
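The extraction and error behavior in the first four rows can be sketched like this (a sketch under assumptions — the real method name is not shown in this PR excerpt, but the fence-stripping, JSONDecodeError, and TypeError behaviors match the cases above):

```python
# Hedged sketch: extract a JSON object from an AI response that may be
# wrapped in ``` code fences. `extract_json` is an assumed name.
import json
import re

def extract_json(text):
    # JSON wrapped in code fences is extracted from the fences;
    # plain JSON passes through unchanged.
    match = re.search(r'```(?:json)?\s*(.*?)\s*```', text, re.DOTALL)
    if match:
        text = match.group(1)
    data = json.loads(text)  # invalid JSON -> json.JSONDecodeError
    if not isinstance(data, dict):
        raise TypeError('Expected a JSON object, got %s' % type(data).__name__)
    return data
```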

Logging and error tracking

| Authorization | Test case | Expected result |
|---|---|---|
| Authenticated | Error during generation | Sentry message sent with user, account, and error details |
| Authenticated | Log with response text | Sentry message includes response data |
| Authenticated | Log without response text | Sentry message includes null response |
| Anonymous | Error during generation | Sentry message sent with IP and user-agent |
| Anonymous | Log with response text | Sentry message includes response data |

Web-client

| Browser | Device | Authorization | Test scenario | Expected result |
|---|---|---|---|---|
| Chrome | Desktop | Authenticated | Open AI generation modal | Multi-line textarea with 4 rows visible, placeholder text displayed |
| Chrome | Desktop | Authenticated | Type text and press Enter | New line inserted in textarea (form not submitted) |
| Chrome | Desktop | Authenticated | Click Generate button | Generation starts, loader displayed |
| Chrome | Desktop | Authenticated | Resize textarea by dragging bottom edge | Textarea resizes vertically |
| Chrome | Desktop | Authenticated | Open modal, check focus | Textarea receives focus automatically |
| Chrome | Desktop | Authenticated | Close modal during generation | Generation stops, textarea clears |
| Chrome | Desktop | Authenticated | Submit with empty textarea | Form submits with empty description |
| Firefox | Desktop | Authenticated | Open AI generation modal | Textarea renders with correct border and font styling |
| Firefox | Desktop | Authenticated | Type multi-line text and generate | Generation works, textarea value preserved |
| Firefox | Desktop | Authenticated | Click into textarea | Border color changes to blue on focus |
| Safari | Desktop | Authenticated | Open AI generation modal | Textarea renders with consistent styling |
| Safari | Desktop | Authenticated | Type text using Enter key | Newline inserted correctly |
| Chrome | Mobile | Authenticated | Open AI generation modal | Textarea renders full width, modal adjusts for mobile |
| Chrome | Mobile | Authenticated | Type on mobile keyboard | On-screen keyboard works, textarea scrolls |
| Safari | Mobile | Authenticated | Open modal and enter text | Touch interactions work correctly |

…tion

- Migrate from legacy Chat Completions API (openai SDK v0.27) to Responses API via direct HTTP calls
- Add new model choices: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5, gpt-5-mini, o3, o4-mini
- Change default model from gpt-4o to gpt-4.1-mini
- Enhance system prompt to generate templates with kickoff fields, output fields, and variable references
- Preserve AI-provided api_name on fields so {{field-name}} references resolve correctly
- Make OPENAI_API_ORG optional (only OPENAI_API_KEY required)
- Replace single-line input with multi-line textarea in template generator UI
- Add documentation: AI_GENERATION.md, SYSTEM_PROMPT.md, CHANGES.md, RUN_BRANCH.md
- Update tests for new Responses API integration

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Comment thread backend/src/processes/services/templates/ai.py

@cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 3 potential issues.

Autofix Details

Bugbot Autofix prepared fixes for all 3 issues found in the latest run.

  • ✅ Fixed: JSON test response breaks legacy fallback
    • Legacy _get_response fallback now returns a dedicated pipe-delimited _test_steps_response, so offline legacy parsing produces tasks instead of failing on JSON.
  • ✅ Fixed: Prompt penalties ignored in Responses payload
    • Both _get_response and _get_json_response now include presence_penalty and frequency_penalty in the Responses API payload from prompt settings (or sane defaults).
  • ✅ Fixed: Only last prompt messages are used
    • Prompt message handling now preserves ordered active messages by aggregating all system instructions and sending ordered user/assistant inputs instead of overwriting with the last entries.
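In plain-Python terms, the third fix aggregates prompt messages roughly like this (a sketch using plain dicts in place of the Django queryset; the function name and message-dict shape are illustrative, but the behavior — joining all system messages and keeping user/assistant messages in order — matches the fix described above):

```python
# Hedged sketch of ordered prompt-message aggregation: all system
# messages are joined into one instructions string; user/assistant
# messages are preserved in order rather than overwritten.

def split_messages(messages, user_description):
    instructions_parts = []
    inputs = []
    for msg in sorted(messages, key=lambda m: m['order']):
        if not msg.get('is_active', True):
            continue  # deactivated messages must not affect the call
        content = msg['content'].replace(
            '{{ user_description }}', user_description,
        )
        if msg['role'] == 'system':
            instructions_parts.append(content)
        elif msg['role'] in ('user', 'assistant'):
            inputs.append({'role': msg['role'], 'content': content})
    instructions = '\n\n'.join(instructions_parts) or None
    # With no user/assistant messages, fall back to the raw description.
    return instructions, inputs or user_description
```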

Create PR

Or push these changes by commenting:

@cursor push c54102a461
Preview (c54102a461)
diff --git a/backend/src/processes/services/templates/ai.py b/backend/src/processes/services/templates/ai.py
--- a/backend/src/processes/services/templates/ai.py
+++ b/backend/src/processes/services/templates/ai.py
@@ -200,26 +200,20 @@
     ) -> str:
 
         if not settings.OPENAI_API_KEY:
-            return self._test_response()
+            return self._test_steps_response()
 
-        # Build instructions and input from prompt messages
-        instructions = None
-        user_input = user_description
-        for elem in prompt.messages.order_by('order'):
-            content = insert_fields_values_to_text(
-                text=elem.content,
-                fields_values={'user_description': user_description},
-            )
-            if elem.role == 'system':
-                instructions = content
-            elif elem.role == 'user':
-                user_input = content
+        instructions, user_input = self._get_prompt_instructions_and_input(
+            prompt=prompt,
+            user_description=user_description,
+        )
 
         payload = {
             'model': prompt.model,
             'input': user_input,
             'temperature': prompt.temperature,
             'top_p': prompt.top_p,
+            'presence_penalty': prompt.presence_penalty,
+            'frequency_penalty': prompt.frequency_penalty,
         }
         if instructions:
             payload['instructions'] = instructions
@@ -244,19 +238,10 @@
             return self._test_response()
 
         if prompt and prompt.messages.active().exists():
-            instructions = None
-            user_input = user_description
-            for elem in prompt.messages.order_by('order'):
-                content = insert_fields_values_to_text(
-                    text=elem.content,
-                    fields_values={
-                        'user_description': user_description,
-                    },
-                )
-                if elem.role == 'system':
-                    instructions = content
-                elif elem.role == 'user':
-                    user_input = content
+            instructions, user_input = self._get_prompt_instructions_and_input(
+                prompt=prompt,
+                user_description=user_description,
+            )
         else:
             instructions = DEFAULT_TEMPLATE_INSTRUCTION
             user_input = user_description
@@ -268,6 +253,12 @@
             'input': user_input,
             'temperature': prompt.temperature if prompt else 0.7,
             'top_p': prompt.top_p if prompt else 1,
+            'presence_penalty': (
+                prompt.presence_penalty if prompt else 0
+            ),
+            'frequency_penalty': (
+                prompt.frequency_penalty if prompt else 0
+            ),
         }
         if instructions:
             payload['instructions'] = instructions
@@ -283,6 +274,44 @@
                 )
             raise
 
+    def _get_prompt_instructions_and_input(
+        self,
+        prompt: OpenAiPrompt,
+        user_description: str,
+    ):
+        instructions_parts = []
+        input_messages = []
+        for elem in prompt.messages.active().order_by('order'):
+            content = insert_fields_values_to_text(
+                text=elem.content,
+                fields_values={'user_description': user_description},
+            )
+            if elem.role == 'system':
+                instructions_parts.append(content)
+            elif elem.role in ('user', 'assistant'):
+                input_messages.append({
+                    'role': elem.role,
+                    'content': content,
+                })
+        instructions = (
+            '\n\n'.join(instructions_parts)
+            if instructions_parts
+            else None
+        )
+        if not input_messages:
+            return instructions, user_description
+        if len(input_messages) == 1 and input_messages[0]['role'] == 'user':
+            return instructions, input_messages[0]['content']
+        return instructions, input_messages
+
+    def _test_steps_response(self):
+        return '\n'.join((
+            'Inspect hive | Inspect the beehive to determine readiness for honey collection.',
+            'Smoke the bees | Use a smoker to calm the bees before working with the frames.',
+            'Extract honey | Extract honey from the hive frames and collect it for processing.',
+            'Bottle and label | Bottle the honey and label each jar for storage and sale.',
+        ))
+
     def _test_response(self):
         return json.dumps({
             'name': 'Honey Harvesting',

diff --git a/backend/src/processes/tests/test_services/test_templates/test_ai/test_anon_open_ai_service.py b/backend/src/processes/tests/test_services/test_templates/test_ai/test_anon_open_ai_service.py
--- a/backend/src/processes/tests/test_services/test_templates/test_ai/test_anon_open_ai_service.py
+++ b/backend/src/processes/tests/test_services/test_templates/test_ai/test_anon_open_ai_service.py
@@ -107,6 +107,34 @@
     assert task_2_predicate['value'] is None
 
 
+def test_get_short_template_data__legacy_path_without_api_key__ok(mocker):
+
+    # arrange
+    description = 'My lovely business process'
+    create_test_prompt()
+    mocker.patch(
+        'src.processes.services.templates.ai.settings.OPENAI_API_KEY',
+        None,
+    )
+    ip = '168.01.01.8'
+    user_agent = 'Some browser'
+
+    service = AnonOpenAiService(
+        ident=ip,
+        user_agent=user_agent,
+    )
+
+    # act
+    template_data = service.get_short_template_data(
+        user_description=description,
+    )
+
+    # assert
+    assert template_data['name'] == description
+    assert len(template_data['tasks']) == 4
+    assert template_data['tasks'][0]['name'] == 'Inspect hive'
+
+
 # === JSON path with GET_TEMPLATE prompt ===
 
 

diff --git a/backend/src/processes/tests/test_services/test_templates/test_ai/test_open_ai_service.py b/backend/src/processes/tests/test_services/test_templates/test_ai/test_open_ai_service.py
--- a/backend/src/processes/tests/test_services/test_templates/test_ai/test_open_ai_service.py
+++ b/backend/src/processes/tests/test_services/test_templates/test_ai/test_open_ai_service.py
@@ -66,7 +66,7 @@
     test_response = mocker.Mock()
     test_response_mock = mocker.patch(
         'src.processes.services.templates.'
-        'ai.OpenAiService._test_response',
+        'ai.OpenAiService._test_steps_response',
         return_value=test_response,
     )
 
@@ -121,8 +121,64 @@
     # assert
     assert response == ai_response
     call_api_mock.assert_called_once()
+    payload = call_api_mock.call_args[0][0]
+    assert payload['presence_penalty'] == prompt.presence_penalty
+    assert payload['frequency_penalty'] == prompt.frequency_penalty
 
 
+def test_get_response__multiple_messages__uses_ordered_input(mocker):
+
+    # arrange
+    description = 'some description'
+    prompt = create_test_prompt(messages_count=4)
+    message_1 = prompt.messages.filter(order=1).first()
+    message_1.role = OpenAIRole.SYSTEM
+    message_1.content = 'System 1'
+    message_1.save()
+    message_2 = prompt.messages.filter(order=2).first()
+    message_2.role = OpenAIRole.USER
+    message_2.content = 'User 1 {{ user_description }}'
+    message_2.save()
+    message_3 = prompt.messages.filter(order=3).first()
+    message_3.role = OpenAIRole.ASSISTANT
+    message_3.content = 'Assistant example'
+    message_3.save()
+    message_4 = prompt.messages.filter(order=4).first()
+    message_4.role = OpenAIRole.SYSTEM
+    message_4.content = 'System 2'
+    message_4.save()
+
+    mocker.patch(
+        'src.processes.services.templates.ai.settings.OPENAI_API_KEY',
+        'some_key',
+    )
+    user = create_test_user()
+    service = OpenAiService(
+        ident=user.id,
+        user=user,
+        auth_type=AuthTokenType.USER,
+    )
+    call_api_mock = mocker.patch(
+        'src.processes.services.templates.'
+        'ai.BaseAiService._call_responses_api',
+        return_value='ok',
+    )
+
+    # act
+    service._get_response(
+        user_description=description,
+        prompt=prompt,
+    )
+
+    # assert
+    payload = call_api_mock.call_args[0][0]
+    assert payload['instructions'] == 'System 1\n\nSystem 2'
+    assert payload['input'] == [
+        {'role': OpenAIRole.USER, 'content': 'User 1 some description'},
+        {'role': OpenAIRole.ASSISTANT, 'content': 'Assistant example'},
+    ]
+
+
 def test_get_response__api_error__raise_exception(mocker):
 
     # arrange
@@ -235,8 +291,64 @@
     assert payload['model'] == 'gpt-4.1-mini'
     assert 'instructions' in payload
     assert payload['input'] == description
+    assert payload['presence_penalty'] == 0
+    assert payload['frequency_penalty'] == 0
 
 
+def test_get_json_response__prompt_messages__uses_ordered_input(mocker):
+
+    # arrange
+    description = 'some description'
+    prompt = create_test_prompt(messages_count=3)
+    prompt.presence_penalty = 0.7
+    prompt.frequency_penalty = -0.4
+    prompt.save()
+    message_1 = prompt.messages.filter(order=1).first()
+    message_1.role = OpenAIRole.SYSTEM
+    message_1.content = 'System message'
+    message_1.save()
+    message_2 = prompt.messages.filter(order=2).first()
+    message_2.role = OpenAIRole.ASSISTANT
+    message_2.content = 'Assistant example'
+    message_2.save()
+    message_3 = prompt.messages.filter(order=3).first()
+    message_3.role = OpenAIRole.USER
+    message_3.content = 'User asks: {{ user_description }}'
+    message_3.save()
+
+    mocker.patch(
+        'src.processes.services.templates.ai.settings.OPENAI_API_KEY',
+        'some_key',
+    )
+    user = create_test_user()
+    service = OpenAiService(
+        ident=user.id,
+        user=user,
+        auth_type=AuthTokenType.USER,
+    )
+    call_api_mock = mocker.patch(
+        'src.processes.services.templates.'
+        'ai.BaseAiService._call_responses_api',
+        return_value='{"name":"T","tasks":[]}',
+    )
+
+    # act
+    service._get_json_response(
+        user_description=description,
+        prompt=prompt,
+    )
+
+    # assert
+    payload = call_api_mock.call_args[0][0]
+    assert payload['instructions'] == 'System message'
+    assert payload['input'] == [
+        {'role': OpenAIRole.ASSISTANT, 'content': 'Assistant example'},
+        {'role': OpenAIRole.USER, 'content': 'User asks: some description'},
+    ]
+    assert payload['presence_penalty'] == prompt.presence_penalty
+    assert payload['frequency_penalty'] == prompt.frequency_penalty
+
+
 def test_get_json_response__api_error__raise_exception(mocker):
 
     # arrange

This Bugbot Autofix run was free. To enable autofix for future PRs, go to the Cursor dashboard.

Comment thread backend/src/processes/services/templates/ai.py
if elem.role == 'system':
instructions = content
elif elem.role == 'user':
user_input = content


Only last prompt messages are used

Medium Severity

Message iteration overwrites instructions and user_input, so only the last system and last user entries are sent. Earlier messages and all assistant examples are dropped, breaking multi-message prompt configurations.

Additional Locations (1)

from django.db import migrations, models


class Migration(migrations.Migration):


Merge the 0004 and 0005 migrations into one

Comment thread backend/CLAUDE.md Outdated
@@ -0,0 +1,99 @@
# Backend — Django REST API



Unrelated changes, remove backend/CLAUDE.md

def test_get_short_template_data__ok(mocker):
# === Legacy text path ===




Linter checks are not passing


UserModel = get_user_model()

DEFAULT_TEMPLATE_INSTRUCTION = """You are a workflow template designer.


Why is the prompt hardcoded?

FieldType.NUMBER,
}

SELECTION_FIELD_TYPES = FieldType.TYPES_WITH_SELECTIONS


Duplicate


Given a user description of a business process, generate a workflow template JSON as per the structure above."""

VALID_FIELD_TYPES = {


Is this really needed? Just use FieldType.

Comment thread docker-compose.yml
EMAIL_TIMEOUT: ${EMAIL_TIMEOUT:-}
AI: ${AI:-no}
AI_PROVIDER: ${AI_PROVIDER:-}
OPENAI_API_KEY: ${OPENAI_API_KEY:-}


Add the new variables to each docker-compose file in the repository

Comment thread AI_GENERATION.md Outdated
## How it works

1. A user enters a business process description in the template editor
2. The backend sends the description to the OpenAI Responses API along with a system prompt (instructions)


Remove all trash files from the root directory

- Remove doc files from git tracking, add to .gitignore
- Merge migrations 0004 and 0005 into single migration
- Add default field to _normalize_field
- Re-add presence_penalty and frequency_penalty to API payloads
- Add dict validation in _parse_template_from_json
- Fix _get_response to raise when no API key (legacy text path)
- Replace VALID_FIELD_TYPES with FieldType.CHOICES
- Add OPENAI vars to all docker-compose files
- Fix linter violations
- Add 87 unit tests covering BaseAiService, OpenAiService, AnonOpenAiService

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

There are 3 total unresolved issues (including 1 from previous review).


Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

if elem.role == 'system':
instructions = content
elif elem.role == 'user':
user_input = content


Inactive prompt messages still affect API calls

High Severity

Message iteration uses prompt.messages.order_by('order') instead of prompt.messages.active(). Deactivated prompt messages are still included and can override instructions/input, so admin disabling a message has no effect.

Additional Locations (1)


task_fields = []
for field_data in raw_task.get('fields', []):
task_fields.append(self._normalize_field(field_data))


JSON parser crashes on malformed response shapes

Medium Severity

_parse_template_from_json assumes kickoff, tasks, and each task are dict/list shapes and calls .get() unguarded. Non-conforming AI JSON raises AttributeError, which is not caught by caller error handling, causing an unhandled server error instead of OpenAiTemplateStepsNotExist.
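A defensive version of that parse step might look like the following (a sketch only: `OpenAiTemplateStepsNotExist` is the exception named in the report and is stubbed here as a placeholder; the function name, name-length limit, and return shape are assumptions):

```python
# Hedged sketch: guard every shape assumption so malformed AI JSON
# surfaces as the domain error callers already handle, rather than
# an unhandled AttributeError.

class OpenAiTemplateStepsNotExist(Exception):
    """Placeholder stub for the service's domain exception."""

def parse_template(data, user_description):
    if not isinstance(data, dict):
        raise OpenAiTemplateStepsNotExist()
    tasks = data.get('tasks')
    if not isinstance(tasks, list) or not tasks:
        raise OpenAiTemplateStepsNotExist()
    kickoff = data.get('kickoff')
    if not isinstance(kickoff, dict):
        kickoff = {}  # tolerate a missing or malformed kickoff block
    clean_tasks = [t for t in tasks if isinstance(t, dict)]
    if not clean_tasks:
        raise OpenAiTemplateStepsNotExist()
    return {
        # empty name defaults to the user description; 120 is an
        # assumed truncation limit for illustration
        'name': (data.get('name') or user_description)[:120],
        'kickoff': kickoff,
        'tasks': clean_tasks,
    }
```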


Comment on lines +451 to +462
def _normalize_field(field_data: dict) -> dict:
field_type = field_data.get('type', FieldType.STRING)
if field_type not in VALID_FIELD_TYPES:
field_type = FieldType.STRING
normalized = {
'order': field_data.get('order', 1),
'name': field_data.get('name', '')[:50],
'type': field_type,
'is_required': bool(field_data.get('is_required', False)),
'description': field_data.get('description', ''),
'default': field_data.get('default', ''),
'api_name': field_data.get('api_name') or create_api_name('field'),


🟢 Low templates/ai.py:451

When the AI returns {"name": null}, field_data.get('name', '') returns None instead of the default empty string, and None[:50] raises a TypeError. The same bug exists for description and default.

     @staticmethod
     def _normalize_field(field_data: dict) -> dict:
         field_type = field_data.get('type', FieldType.STRING)
         if field_type not in VALID_FIELD_TYPES:
             field_type = FieldType.STRING
         normalized = {
             'order': field_data.get('order', 1),
-            'name': field_data.get('name', '')[:50],
+            'name': (field_data.get('name') or '')[:50],
             'type': field_type,
             'is_required': bool(field_data.get('is_required', False)),
-            'description': field_data.get('description', ''),
-            'default': field_data.get('default', ''),
+            'description': field_data.get('description') or '',
+            'default': field_data.get('default') or '',
             'api_name': field_data.get('api_name') or create_api_name('field'),
         }
Also found in 1 other location(s)

backend/src/processes/tests/test_services/test_templates/test_ai/test_anon_open_ai_service.py:638

The test will fail because the mock JSON response doesn't include api_name in the tasks, but the production code get_short_template_data does task['api_name'] directly (not .get()), which will raise KeyError. The test expects task_1['api_name'] to be truthy (line 687), but the service will raise an exception before returning any data.

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file backend/src/processes/services/templates/ai.py around lines 451-462:

When the AI returns `{"name": null}`, `field_data.get('name', '')` returns `None` instead of the default empty string, and `None[:50]` raises a `TypeError`. The same bug exists for `description` and `default`.

Evidence trail:
backend/src/processes/services/templates/ai.py lines 451-462 (REVIEWED_COMMIT) - `_normalize_field` method showing `field_data.get('name', '')[:50]` at line 457, `description` at line 459, `default` at line 460. Lines 485-492 show `field_data` comes directly from parsed AI JSON with no null value validation.

Also found in 1 other location(s):
- backend/src/processes/tests/test_services/test_templates/test_ai/test_anon_open_ai_service.py:638 -- The test will fail because the mock JSON response doesn't include `api_name` in the tasks, but the production code `get_short_template_data` does `task['api_name']` directly (not `.get()`), which will raise `KeyError`. The test expects `task_1['api_name']` to be truthy (line 687), but the service will raise an exception before returning any data.

