
Conversation

@martinzerty
Contributor

@martinzerty martinzerty commented Dec 15, 2025

Summary by CodeRabbit

  • New Features

    • Mapping inference now records multiple source assessments per requirement (id, name, source framework, coverage, score, annotations) and supports multi-hop composition/propagation.
    • Requirement assessments now expose name, id and source framework.
    • New "focused feared event" field for Strategic Scenarios with UI controls, list/detail column, conditional badge, and import/export support.
    • Added translation keys for focused feared event.
  • Bug Fixes

    • Mapping inference is persisted during imports/merges to prevent loss.

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai
Contributor

coderabbitai bot commented Dec 15, 2025

📝 Walkthrough

Walkthrough

map_audit_results now accepts hop_index and path and builds per-target, per-assessment mapping_inference objects propagated across multi‑hop inferences; MappingEngine.m2m_fields no longer exposes mapping_inference. load_audit_fields adds name, id, and source_framework to requirement assessments; views persist mapping_inference; frontend renders multiple source assessments.

Changes

  • Mapping engine core (backend/core/mappings/engine.py)
    Removed mapping_inference from MappingEngine.m2m_fields. map_audit_results signature gains hop_index and path; constructs per-target mapping_inference (result, annotation, hop_index, path) and populates source_requirement_assessments (id, urn, str, coverage, score, is_scored, source_framework, used_mapping_set); composes multi‑hop inferences. load_audit_fields now injects name, id (string) and source_framework (id/name) into each requirement assessment.
  • Views: persist inference (backend/core/views.py)
    Best-inference application now copies per-requirement mapping_inference into target assessments and includes mapping_inference among fields updated in bulk_update so it is persisted.
  • Frontend: multi-source inference UI (frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/+page.svelte, frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/edit/+page.svelte)
    Migrated from singular mappingInference.sourceRequirementAssessment to mappingInference.sourceRequirementAssessments (object). Derived stores and templates iterate Object.entries(...), render multiple per-source items (framework, coverage, optional score, annotation), use per-item IDs/links, and guard implementation_groups rendering.
  • ebios_rm: focused feared event (backend/ebios_rm/migrations/0021_strategicscenario_focused_feared_event.py, backend/ebios_rm/models.py, backend/ebios_rm/serializers.py)
    Adds focused_feared_event FK to StrategicScenario (null/blank, SET_NULL, related_name focused_strategic_scenarios). Model: get_gravity_display now prefers focused_feared_event.gravity when set. Serializers: cross-field validation ensures the focused event belongs to the selected ro_to_couple; read/import serializers expose the field.
  • Frontend: focused feared event surfaces (frontend/src/lib/components/Forms/ModelForm/StrategicScenarioForm.svelte, frontend/src/lib/utils/crud.ts, frontend/src/lib/utils/schemas.ts, frontend/src/lib/utils/table.ts, frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.server.ts, frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.svelte, frontend/messages/en.json, frontend/messages/fr.json)
    Adds focused_feared_event schema field, CRUD mappings, list/detail column and translations; form adds dependent AutocompleteSelect keyed on ro_to_couple; server load narrows feared events when focused and exposes isFocusedOnFearedEvent; UI shows focused badge in strategic scenario detail.

Sequence Diagram(s)

sequenceDiagram
    participant FE as Frontend
    participant API as Server (API)
    participant Engine as MappingEngine
    participant DB as Database

    Note over FE,API: Request assessment view or apply mapping
    FE->>API: GET /requirement-assessments/:id
    API->>Engine: load_audit_fields(audit)
    Engine->>DB: fetch audits & requirement_assessments
    DB-->>Engine: assessments payload
    Engine-->>API: audit payload (adds name,id,source_framework)

    Note over FE,API: Apply best mapping (single or multi‑hop)
    FE->>API: POST /apply-mapping (best_results, path)
    API->>Engine: best_mapping_inferences(best_results, path, hop_index=1...)
    Engine->>Engine: map_audit_results(hop_index, path) — build per-assessment mapping_inference (result, path, hop_index, source_requirement_assessments...)
    Engine->>DB: update requirement_assessments (include mapping_inference)
    DB-->>Engine: write OK
    Engine-->>API: updated assessment
    API-->>FE: 200 + assessment with per-assessment mapping_inference

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I hopped through paths and stitched each clue,
many tiny sources now show through.
Names and frameworks lined in rows I bring,
annotations whispering what they sing.
Hop on — the mappings spring!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 36.36%, which is insufficient; the required threshold is 80.00%. Resolution: you can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check (✅ Passed): The PR title 'fix: mapping inference internal structure' accurately reflects the main changes, which involve restructuring mapping_inference from a public m2m field to a per-assessment-level structure with updated internal organization across backend and frontend layers.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch CA-1444-restore-mapping-inference-explanations

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
backend/core/mappings/engine.py (1)

549-555: Prefer isinstance() for type checking.

While the UUID-to-string conversion is necessary for handling the "id" field, consider using isinstance(attr, UUID) instead of type(attr) is UUID to follow Python best practices for type checking.

Apply this diff:

-                if (
-                    type(attr) is UUID
-                ):  # Handle the case of the RA's id to avoid error when merging m2m_fields
+                if isinstance(attr, UUID):
                     attr = str(attr)

The comment can also be simplified since the primary purpose is UUID serialization rather than just merging.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71d44e3 and cee2fb7.

📒 Files selected for processing (2)
  • backend/core/mappings/engine.py (4 hunks)
  • backend/core/views.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (python)
🔇 Additional comments (4)
backend/core/views.py (1)

6710-6715: Correctly propagating and persisting mapping_inference

Including mapping_inference in the copied fields loop and bulk_update list is consistent with how result, status, and observation are handled and should correctly persist the new inference metadata for mapped requirement assessments. Assuming the RequirementAssessment model defines a mapping_inference DB field of a serializable type (e.g. JSON), this looks sound.

Also applies to: 6729-6729
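
For context, a minimal sketch of how such a JSON field typically rides along in Django's bulk_update; the import path, model access, and variable names below are assumptions for illustration rather than the actual code in views.py:

# Hypothetical sketch (assumed import path and variable names).
from core.models import RequirementAssessment

def persist_inferred_fields(requirement_assessments_to_update):
    # bulk_update only writes the columns named in `fields`; leaving
    # mapping_inference out of this list would silently drop the copied data.
    RequirementAssessment.objects.bulk_update(
        requirement_assessments_to_update,
        fields=[
            "result",
            "status",
            "observation",
            "score",
            "is_scored",
            "documentation_score",
            "mapping_inference",
        ],
    )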

backend/core/mappings/engine.py (3)

2-2: LGTM!

The UUID import is necessary to support the type checking added later in the load_audit_fields method.


539-541: LGTM!

Adding the "name" field with the string representation of the requirement assessment is a clean solution for providing human-readable identification in the mapping_inference structure.


39-44: This review comment is factually incorrect and should be rejected.

"mapping_inference" was never in m2m_fields—it is a JSONField on the RequirementAssessment model, not a many-to-many relationship. The code shows "mapping_inference" being used as a regular field in serializers, helpers, and views, never as a collection to be merged.

Adding "id" (a UUID primary key) to m2m_fields is problematic. The m2m_fields list is used by the merging logic (lines 298–308, 347–353, 396–408) to combine collections from colliding requirement assessments. Primary keys should never be merged this way. The UUID-to-string conversion at lines 550–552 appears to be a workaround masking a design issue rather than proper handling. Primary key identifiers are immutable and should not participate in collection merging during mapping operations.

Likely an incorrect or invalid review comment.
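
To make the collection-versus-identifier distinction concrete, here is a tiny illustrative sketch of the union-style merge that m2m-type fields typically undergo on collision; the helper name and field values are assumptions, not the engine's actual merging code:

# Illustrative only: how collection-valued fields (e.g. applied_controls) are
# typically combined when two assessments collide on the same target.
def merge_m2m(existing, incoming):
    return sorted(set(existing) | set(incoming))

print(merge_m2m(["ctrl-a"], ["ctrl-b", "ctrl-a"]))  # ['ctrl-a', 'ctrl-b']
# A UUID primary key is a single identity, not a collection, so putting "id"
# through this kind of merge has no meaningful result, which is the concern above.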

Comment on lines 440 to 456
            target_audit["requirement_assessments"][dst]["mapping_inference"] = {
                "result": target_audit["requirement_assessments"][dst].get(
                    "result", ""
                ),
                "source_requirement_assessment": {
                    "id": source_audit["requirement_assessments"][src].get("id"),
                    "str": source_audit["requirement_assessments"][src].get("name"),
                    "coverage": "full" if rel in ("equal", "superset") else "partial",
                    "score": source_audit["requirement_assessments"][src].get("score"),
                    "is_scored": source_audit["requirement_assessments"][src].get(
                        "is_scored"
                    ),
                },
                "annotation": source_audit["requirement_assessments"][src]
                .get("mapping_inference", {})
                .get("annotation", ""),
            }
Contributor

⚠️ Potential issue | 🔴 Critical

Critical: Potential KeyError when accessing source assessment fields.

The mapping_inference block executes for every mapping in the loop, regardless of whether the if or elif conditions at lines 281 and 368 were met. This creates two issues:

  1. KeyError risk: If src is not in source_audit["requirement_assessments"] (i.e., neither condition was met), line 445 will raise a KeyError when trying to access source_audit["requirement_assessments"][src].get("id").

  2. Logic issue: Even if src exists but the relationship type is unexpected, this creates a mapping_inference entry for a requirement that wasn't actually mapped, polluting the target audit with spurious entries.

The mapping_inference block should only execute when a mapping actually occurs.

Apply this diff to fix the issue by checking that the source exists before creating the inference:

+            # Only add mapping_inference if src was actually in source_audit
+            if src in source_audit["requirement_assessments"]:
             target_audit["requirement_assessments"][dst]["mapping_inference"] = {
                 "result": target_audit["requirement_assessments"][dst].get(
                     "result", ""
                 ),
                 "source_requirement_assessment": {
                     "id": source_audit["requirement_assessments"][src].get("id"),
                     "str": source_audit["requirement_assessments"][src].get("name"),
                     "coverage": "full" if rel in ("equal", "superset") else "partial",
                     "score": source_audit["requirement_assessments"][src].get("score"),
                     "is_scored": source_audit["requirement_assessments"][src].get(
                         "is_scored"
                     ),
                 },
                 "annotation": source_audit["requirement_assessments"][src]
                 .get("mapping_inference", {})
                 .get("annotation", ""),
             }

Alternatively, if mapping_inference should only be added when a mapping is created (recommended), move this block inside both the if and elif blocks to avoid duplication.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backend/core/mappings/engine.py around lines 440 to 456: the
mapping_inference block is executed unconditionally causing KeyError and
spurious entries when src is not present or no mapping was created; only create
mapping_inference when a mapping actually occurred by either (A) moving this
entire block inside the existing if and elif branches that create mappings so it
runs only when src is present and mapping is made, or (B) guard the block with
an explicit existence check (if src in source_audit["requirement_assessments"]
and dst in target_audit["requirement_assessments"]) before accessing source
fields and adding mapping_inference.

@martinzerty martinzerty changed the title from "feat: add ID field and repare the str field" to "feat: add ID field and repare the str field in mapping_inference in mapping engine" on Dec 16, 2025
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (2)
backend/core/mappings/engine.py (2)

405-452: Critical: mapping_inference added unconditionally for all mappings.

This block executes for every mapping in the loop, even when neither the if condition (lines 286-289) nor the elif condition (lines 358-361) was met. This creates two problems:

  1. Spurious entries: Target requirements get mapping_inference entries even when no actual mapping occurred (e.g., when the relationship type is unhandled or the source requirement wasn't in the audit).

  2. Data pollution: The target audit contains inference metadata for requirements that weren't mapped, which could mislead consumers of this data.

The mapping_inference block should only execute when a mapping was actually created.

🔎 Move this block inside both mapping branches:

Move lines 405-452 inside both the if block (after line 356) and the elif block (after line 403) to ensure mapping_inference is only added when an actual mapping occurs. Alternatively, guard the entire block with:

+        # Only add mapping_inference if a mapping was created
+        if src in source_audit["requirement_assessments"] and dst in target_audit["requirement_assessments"]:
             target_assessment["mapping_inference"] = {
                 ...
             }

This is the same issue flagged in the past review comment.


284-284: Critical: KeyError when source requirement not in audit.

Line 284 accesses source_audit["requirement_assessments"][src] before checking whether src exists in the dictionary. This will raise a KeyError when a mapping references a source requirement that isn't present in the source audit (e.g., non-assessable requirements, requirements filtered out, or data inconsistencies).

The existence checks at lines 288 and 360 come too late—the KeyError occurs before reaching them.

🔎 Apply this diff to guard the access:
-        src_assessment = source_audit["requirement_assessments"][src]
         target_assessment = target_audit["requirement_assessments"][dst]
+        
+        # Skip if source requirement not in audit
+        if src not in source_audit["requirement_assessments"]:
+            continue
+            
+        src_assessment = source_audit["requirement_assessments"][src]
🧹 Nitpick comments (1)
backend/core/mappings/engine.py (1)

2-2: Remove unused import.

The UUID import is not used anywhere in this file.

🔎 Apply this diff to remove the unused import:
-from uuid import UUID
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cee2fb7 and 2785d91.

📒 Files selected for processing (1)
  • backend/core/mappings/engine.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/core/mappings/engine.py (2)
backend/core/views.py (4)
  • get (442-449)
  • get (5308-5309)
  • frameworks (6174-6208)
  • result (7507-7508)
backend/core/models.py (2)
  • frameworks (1398-1399)
  • frameworks (1464-1465)
🪛 GitHub Actions: Backend Linters
backend/core/mappings/engine.py

[error] 1-1: ruff format --check . failed. 1 file would be reformatted: core/mappings/engine.py. Run 'ruff format' to fix code style issues.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build (3.12)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: test (3.12)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: Analyze (python)
🔇 Additional comments (3)
backend/core/mappings/engine.py (3)

260-266: LGTM: Signature extended for multi-hop inference.

The addition of hop_index and path parameters enables tracking inference chains across multiple framework mappings.


536-547: LGTM: Multi-hop tracking implementation.

The loop correctly tracks hop index and propagates the path through each mapping step. Note: the type mismatch with hop_index has been flagged separately.
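
To make the multi-hop flow concrete, a minimal sketch of what such a driver loop could look like, assuming hop_index is an int as recommended elsewhere in this review; the helper name, the mapping_sets_along_path structure, and the target_framework_urn key are illustrative assumptions rather than the engine's actual API:

def apply_mapping_path(engine, source_audit, mapping_sets_along_path, source_framework_urn):
    """Illustrative driver: compose map_audit_results across a multi-hop path."""
    hop_index = 1  # assumed int, per the type-mismatch fix flagged separately
    path = [source_framework_urn]
    current_audit = source_audit
    for mapping_set in mapping_sets_along_path:
        # Each hop extends the framework path and feeds the previous result forward.
        path = path + [mapping_set["target_framework_urn"]]  # key name is an assumption
        current_audit = engine.map_audit_results(
            current_audit, mapping_set, hop_index=hop_index, path=path
        )
        hop_index += 1
    return current_audit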


578-587: LGTM: Metadata additions for requirement assessments.

The additions of name, id, and source_framework provide useful context for mapping inference tracking. Converting IDs to strings ensures JSON serialization compatibility.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (4)
backend/core/mappings/engine.py (4)

1-1: Verify that formatting issues are resolved.

Previous reviews flagged that this file failed ruff format --check. Please ensure you've run ruff format core/mappings/engine.py to fix any formatting issues.

#!/bin/bash
# Run ruff format to check and fix formatting issues
cd backend && ruff format --check core/mappings/engine.py

264-264: Critical: Fix type mismatch for hop_index parameter.

hop_index is declared as str but is compared to integer 1 at line 430 and initialized as integer 1 at line 527. This comparison will always fail since Python strings never equal integers.

🔎 Proposed fix
-        hop_index: str,
+        hop_index: int,

322-322: Critical: Assignment doesn't update target_audit.

Line 322 assigns target_assessment = src_assessment.copy(), creating a new dictionary that is never stored back into target_audit["requirement_assessments"][dst]. All subsequent mutations to target_assessment (including the mapping_inference added at lines 407-482) will be lost and won't appear in the returned target_audit.

🔎 Proposed fix
                     else:
-                        target_assessment = src_assessment.copy()
+                        target_audit["requirement_assessments"][dst] = src_assessment.copy()
+                        target_assessment = target_audit["requirement_assessments"][dst]

474-476: Remove debug print statement.

Debug print statements should not be committed to production code.

🔎 Proposed fix
                         == 0
                     ):
-                        print(
-                            f"no source_requirement_assessments for {mif_id} in \n{src_assessment}"
-                        )
🧹 Nitpick comments (4)
frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/+page.svelte (2)

228-269: Unused source_urn variable and unreachable annotation block.

  1. The source_urn variable from destructuring is never used. Consider prefixing with underscore to indicate intentional non-use.
  2. The annotation block (lines 262-267) checks mappingInference.annotation, but this is hardcoded to '' on line 38, so this block will never render.
🔎 Proposed fix
-{#each Object.entries(mappingInference.sourceRequirementAssessments) as [source_urn, source_requirement_assessment]}
+{#each Object.entries(mappingInference.sourceRequirementAssessments) as [_source_urn, source_requirement_assessment]}

If the annotation should be shown per-item, consider using source_requirement_assessment.annotation instead:

-{#if mappingInference.annotation}
+{#if source_requirement_assessment.annotation}
 	<p class="whitespace-pre-line py-1">
 		<span class="italic">{m.annotationColon()}</span>
-		{mappingInference.annotation}
+		{source_requirement_assessment.annotation}
 	</p>
 {/if}

248-252: Inconsistent translation for coverage field.

This file uses toCamelCase(source_requirement_assessment.coverage) for the coverage badge (line 250), but the edit page at line 425 uses safeTranslate(source_requirement_assessment.coverage) directly without toCamelCase. Consider aligning both files for consistency.

frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/edit/+page.svelte (1)

412-461: Same issues: unused variable and unreachable annotation block.

  1. source_urn from destructuring is unused - consider prefixing with underscore.
  2. The annotation block (lines 454-459) will never render since mappingInference.annotation is hardcoded to ''.
  3. Minor: The display order of "Coverage" and "Framework" differs from the view page (view shows Framework → Coverage, edit shows Coverage → Framework).
🔎 Proposed fix
-{#each Object.entries(mappingInference.sourceRequirementAssessments) as [source_urn, source_requirement_assessment]}
+{#each Object.entries(mappingInference.sourceRequirementAssessments) as [_source_urn, source_requirement_assessment]}

For the annotation, if it should be per-item:

-{#if mappingInference.annotation}
+{#if source_requirement_assessment.annotation}
 	<p class="whitespace-pre-line py-1">
 		<span class="italic">{m.annotationColon()}</span>
-		{mappingInference.annotation}
+		{source_requirement_assessment.annotation}
 	</p>
 {/if}
backend/core/mappings/engine.py (1)

2-2: Remove unused import.

The UUID import is not used anywhere in this file.

🔎 Proposed fix
-from uuid import UUID
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2785d91 and 3c00f15.

📒 Files selected for processing (3)
  • backend/core/mappings/engine.py (6 hunks)
  • frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/+page.svelte (3 hunks)
  • frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/edit/+page.svelte (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (python)
🔇 Additional comments (6)
frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/+page.svelte (1)

96-105: LGTM!

The implementation groups check correctly guards against both undefined/null values and empty arrays.

frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/edit/+page.svelte (1)

413-421: LGTM!

The link generation correctly uses source_requirement_assessment.id for the URL and source_requirement_assessment.str for the display text.

backend/core/mappings/engine.py (4)

39-43: LGTM!

Removing mapping_inference from m2m_fields aligns with the PR's architectural change to handle mapping inference differently.


407-482: Warning: This mapping_inference logic depends on fixes to critical issues.

The mapping_inference block operates on target_assessment, but line 322 creates a new dictionary without storing it back into target_audit. This means all the mapping inference data built here will be lost. Additionally, src_assessment accessed at line 286 can raise a KeyError.

Ensure the critical issues at lines 286 and 322 are fixed before this logic can function correctly.


527-539: Multi-hop logic looks correct, but fix the type mismatch.

The multi-hop traversal logic correctly increments hop_index and passes the path to map_audit_results. However, hop_index is initialized as an integer here but the function signature declares it as str (see comment on line 264).


569-580: LGTM!

Adding name, id, and source_framework fields enriches the requirement assessment data structure and aligns with the PR objectives to add ID field and repair the str field in mapping_inference.

Comment on lines 34 to 39
let mappingInference = $derived({
	sourceRequirementAssessment:
		data.requirementAssessment.mapping_inference.source_requirement_assessment,
	sourceRequirementAssessments:
		data.requirementAssessment.mapping_inference.source_requirement_assessments,
	result: data.requirementAssessment.mapping_inference.result,
	annotation: ''
});
Contributor

⚠️ Potential issue | 🟡 Minor

Add null safety for sourceRequirementAssessments.

If data.requirementAssessment.mapping_inference or source_requirement_assessments is null/undefined, accessing it directly will cause a runtime error. The iteration on line 228 using Object.entries() will also fail if the value is not an object.

🔎 Proposed fix
 let mappingInference = $derived({
 	sourceRequirementAssessments:
-		data.requirementAssessment.mapping_inference.source_requirement_assessments,
+		data.requirementAssessment.mapping_inference?.source_requirement_assessments ?? {},
 	result: data.requirementAssessment.mapping_inference.result,
 	annotation: ''
 });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let mappingInference = $derived({
	sourceRequirementAssessment:
		data.requirementAssessment.mapping_inference.source_requirement_assessment,
	sourceRequirementAssessments:
		data.requirementAssessment.mapping_inference.source_requirement_assessments,
	result: data.requirementAssessment.mapping_inference.result,
	annotation: ''
});
let mappingInference = $derived({
	sourceRequirementAssessments:
		data.requirementAssessment.mapping_inference?.source_requirement_assessments ?? {},
	result: data.requirementAssessment.mapping_inference.result,
	annotation: ''
});
🤖 Prompt for AI Agents
In
frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/+page.svelte
around lines 34–39, the derived store sets sourceRequirementAssessments directly
from data.requirementAssessment.mapping_inference.source_requirement_assessments
which can be null/undefined and will break Object.entries later; change the
assignment to use optional chaining and a safe default (e.g., an empty object if
you expect an object for Object.entries, or an empty array if you iterate
arrays) — e.g., read via optional chaining and coalesce to {} (or [] as
appropriate), and also add a guard before the Object.entries usage to ensure it
only runs when the value is an object to avoid runtime errors.

Comment on lines 185 to 190
let mappingInference = $derived({
	sourceRequirementAssessment:
		data.requirementAssessment.mapping_inference.source_requirement_assessment,
	sourceRequirementAssessments:
		data.requirementAssessment.mapping_inference.source_requirement_assessments,
	result: data.requirementAssessment.mapping_inference.result,
	annotation: ''
});
Contributor

⚠️ Potential issue | 🟡 Minor

Same null safety concern as the view page.

Add a fallback for source_requirement_assessments to prevent runtime errors if the value is null/undefined.

🔎 Proposed fix
 let mappingInference = $derived({
 	sourceRequirementAssessments:
-		data.requirementAssessment.mapping_inference.source_requirement_assessments,
+		data.requirementAssessment.mapping_inference?.source_requirement_assessments ?? {},
 	result: data.requirementAssessment.mapping_inference.result,
 	annotation: ''
 });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let mappingInference = $derived({
	sourceRequirementAssessment:
		data.requirementAssessment.mapping_inference.source_requirement_assessment,
	sourceRequirementAssessments:
		data.requirementAssessment.mapping_inference.source_requirement_assessments,
	result: data.requirementAssessment.mapping_inference.result,
	annotation: ''
});
let mappingInference = $derived({
	sourceRequirementAssessments:
		data.requirementAssessment.mapping_inference?.source_requirement_assessments ?? {},
	result: data.requirementAssessment.mapping_inference.result,
	annotation: ''
});
🤖 Prompt for AI Agents
In
frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/edit/+page.svelte
around lines 185-190, the derived store initialization reads
data.requirementAssessment.mapping_inference.source_requirement_assessments
directly which can be null/undefined; update the initializer to defensively
handle missing values by using optional chaining and a fallback (e.g. default to
an empty array) for source_requirement_assessments (and mapping_inference if
necessary) so the derived store never receives null/undefined and downstream
code won't throw.

Comment on lines +437 to +444
{#if source_requirement_assessment.is_scored}
	<p class="whitespace-pre-line py-1">
		<span class="italic">{m.scoreSemiColon()}</span>
		<span class="badge h-fit">
			{safeTranslate(source_requirement_assessment.score)}
		</span>
	</p>
{/if}
Contributor

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Search for safeTranslate usage with score
rg -n 'safeTranslate.*score' --type=svelte -B2 -A2

# Search for score field definitions and types
fd -e ts -e js -e svelte | xargs rg -l 'score.*:' | head -10

# Look for score property in data models
rg -n 'is_scored|score' frontend/src/routes/\(app\)/\(third-party\)/requirement-assessments -B1 -A1 --type=svelte

Repository: intuitem/ciso-assistant-community

Length of output: 960


🏁 Script executed:

# Check the specific file to understand score context
cat -n frontend/src/routes/\(app\)/\(third-party\)/requirement-assessments/\[id=uuid\]/edit/+page.svelte | sed -n '430,450p'

Repository: intuitem/ciso-assistant-community

Length of output: 1045


🏁 Script executed:

# Look for score type in API or data models
rg -n 'score.*number|score.*integer|score.*string' --type=ts --type=js -B1 -A1 | head -30

Repository: intuitem/ciso-assistant-community

Length of output: 2226


🏁 Script executed:

# Check how score is displayed in other places (compliance-assessments table mode)
rg -n 'source_requirement_assessment.score|\.score' frontend/src/routes/\(app\)/\(third-party\)/compliance-assessments/\[id=uuid\]/table-mode/+page.svelte -B2 -A2

# Check how score is displayed in compliance-assessments view
rg -n 'requirement_assessment.score|\.score' frontend/src/routes/\(app\)/\(third-party\)/compliance-assessments/\[id=uuid\]/+page.svelte -B2 -A2

# Look at the safeTranslate function definition
rg -n 'export.*function safeTranslate|const safeTranslate' -B2 -A5

Repository: intuitem/ciso-assistant-community

Length of output: 5883


Remove safeTranslate() from numeric score display.

source_requirement_assessment.score is a numeric value (type: number | null), not a translation key. Using safeTranslate() here is incorrect—it expects a string translation key and will stringify the number instead. Display the score directly like {source_requirement_assessment.score ?? '--'} or use the formatScoreValue() helper as done elsewhere in the codebase (compliance-assessments view page).

🤖 Prompt for AI Agents
In
frontend/src/routes/(app)/(third-party)/requirement-assessments/[id=uuid]/edit/+page.svelte
around lines 437 to 444, the code wraps the numeric score in safeTranslate which
expects a string key and will stringify numbers; replace that with a direct
numeric display (e.g. use source_requirement_assessment.score ?? '--') or call
the existing formatScoreValue() helper used elsewhere to render scores so
numbers are shown correctly and nulls render a placeholder.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (6)
backend/core/mappings/engine.py (6)

264-264: Critical: Type mismatch for hop_index parameter.

hop_index is declared as str in the function signature but is compared to integer 1 at line 430 and initialized as integer 1 at line 527. This type inconsistency will cause the conditional at line 430 to always evaluate to False since a string never equals an integer in Python.

🔎 Recommended fix
     def map_audit_results(
         self,
         source_audit: dict[str, str | dict[str, str]],
         requirement_mapping_set: dict,
-        hop_index: str,
+        hop_index: int,
         path: list[str],
     ) -> dict[str, str | dict[str, str]]:

286-287: Critical: Potential KeyError when accessing source assessment.

Line 286 directly accesses source_audit["requirement_assessments"][src] without checking if src exists in the dictionary. If src is not present, this will raise a KeyError before the conditional checks at lines 288-291 and 360-363 can execute.

🔎 Recommended fix
+            if src not in source_audit["requirement_assessments"]:
+                continue
+            
             src_assessment = source_audit["requirement_assessments"][src]
             target_assessment = target_audit["requirement_assessments"][dst]

Alternatively, use .get() with a default and skip if None:

-            src_assessment = source_audit["requirement_assessments"][src]
+            src_assessment = source_audit["requirement_assessments"].get(src)
+            if not src_assessment:
+                continue
             target_assessment = target_audit["requirement_assessments"][dst]

322-322: Critical: Assignment doesn't update target_audit.

Line 322 assigns target_assessment = src_assessment.copy(), creating a new dictionary object that is never stored back into target_audit["requirement_assessments"][dst]. The initial reference obtained at line 287 is lost, so all subsequent mutations to target_assessment (including the mapping_inference added at lines 407-481) won't be reflected in the returned target_audit.

🔎 Apply this diff to fix the assignment
                     )
                 else:
-                    target_assessment = src_assessment.copy()
+                    target_audit["requirement_assessments"][dst] = src_assessment.copy()
+                    target_assessment = target_audit["requirement_assessments"][dst]

474-476: Remove debug print statement.

The debug print statement should not be committed to production code. If diagnostics are needed, use proper logging instead.

🔎 Apply this diff to remove the print statement
                     ):
-                        print(
-                            f"no source_requirement_assessments for {mif_id} in \n{src_assessment}"
-                        )
+                        pass  # or use logger.debug(...) if logging is needed

407-481: Critical: mapping_inference block executes unconditionally.

The mapping_inference construction block (lines 407-481) executes for every mapping in the loop, regardless of whether the if or elif conditions at lines 288-291 and 360-363 were met. This creates two critical issues:

  1. KeyError risk: If src is not in source_audit["requirement_assessments"] (i.e., neither condition was met), line 433 will raise a KeyError when trying to access source_audit["requirement_assessments"][src].get("id").

  2. Logic issue: Even if src exists but the relationship type is unexpected, this creates a mapping_inference entry for a requirement that wasn't actually mapped, polluting the target audit with spurious entries.

The mapping_inference block should only execute when a mapping actually occurs.

🔎 Recommended fix

Move the entire mapping_inference block (lines 407-481) inside both the if and elif branches, or add a guard condition:

+            # Only add mapping_inference if a mapping was created
+            if src not in source_audit["requirement_assessments"] or dst not in target_audit["requirement_assessments"]:
+                continue
+
             target_assessment["mapping_inference"] = {
                 "result": target_assessment.get("result", ""),
                 "used_path": path,
                 "annotation": src_assessment.get("mapping_inference", {}).get(
                     "annotation", ""
                 ),
             }
             # ... rest of mapping_inference construction

1-1: Fix formatting to pass pipeline checks.

The backend linter (ruff format --check) failed because this file needs reformatting.

#!/bin/bash
# Run ruff format to fix formatting issues
cd backend && ruff format core/mappings/engine.py
🧹 Nitpick comments (1)
backend/core/mappings/engine.py (1)

2-2: Verify or remove unused import.

The UUID import is not used in the visible code. If it's not needed elsewhere in the file, consider removing it.

Verification script
#!/bin/bash
# Check if UUID is used anywhere in the file
rg '\bUUID\b' backend/core/mappings/engine.py -n
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c00f15 and 92c8757.

📒 Files selected for processing (2)
  • backend/core/mappings/engine.py (6 hunks)
  • backend/core/views.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/core/views.py
🧰 Additional context used
🧬 Code graph analysis (1)
backend/core/mappings/engine.py (2)
backend/core/views.py (2)
  • frameworks (7494-7533)
  • result (8910-8911)
backend/core/models.py (2)
  • frameworks (1637-1638)
  • frameworks (1703-1704)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: build_community_frontend
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: build (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (3)
backend/core/mappings/engine.py (3)

39-43: LGTM: Correct removal of mapping_inference from m2m_fields.

This change aligns with the new per-assessment mapping_inference structure where it's no longer treated as an m2m field.


527-538: Logic correct, pending type fix.

The multi-hop traversal logic correctly propagates hop_index and path through the map_audit_results calls. However, this depends on fixing the type mismatch for hop_index (should be int, not str).


569-580: LGTM: Proper augmentation of requirement assessment data.

The addition of name, id, and source_framework fields correctly enriches the requirement assessment data structure. These fields provide essential context for the mapping inference feature and align with the PR objectives.

@nas-tabchiche nas-tabchiche changed the title from "feat: add ID field and repare the str field in mapping_inference in mapping engine" to "fix: mapping inference internal structure" on Dec 29, 2025
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (5)
backend/core/mappings/engine.py (5)

291-292: Critical: KeyError risk - accessing dictionaries before existence checks.

Lines 291-292 directly index the dictionaries without checking if the keys exist first. The guards at lines 293-296 and 358-361 that check if src in source_audit["requirement_assessments"] come AFTER the access, so a KeyError will be raised before the guards can protect against it.

🔎 Proposed fix

Check for key existence before accessing:

+            if src not in source_audit["requirement_assessments"]:
+                continue
+            
             src_assessment = source_audit["requirement_assessments"][src]
             target_assessment = target_audit["requirement_assessments"][dst]

320-320: Critical: Assignment creates orphaned dictionary.

Line 320 assigns target_assessment = src_assessment.copy(), creating a new dictionary object that is never stored back into target_audit["requirement_assessments"][dst].

The initial reference obtained at line 292 is lost, so all subsequent mutations to target_assessment (including the mapping_inference added at lines 414-489) won't be reflected in the returned target_audit.

🔎 Proposed fix

Store the copy back into the audit structure:

                     else:
-                        target_assessment = src_assessment.copy()
+                        target_audit["requirement_assessments"][dst] = src_assessment.copy()
+                        target_assessment = target_audit["requirement_assessments"][dst]

414-489: Critical: Mapping inference block executes unconditionally, causing KeyError.

The mapping_inference block (lines 414-489) executes for every mapping in the loop, regardless of whether the if condition (lines 293-296) or elif condition (lines 358-361) were met. This creates two issues:

  1. KeyError risk: If neither condition was met (e.g., relationship is not in the expected types), line 440 attempts to access source_audit["requirement_assessments"][src] without verifying src exists, raising a KeyError.

  2. Logic issue: Even if src exists but no mapping was created, this pollutes the target audit with spurious mapping_inference entries for requirements that weren't actually mapped.

🔎 Proposed fix

Move the entire mapping_inference block inside both the if and elif blocks so it only executes when a mapping actually occurs. Alternatively, guard the block with an explicit check:

+            # Only create mapping_inference if a mapping was actually made
+            if src not in source_audit["requirement_assessments"]:
+                continue
+                
             target_assessment["mapping_inference"] = {
                 "result": target_assessment.get("result", ""),
                 "used_path": path,

Recommended approach: Move the entire block (lines 414-489) to execute only after successful mapping creation in the if and elif branches.


262-263: Critical: Type mismatch in hop_index parameter.

The parameter hop_index is declared as str on line 262, but:

  • Line 437 compares it to integer 1: if hop_index == 1:
  • Line 534 initializes it as integer: hop_index = 1

In Python, string "1" ≠ integer 1, so the condition at line 437 will always be False, causing the mapping inference logic to incorrectly treat the real source as a transition audit.

🔎 Proposed fix

Change the type annotation to int:

     def map_audit_results(
         self,
         source_audit: dict[str, str | dict[str, str]],
         requirement_mapping_set: dict,
-        hop_index: str,
+        hop_index: int,
         path: list[str],
     ) -> dict[str, str | dict[str, str]]:

481-483: Remove debug print statement.

Debug print statements should not be committed to production code. If diagnostic information is needed, replace with proper logging using logger.debug(...).

🔎 Proposed fix
-                        print(
-                            f"no source_requirement_assessments for {mif_id} in \n{src_assessment}"
-                        )
🧹 Nitpick comments (1)
backend/core/views.py (1)

8229-8259: Propagating mapping_inference looks correct; consider tightening the update loop

Including mapping_inference in both the per-field assignment and the bulk_update field list is consistent with the new per-assessment semantics. Two small improvements would make this more robust:

  1. The if … .get(field) guard will skip valid falsy values (e.g. score == 0, empty observation, or an empty mapping_inference structure). Using key presence instead of truthiness avoids that.
  2. requirement_assessments_to_update.append(req) sits inside the inner for field loop, so the same object is appended multiple times. This is functionally harmless but noisy and a bit wasteful.

You can address both with a small refactor:

Suggested refactor for the mapping propagation loop
-                for req in target_requirement_assessments:
-                    for field in [
-                        "result",
-                        "status",
-                        "observation",
-                        "score",
-                        "is_scored",
-                        "documentation_score",
-                        "mapping_inference",
-                    ]:
-                        if best_results["requirement_assessments"][
-                            req.requirement.urn
-                        ].get(field):
-                            req.__setattr__(
-                                field,
-                                best_results["requirement_assessments"][
-                                    req.requirement.urn
-                                ][field],
-                            )
-                        requirement_assessments_to_update.append(req)
+                for req in target_requirement_assessments:
+                    ra_data = best_results["requirement_assessments"][
+                        req.requirement.urn
+                    ]
+                    updated = False
+                    for field in [
+                        "result",
+                        "status",
+                        "observation",
+                        "score",
+                        "is_scored",
+                        "documentation_score",
+                        "mapping_inference",
+                    ]:
+                        if field in ra_data:
+                            setattr(req, field, ra_data[field])
+                            updated = True
+                    if updated:
+                        requirement_assessments_to_update.append(req)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 92c8757 and c21ebd0.

📒 Files selected for processing (2)
  • backend/core/mappings/engine.py
  • backend/core/views.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: build (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: test (3.12)
🔇 Additional comments (2)
backend/core/mappings/engine.py (2)

576-587: LGTM! Useful metadata additions.

The additions of name, id, and source_framework fields to each requirement assessment provide necessary metadata for frontend rendering of multiple source assessments in mapping inference.


1-1: No action needed. The file is properly formatted and passes ruff format --check.

Collaborator

@nas-tabchiche nas-tabchiche left a comment

There are a few fixes to be made here:

  • A target requirement assessment must have at least one source requirement assessment.

The PR works as expected on direct mappings, but multi-hop mappings behave inconsistently.
For example, I have a NIST CSF 2.0 audit with quite a few assessed requirements.


If I map it to Cisco CCF (thus with ISO27001:2022 as a pivot: CSF 2.0 -> ISO27001 -> Cisco CCF), some of the target requirement assessments have source requirement assessments in their mapping inference and some do not.

API examples:

This one has a source requirement assessment listed:

{
    "id": "244a370e-8233-41c8-b722-a0ece010e1d0",
    "name": "CCF 8 - Cybersecurity Legal and Regulatory Requirements",
    "mapping_inference": {
        "result": "non_compliant",
        "used_path": [
            "urn:intuitem:risk:framework:nist-csf-2.0",
            "urn:intuitem:risk:framework:iso27001-2022",
            "urn:intuitem:risk:framework:cisco-ccf-v3.0"
        ],
        "annotation": "",
        "source_requirement_assessments": {
            "urn:intuitem:risk:req_node:nist-csf-2.0:pr.ds-10": {
                "id": "60cc01c9-76db-4499-a722-e5c9dfd90fa0",
                "urn": "urn:intuitem:risk:req_node:nist-csf-2.0:pr.ds-10",
                "str": "PR.DS-10",
                "coverage": "partial",
                "score": null,
                "is_scored": false,
                "source_framework": {
                    "id": "222cba0f-fa72-4282-9f4c-521a36671d28",
                    "name": "NIST - NIST CSF v2.0"
                },
                "used_mapping_set": {}
            }
        }
    },
}

This one does not:

{
    "id": "5c7b2f8e-f25b-4186-9ed3-06d0931c33d8",
    "name": "CCF 1 - Control Self-Assessments",
    "mapping_inference": {
        "result": "non_compliant",
        "used_path": [
            "urn:intuitem:risk:framework:nist-csf-2.0",
            "urn:intuitem:risk:framework:iso27001-2022",
            "urn:intuitem:risk:framework:cisco-ccf-v3.0"
        ],
        "annotation": "",
        "source_requirement_assessments": {}
    },
}
  • It seems the engine writes at most one source requirement assessment into the mapping inference.

For example, when mapping ISO27001:2022 to NIST CSF 2.0, the requirement GV.OC-01 has 7 antecedents: 4.1, 6.1.1, 6.1.2, 6.1.3, 8.1, 8.2, 8.3.

However, the mapping inference for a mapped GV.OC-01 only lists 8.3 as its source requirement assessment:

{
    "id": "1877ef93-07bb-46b3-a52b-e2aa5263e23e",
    "name": "GV.OC-01",
    "description": "La mission organisationnelle est comprise et informe la gestion des risques de cybersécurité",
    "mapping_inference": {
        "result": "partially_compliant",
        "used_path": [
            "urn:intuitem:risk:framework:iso27001-2022",
            "urn:intuitem:risk:framework:nist-csf-2.0"
        ],
        "annotation": "",
        "source_requirement_assessments": {
            "urn:intuitem:risk:req_node:iso27001-2022:8.3": {
                "id": "201c51b8-7f88-4f42-b41e-c6bfe5a54576",
                "urn": "urn:intuitem:risk:req_node:iso27001-2022:8.3",
                "str": "8.3 - Information security risk treatment",
                "coverage": "partial",
                "score": null,
                "is_scored": false,
                "source_framework": {
                    "id": "ab915be7-6c69-426e-b262-4a07c1758404",
                    "name": "ISO/IEC - International standard ISO/IEC 27001:2022"
                },
                "used_mapping_set": {}
            }
        }
    },
}

I suspect the engine overwrites the source_requirement_assessments attribute instead of adding to it, since 8.3 is the last antecedent to be traversed (see the sketch after this review):

    requirement_mappings:
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:4.1
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:6.1.1
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:6.1.2
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:6.1.3
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:8.1
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:8.2
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
    - source_requirement_urn: urn:intuitem:risk:req_node:iso27001-2022:8.3
      target_requirement_urn: urn:intuitem:risk:req_node:nist-csf-2.0:gv.oc-01
      relationship: intersect
  • This PR introduces a used_mapping_set attribute, but it is never filled or used.

The PR can be merged once these have been addressed
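
A minimal sketch of the accumulation behaviour this review asks for, assuming the dict shapes quoted in the API examples above; the function and variable names are illustrative, not the engine's actual code:

def add_source_assessment(target_assessment, src_urn, src_assessment, path, coverage):
    """Record one source requirement assessment without discarding earlier ones."""
    inference = target_assessment.setdefault(
        "mapping_inference",
        {
            "result": target_assessment.get("result", ""),
            "used_path": path,
            "annotation": "",
            "source_requirement_assessments": {},
        },
    )
    # Keying by source URN and updating in place keeps every antecedent
    # (4.1, 6.1.1, ..., 8.3 in the GV.OC-01 example) instead of letting the
    # last traversed mapping overwrite the whole dict.
    inference["source_requirement_assessments"][src_urn] = {
        "id": src_assessment.get("id"),
        "urn": src_urn,
        "str": src_assessment.get("name"),
        "coverage": coverage,
        "score": src_assessment.get("score"),
        "is_scored": src_assessment.get("is_scored"),
        "source_framework": src_assessment.get("source_framework"),
        "used_mapping_set": src_assessment.get("used_mapping_set", {}),
    }

Calling such a helper only from the branches that actually create a mapping would also address the first point, since targets that receive no mapping would then carry no empty source_requirement_assessments entry.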

ab-smith and others added 4 commits December 29, 2025 15:24
…trategic scenario (#3099)

* feat: ebios rm - allow users to focus on a specific feared event on strategic scenario

* coderabbit feedback

* restrict graph when a focused on a feared event
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/core/mappings/engine.py (1)

258-503: Fix KeyError risk and collision handling in map_audit_results

Two coupled issues in the new mapping logic:

  1. Potential KeyError on missing source assessments

    src_assessment = source_audit["requirement_assessments"][src] (Line 291) runs before any existence check. If the RMS contains a mapping for a requirement that has no assessment in source_audit["requirement_assessments"] (e.g., filtered out or not assessed), this will raise a KeyError and abort the whole mapping.

  2. Defaultdict causes every target to look like a collision

    target_assessment = target_audit["requirement_assessments"][dst] (Line 292) uses a defaultdict(dict). This creates an empty {} entry for dst on first access, so subsequent checks like if dst in target_audit["requirement_assessments"]: are always True. As a result:

    • The “collision” branch in the equal/superset case always runs, even for the first mapping to dst.
    • The full-copy path (src_assessment.copy()) is never taken, so you only merge result and m2m fields instead of copying the whole assessment (including status/score/documentation_score/etc.).
    • Similar logic in the subset/intersect branch relies on dst in ... to distinguish first vs. subsequent merges, which currently can’t work as intended.

Both issues affect correctness of inferred requirement assessments and their mapping_inference metadata.

A safer pattern is:

  • Resolve src_assessment via .get() and skip mappings with no source RA.
  • Compute a separate has_existing flag before touching the defaultdict, then use that flag instead of re-checking membership after __getitem__.

For example:

Proposed fix sketch for the core loop
-        for mapping in requirement_mapping_set["requirement_mappings"]:
-            src = mapping["source_requirement_urn"]
-            dst = mapping["target_requirement_urn"]
-            rel = mapping["relationship"]
-
-            src_assessment = source_audit["requirement_assessments"][src]
-            target_assessment = target_audit["requirement_assessments"][dst]
+        source_ras = source_audit.get("requirement_assessments", {})
+
+        for mapping in requirement_mapping_set["requirement_mappings"]:
+            src = mapping["source_requirement_urn"]
+            dst = mapping["target_requirement_urn"]
+            rel = mapping["relationship"]
+
+            src_assessment = source_ras.get(src)
+            if not src_assessment:
+                # No source assessment to map from
+                continue
+
+            has_existing = dst in target_audit["requirement_assessments"]
+            target_assessment = target_audit["requirement_assessments"][dst]
@@
-            if (
-                rel in ("equal", "superset")
-                and src in source_audit["requirement_assessments"]
-            ):
+            if rel in ("equal", "superset"):
@@
-                    # Handle collision: merge m2m fields if target already exists
-                    if dst in target_audit["requirement_assessments"]:
+                    # Handle collision: merge m2m fields if target already exists
+                    if has_existing:
@@
-                    for field in self.fields_to_map:
+                    for field in self.fields_to_map:
                         if field not in ["score", "is_scored", "documentation_score"]:
                             if (
                                 field == "result"
-                                and dst in target_audit["requirement_assessments"]
+                                and has_existing
                             ):
@@
-            elif (
-                rel in ("subset", "intersect")
-                and src in source_audit["requirement_assessments"]
-            ):
+            elif rel in ("subset", "intersect"):
@@
-                src_applied_controls = src_assessment.get("applied_controls", [])
-                if (
-                    dst in target_audit["requirement_assessments"]
-                    and "applied_controls" in target_assessment
-                ):
+                src_applied_controls = src_assessment.get("applied_controls", [])
+                if has_existing and "applied_controls" in target_assessment:
@@
-                        if (
-                            dst in target_audit["requirement_assessments"]
-                            and m2m_field in target_assessment
-                        ):
+                        if has_existing and m2m_field in target_assessment:
@@
-                if (
-                    dst in target_audit["requirement_assessments"]
-                    and "result" in target_assessment
-                ):
+                if has_existing and "result" in target_assessment:
@@
-                if scores_compatible:
-                    src_assessment = source_audit["requirement_assessments"][src]
-                    for score_field in ["score", "is_scored", "documentation_score"]:
+                if scores_compatible:
+                    for score_field in ["score", "is_scored", "documentation_score"]:

And reuse src_assessment in the hop_index == 1 block:

-            if hop_index == 1:
-                ra = source_audit["requirement_assessments"][src]
+            if hop_index == 1:
+                ra = src_assessment

This keeps defaultdict behavior while correctly distinguishing first vs. subsequent mappings and avoiding KeyError when a mapping has no corresponding source assessment.

Also applies to: 543-557

🧹 Nitpick comments (1)
frontend/src/lib/components/Forms/ModelForm/StrategicScenarioForm.svelte (1)

39-54: Dependent focused_feared_event autocomplete is wired correctly

The keyed block and URL parameters correctly tie the focused feared event options to the selected RO/TO couple and study; binding and field name match the schema and crud map.

As an optional improvement, you could disable or hide this field until a ro_to_couple is selected to avoid querying feared-events with an undefined ro_to_couples filter.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8812aa2 and 4a09a28.

⛔ Files ignored due to path filters (1)
  • enterprise/backend/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
  • backend/core/mappings/engine.py
  • backend/ebios_rm/migrations/0021_strategicscenario_focused_feared_event.py
  • backend/ebios_rm/models.py
  • backend/ebios_rm/serializers.py
  • frontend/messages/en.json
  • frontend/messages/fr.json
  • frontend/src/lib/components/Forms/ModelForm/StrategicScenarioForm.svelte
  • frontend/src/lib/utils/crud.ts
  • frontend/src/lib/utils/schemas.ts
  • frontend/src/lib/utils/table.ts
  • frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.server.ts
  • frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.svelte
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-12-29T15:52:09.213Z
Learnt from: melinoix
Repo: intuitem/ciso-assistant-community PR: 3107
File: frontend/src/routes/(app)/(internal)/recap/+page.svelte:49-51
Timestamp: 2025-12-29T15:52:09.213Z
Learning: In the recap page (frontend/src/routes/(app)/(internal)/recap/+page.svelte), perimeter.folder is guaranteed to be present when perimeter.compliance_assessments exist, so null safety checks are not necessary for perimeter.folder.str in that context.

Applied to files:

  • frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.svelte
📚 Learning: 2025-11-17T11:09:18.488Z
Learnt from: monsieurswag
Repo: intuitem/ciso-assistant-community PR: 2860
File: frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/+page.server.ts:0-0
Timestamp: 2025-11-17T11:09:18.488Z
Learning: In frontend/src/routes/(app)/(internal)/[model=urlmodel]/[id=uuid]/+page.server.ts, the select action performs batch PATCH operations in Promise.all without transaction semantics. Partial failures (e.g., one object fails to update due to race conditions) are acceptable and should not trigger a fail() return, as most objects will still be updated successfully. Console warnings are sufficient for logging failures.

Applied to files:

  • frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.server.ts
🧬 Code graph analysis (2)
backend/ebios_rm/serializers.py (2)
backend/ebios_rm/models.py (2)
  • ro_to_couple (841-842)
  • ebios_rm_study (985-986)
backend/core/serializer_fields.py (2)
  • FieldsRelatedField (26-96)
  • HashSlugRelatedField (12-23)
backend/ebios_rm/migrations/0021_strategicscenario_focused_feared_event.py (1)
backend/ebios_rm/models.py (1)
  • name (1080-1085)
🪛 GitHub Actions: Backend Linters
backend/core/mappings/engine.py

[error] 1-1: Ruff format check failed. 1 file would be reformatted. Run 'ruff format' to fix.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: build (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend
  • GitHub Check: test (3.12)
  • GitHub Check: Analyze (python)
🔇 Additional comments (15)
frontend/messages/fr.json (1)

1385-1386: Focused feared event translations look consistent

The new keys align with the English originals and the surrounding EBIOS RM terminology; no issues from a UX/i18n perspective.

frontend/src/lib/utils/table.ts (1)

1993-2012: Strategic-scenarios column wiring is consistent

Head/body entries for the new focused feared event column correctly map the label key (focusedFearedEvent) to the backend field (focused_feared_event); this matches existing conventions.

backend/ebios_rm/migrations/0021_strategicscenario_focused_feared_event.py (1)

1-25: Migration for focused_feared_event is well-formed

The FK definition (nullable, SET_NULL, related_name="focused_strategic_scenarios") matches the intended optional “focused” relationship and should apply cleanly.

backend/core/mappings/engine.py (2)

588-600: Additional RA metadata in load_audit_fields is appropriate

Adding name, stringified id, and a structured source_framework object to each requirement assessment makes the later mapping_inference payloads richer and matches how the frontend renders source assessments. Provided every ComplianceAssessment always has a non-null framework, this looks safe.

If there are edge cases where audit.framework can be None (e.g., partially configured audits), consider guarding these assignments or skipping such audits to avoid attribute errors.
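
A minimal sketch of such a guard, assuming a loop over the per-audit requirement assessments (function and variable names are illustrative, not taken from load_audit_fields):

from types import SimpleNamespace

def attach_source_framework(requirement_assessments, framework):
    # framework may be None for partially configured audits; guard instead of raising AttributeError
    framework_info = (
        {"id": str(framework.id), "name": framework.name} if framework is not None else None
    )
    for ra in requirement_assessments.values():
        ra["source_framework"] = framework_info

ras = {"urn:example:req:1": {}}
attach_source_framework(ras, None)                                      # no AttributeError
attach_source_framework(ras, SimpleNamespace(id=42, name="ISO 27001"))
print(ras["urn:example:req:1"]["source_framework"])                     # {'id': '42', 'name': 'ISO 27001'}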


1-645: No action needed — the file is already properly formatted according to ruff standards.

Verification shows ruff format --check passes with "1 file already formatted", indicating backend/core/mappings/engine.py complies with the backend formatter requirements.

Likely an incorrect or invalid review comment.

frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.svelte (1)

14-15: Confirm focused_feared_event path on PageData

The UI wiring for the focused feared event badge looks good, but the derived value comes from data.data.focused_feared_event while the rest of the template uses data.attackPaths / data.fearedEventsWithAssets, and the summary suggests data.focused_feared_event.

Please double‑check the PageData shape and server load function to ensure the correct path is either:

  • data.focused_feared_event, or
  • data.data.focused_feared_event if the strategic scenario is indeed nested under data.data.

Also applies to: 21-34

frontend/src/lib/utils/schemas.ts (1)

1041-1048: New StrategicScenarioSchema field is consistent and correctly typed

focused_feared_event being an optional nullable UUID aligns with the new relation and allows clearing the focus without breaking validation.

frontend/messages/en.json (1)

1445-1452: New focused feared event translations look good

focusedFearedEvent and focusedOn are clearly named and consistent with surrounding EBIOS RM terminology.

frontend/src/routes/(app)/(internal)/strategic-scenarios/[id=uuid]/+page.server.ts (1)

24-45: Focused feared event loading logic is consistent with intended behavior

Conditionally restricting fearedEventsIds to focused_feared_event.id and exposing isFocusedOnFearedEvent cleanly support a “focused view” while preserving the previous fallback when no focus is set.

frontend/src/lib/utils/crud.ts (1)

1364-1412: CRUD mapping for focused_feared_event is correctly wired

The new focused_feared_event foreignKey and detailView field entries for strategic-scenarios align with the schema, form, and route usage, so generic lists/details will correctly resolve and link the focused feared event.

backend/ebios_rm/serializers.py (3)

269-295: Excellent validation logic with proper edge case handling.

The cross-field validation correctly handles both create and update scenarios:

  • For ro_to_couple: uses an "attrs value or instance value" fallback, since the field is required and can never be None
  • For focused_feared_event: uses an explicit "key in attrs" check to distinguish "not being updated" from "being cleared to None"

This distinction is intentional and correct given their different nullability constraints.
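
A standalone illustration of that distinction, using a plain dict in place of DRF's attrs (the sentinel and helper are illustrative, not part of the serializer):

_UNSET = object()

def resolve_focused_feared_event(attrs, current_value):
    # Key absent        -> field is not being updated, keep the current value
    # Key present, None -> field is being explicitly cleared
    value = attrs.get("focused_feared_event", _UNSET)
    return current_value if value is _UNSET else value

print(resolve_focused_feared_event({}, "fe-1"))                                # fe-1 (unchanged)
print(resolve_focused_feared_event({"focused_feared_event": None}, "fe-1"))    # None (cleared)
print(resolve_focused_feared_event({"focused_feared_event": "fe-2"}, "fe-1"))  # fe-2 (updated)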

Consider adding model-level validation (a clean() method on StrategicScenario) to ensure focused_feared_event belongs to ro_to_couple.feared_events for all save paths, not just serializer-based API calls. This would prevent data inconsistency from Django admin, management commands, or direct ORM operations.

🔎 Optional model-level validation enhancement

Add to StrategicScenario model in backend/ebios_rm/models.py:

# Assumes ValidationError is imported in backend/ebios_rm/models.py (from django.core.exceptions import ValidationError)
def clean(self):
    super().clean()
    if self.focused_feared_event and self.ro_to_couple:
        if not self.ro_to_couple.feared_events.filter(
            id=self.focused_feared_event.id
        ).exists():
            raise ValidationError({
                'focused_feared_event': 'The feared event must belong to the selected RoTo couple.'
            })

302-302: LGTM!

Field addition is consistent with other FieldsRelatedField usage in this serializer.


337-337: LGTM!

The focused_feared_event field is correctly added to the import/export serializer using HashSlugRelatedField with pk as the slug field, consistent with other foreign key fields in this serializer.

Also applies to: 347-347

backend/ebios_rm/models.py (2)

729-737: LGTM!

The focused_feared_event field is properly defined with:

  • Appropriate nullability (null=True, blank=True) for an optional field
  • Safe deletion behavior (on_delete=SET_NULL) preventing cascading data loss
  • Clear documentation via verbose_name and help_text
  • Proper related_name for reverse lookups

764-769: LGTM! Correct fallback logic.

The gravity calculation correctly prioritizes focused_feared_event.gravity when set, falling back to ro_to_couple.get_gravity(). Both code paths handle the unrated case (gravity = -1) through FearedEvent.format_gravity().
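
A plain-Python illustration of that precedence (stand-in objects, not the Django models):

from types import SimpleNamespace

def scenario_gravity(scenario):
    # Prefer the focused feared event's gravity when one is set,
    # otherwise fall back to the RO/TO couple's gravity (-1 meaning "unrated")
    if scenario.focused_feared_event is not None:
        return scenario.focused_feared_event.gravity
    return scenario.ro_to_couple.get_gravity()

couple = SimpleNamespace(get_gravity=lambda: 3)
focused = SimpleNamespace(gravity=2)

print(scenario_gravity(SimpleNamespace(focused_feared_event=focused, ro_to_couple=couple)))  # 2
print(scenario_gravity(SimpleNamespace(focused_feared_event=None, ro_to_couple=couple)))     # 3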

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
backend/core/mappings/engine.py (2)

291-292: Critical: KeyError risk on line 291.

Line 291 directly accesses source_audit["requirement_assessments"][src] before any existence check. If src is not in the dictionary, this will raise a KeyError before the conditional checks at lines 295 and 363 can prevent it.

🔎 Proposed fix
+            if src not in source_audit["requirement_assessments"]:
+                continue
+            
             src_assessment = source_audit["requirement_assessments"][src]
             target_assessment = target_audit["requirement_assessments"][dst]

Alternatively, use .get() with a default and skip if None:

-            src_assessment = source_audit["requirement_assessments"][src]
+            src_assessment = source_audit["requirement_assessments"].get(src)
+            if not src_assessment:
+                continue
             target_assessment = target_audit["requirement_assessments"][dst]

417-497: Critical: mapping_inference block executes unconditionally.

The entire mapping_inference block (lines 417-497) executes for every mapping in the loop, regardless of whether the if condition at lines 293-296 or the elif condition at lines 361-364 was satisfied. This creates two issues:

  1. Spurious entries: If the relationship type doesn't match or other conditions fail, this creates a mapping_inference entry for a requirement that wasn't actually mapped, polluting the target audit.

  2. KeyError risk: Line 472 accesses source_audit["requirement_assessments"][src] without checking existence. If src is not in the dict (though it would fail earlier at line 291), this will raise a KeyError.

The mapping_inference block should only execute when a mapping is actually created.

🔎 Proposed fix

Option 1: Guard the entire block with an existence check:

+            # Only add mapping_inference if src was actually processed
+            if src not in source_audit["requirement_assessments"]:
+                continue
+            if rel not in ("equal", "superset", "subset", "intersect"):
+                continue
+            
             mapping_set_info = {

Option 2 (Recommended): Move this block inside both the if and elif blocks to ensure it only runs when a mapping is created. You would need to deduplicate the code or extract it into a helper function that's called from both branches.
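
A minimal sketch of Option 2, with illustrative names and a simplified payload (the real mapping_inference shape is richer than shown here):

def record_mapping_inference(target_assessment, src, mapping_set_info, hop_index, path):
    inference = target_assessment.setdefault(
        "mapping_inference",
        {"hop_index": hop_index, "path": path, "sources": {}},
    )
    inference["sources"][src] = mapping_set_info


def apply_mapping(target_assessment, src, rel, mapping_set_info, hop_index=1, path=()):
    if rel in ("equal", "superset"):
        # ... full copy / merge of the source assessment would happen here ...
        record_mapping_inference(target_assessment, src, mapping_set_info, hop_index, path)
    elif rel in ("subset", "intersect"):
        # ... partial merge would happen here ...
        record_mapping_inference(target_assessment, src, mapping_set_info, hop_index, path)
    # any other relationship: no mapping applied, so no inference entry is recorded


target = {}
apply_mapping(target, "urn:example:req:src:1", "equal", {"mapping_set": "demo"})
apply_mapping(target, "urn:example:req:src:2", "unrelated", {"mapping_set": "demo"})
print(target["mapping_inference"]["sources"].keys())  # only the "equal" source is recorded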

🧹 Nitpick comments (1)
backend/core/mappings/engine.py (1)

448-469: Logic looks correct, minor optimization possible.

The merge_source_requirement_assessment function correctly merges source requirement assessment data across hops, with appropriate precedence rules for coverage and field updates.

🔎 Minor optimization: simplify coverage logic
                 if new_cov:
-                    if existing_cov == "full" or new_cov == "full":
-                        existing["coverage"] = "full"
-                    else:
-                        existing["coverage"] = "partial"
+                    existing["coverage"] = "full" if (existing_cov == "full" or new_cov == "full") else "partial"
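
The simplified expression can be sanity-checked in isolation (standalone helper mirroring the precedence rule in the diff above):

def merge_coverage(existing_cov, new_cov):
    if not new_cov:
        return existing_cov
    return "full" if (existing_cov == "full" or new_cov == "full") else "partial"

assert merge_coverage("partial", "full") == "full"
assert merge_coverage("full", "partial") == "full"
assert merge_coverage("partial", "partial") == "partial"
assert merge_coverage("partial", None) == "partial"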
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a09a28 and 01b4b47.

📒 Files selected for processing (1)
  • backend/core/mappings/engine.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: build_community_frontend
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: test (3.12)
  • GitHub Check: build (3.12)
🔇 Additional comments (4)
backend/core/mappings/engine.py (4)

262-266: LGTM!

The addition of hop_index and path parameters enables proper multi-hop inference tracking. The type is correctly set to int for hop_index.


305-318: LGTM!

The pattern of obtaining target_assessment as a reference to the dict in target_audit["requirement_assessments"][dst] (line 292) and then mutating it directly works correctly with the defaultdict(dict). Subsequent mutations properly update the stored dict.


543-554: LGTM!

The multi-hop tracking logic is correctly implemented: hop_index starts at 1 and increments for each hop, and both hop_index and path are properly passed to map_audit_results.
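
A toy illustration of that bookkeeping (the call into map_audit_results is commented out; names and shapes are illustrative):

def walk_hops(mapping_sets):
    hop_index = 1
    path = []
    for mapping_set in mapping_sets:
        path = path + [mapping_set["name"]]
        # engine.map_audit_results(source_audit, target_audit, mapping_set,
        #                          hop_index=hop_index, path=path)
        print(f"hop {hop_index}: path={path}")
        hop_index += 1

walk_hops([{"name": "iso27001-2022 -> nist-csf-2.0"}, {"name": "nist-csf-2.0 -> other"}])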


585-596: LGTM!

The addition of name, id, and source_framework fields enriches each requirement assessment with useful metadata for display and traceability.
