
Conversation

monsieurswag
Contributor

@monsieurswag monsieurswag commented Oct 8, 2025

Summary by CodeRabbit

  • New Features

    • Supports multiple baseline sources per requirement assessment, aggregating related evidence and controls.
    • Automatically carries over observations from the correct mapped baseline items.
  • Bug Fixes

    • Corrects linking when multiple mappings exist, ensuring evidences and applied controls are drawn from all relevant baseline assessments.
    • Prevents mismatches by selecting the appropriate baseline source for each assessment.
  • Refactor

    • Streamlined creation flow with bulk updates and M2M assignments for improved reliability and scalability during assessment generation.

Contributor

coderabbitai bot commented Oct 8, 2025

Walkthrough

compute_requirement_assessments_results now returns a tuple: (list of RequirementAssessment, dict mapping each RequirementAssessment to source reference IDs). ComplianceAssessmentViewSet.perform_create unpacks the tuple, derives baseline assessments from the mapping, sets observations from selected baselines, and builds M2M relations from the mapped baseline items.
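The updated contract can be sketched in plain Python (a hypothetical stand-in: strings play the role of `RequirementAssessment` instances, and the function body is a simplification of the walkthrough, not the real Django method):

```python
import itertools

# Hypothetical stand-in for the updated model method: it now returns BOTH the
# computed assessments and a dict mapping each assessment to ALL contributing
# source reference IDs, not just the one selected for inference.
def compute_requirement_assessments_results(mappings):
    computed = []
    assessment_source_dict = {}
    for target, source_ids in mappings.items():
        assessment = f"assessment:{target}"  # stand-in for a RequirementAssessment
        computed.append(assessment)
        assessment_source_dict[assessment] = list(source_ids)
    return computed, assessment_source_dict

# The caller unpacks the tuple and flattens the mapping to collect every
# baseline assessment ID that must be queried afterwards.
computed_assessments, assessment_source_dict = compute_requirement_assessments_results(
    {"req-1": ["src-a"], "req-2": ["src-a", "src-b"]}
)
source_assessment_ids = set(
    itertools.chain.from_iterable(assessment_source_dict.values())
)
print(sorted(source_assessment_ids))  # ['src-a', 'src-b']
```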

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Model return signature and data mapping**<br>`backend/core/models.py` | `compute_requirement_assessments_results` signature updated to `-> tuple[list["RequirementAssessment"], dict["RequirementAssessment", list[str]]]`; collects and returns `assessment_source_dict` mapping each assessment to source reference IDs alongside the computed assessments. |
| **View control-flow update to consume tuple and build derived relations**<br>`backend/core/views.py` | Added `itertools` import; `perform_create` updated to unpack `(computed_assessments, assessment_source_dict)`, derive `source_assessment_ids` via chaining, query baseline assessments, use `selected_source_id`/`selected_baseline_ra` for observations, and construct evidences/applied_controls M2M relations from the mapped baseline assessments. |
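The view-side consumption pattern can be sketched as follows (a hypothetical simplification: dicts stand in for the prefetched baseline querysets, and "first mapped source" stands in for the real selection logic in `perform_create`):

```python
# Hypothetical data: dicts play the role of prefetched baseline
# RequirementAssessment querysets keyed by source reference ID.
baseline_assessments = {
    "src-a": {"observation": "obs A", "evidences": ["ev-1"], "applied_controls": ["ac-1"]},
    "src-b": {"observation": "obs B", "evidences": ["ev-2"], "applied_controls": ["ac-2"]},
}
assessment_source_dict = {"req-2": ["src-a", "src-b"]}

m2m_operations = []
for assessment, source_ids in assessment_source_dict.items():
    # Pick one baseline for the observation (here simply the first mapped
    # source; the real selection logic differs).
    selected_source_id = source_ids[0]
    selected_baseline_ra = baseline_assessments[selected_source_id]
    observation = selected_baseline_ra["observation"]
    # Evidences come from the selected baseline; applied controls are the
    # union across ALL mapped baseline assessments.
    controls = [
        ac
        for sid in source_ids
        for ac in baseline_assessments[sid]["applied_controls"]
    ]
    m2m_operations.append((assessment, selected_baseline_ra["evidences"], controls))

print(m2m_operations)  # [('req-2', ['ev-1'], ['ac-1', 'ac-2'])]
```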

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Client
  participant View as ComplianceAssessmentViewSet
  participant Model as RequirementAssessmentModel
  participant DB as Database

  Client->>View: POST create ComplianceAssessment
  View->>Model: compute_requirement_assessments_results(mapping_set, source_assessment)
  Model-->>View: (computed_assessments, assessment_source_dict)

  Note over View: derive source IDs via assessment_source_dict\nquery baseline assessments (prefetch evidences, applied_controls)

  View->>DB: Query baseline assessments by source IDs
  DB-->>View: Baseline assessments

  loop For each computed assessment
    View->>View: select selected_source_id from assessment_source_dict
    View->>View: selected_baseline_ra = baseline_assessments[selected_source_id]
    View->>View: set observation = selected_baseline_ra.observation
    View->>View: build evidences from selected_baseline_ra.evidences
    View->>View: build applied_controls from chain of mapped baseline assessments
  end

  View->>DB: Bulk create/update assessments and M2M relations
  DB-->>View: Persisted entities
  View-->>Client: Created assessment payload
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

Poem

I mapped my hops from one burrow to two,
A tuple of trails where bright carrots grew. 🥕
I chained the clues each baseline knew,
Tied evidences, controls in a clever queue.
Now assessments smile — a pair, not just a shoe.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%. | You can run `@coderabbitai generate docstrings` to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title succinctly captures the core enhancement of including all applied controls during the mapping process, directly reflecting the changes to how applied_controls are aggregated and assigned via the updated compute_requirement_assessments_results and view logic. |

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c2e3d87 and 787354f.

📒 Files selected for processing (1)
  • backend/core/models.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/core/models.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: test (3.12)
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: build (3.12)
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: build_community_frontend


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 776140c and c2e3d87.

📒 Files selected for processing (2)
  • backend/core/models.py (3 hunks)
  • backend/core/views.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/core/views.py (1)
backend/core/models.py (2)
  • compute_requirement_assessments_results (5161-5249)
  • applied_controls (5979-5980)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: build (3.12)
  • GitHub Check: build_community_frontend
  • GitHub Check: enterprise-startup-docker-compose-test
  • GitHub Check: build_enterprise_frontend
  • GitHub Check: startup-functional-test (3.12)
  • GitHub Check: enterprise-startup-functional-test (3.12)
  • GitHub Check: startup-docker-compose-test
  • GitHub Check: test (3.12)
🔇 Additional comments (3)
backend/core/models.py (3)

5161-5165: LGTM! Method signature enhanced to return source mapping metadata.

The updated signature correctly returns both the computed assessments and a dict tracking which source assessments contributed to each. This enables the caller to retrieve all related baseline objects (evidences, applied_controls) from the appropriate source assessments.


5230-5232: LGTM! Correctly captures all source assessment IDs.

The code properly stores ALL source requirement assessment IDs that contributed to the mapping, not just the one selected for inference. This is the intended behavior to enable the caller to aggregate evidences and applied_controls from all related baseline assessments.


5249-5249: LGTM! Return statement matches updated signature.

The method correctly returns the tuple as declared in the signature. Per the AI summary, the caller in backend/core/views.py has been updated to unpack and use both values appropriately.

Comment on lines 5989 to 5997

```python
                    (
                        requirement_assessment,
                        selected_baseline_ra.evidences.all(),
                        itertools.chain.from_iterable(
                            requirement_assessment.applied_controls.all()
                            for requirement_assessment in baseline_requirement_assessments
                        ),
                    )
                )
```

⚠️ Potential issue | 🔴 Critical

Fix generator capturing the wrong applied controls

itertools.chain.from_iterable(...) captures baseline_requirement_assessments by reference. Because you append that generator to m2m_operations and only iterate it after the outer loop finishes, every stored generator runs with baseline_requirement_assessments set to whatever the last iteration assigned. Earlier requirement assessments therefore end up copying the applied controls of the final baseline mapping, which is a regression. Materialize the union while you’re still inside the loop (e.g. build a list or set of control IDs) and store that concrete collection instead of the lazy generator.
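The underlying hazard is Python's late binding: a deferred object reads the variables it closes over when it is consumed, not when it is created. A minimal repro of the same pattern, using a lambda and hypothetical data rather than the PR's code:

```python
# Each closure captures the loop variable `batch` by reference, not by value,
# so consuming the closures after the loop shows only the final assignment.
stored = []
for batch in ([1, 2], [3, 4]):
    stored.append(lambda: list(batch))  # lazy: `batch` is read at call time

# Deferred consumption, mirroring how m2m_operations is iterated after the loop:
results = [f() for f in stored]
print(results)  # [[3, 4], [3, 4]] -- the first batch's values are lost
```

Materializing the values inside the loop (a list, set, or tuple built per iteration) avoids this, which is what the suggested fix below does with a set of control IDs.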

Suggested fix:

```diff
-                    m2m_operations.append(
-                        (
-                            requirement_assessment,
-                            selected_baseline_ra.evidences.all(),
-                            itertools.chain.from_iterable(
-                                requirement_assessment.applied_controls.all()
-                                for requirement_assessment in baseline_requirement_assessments
-                            ),
-                        )
-                    )
+                    baseline_control_ids = {
+                        ac.id
+                        for baseline_ra in baseline_requirement_assessments
+                        for ac in baseline_ra.applied_controls.all()
+                    }
+                    m2m_operations.append(
+                        (
+                            requirement_assessment,
+                            selected_baseline_ra.evidences.all(),
+                            baseline_control_ids,
+                        )
+                    )
@@
-                for assessment, evidences, controls in m2m_operations:
-                    assessment.evidences.add(*[ev.id for ev in evidences])
-                    assessment.applied_controls.add(*[ac.id for ac in controls])
+                for assessment, evidences, control_ids in m2m_operations:
+                    assessment.evidences.add(*[ev.id for ev in evidences])
+                    if control_ids:
+                        assessment.applied_controls.add(*control_ids)
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
                    baseline_control_ids = {
                        ac.id
                        for baseline_ra in baseline_requirement_assessments
                        for ac in baseline_ra.applied_controls.all()
                    }
                    m2m_operations.append(
                        (
                            requirement_assessment,
                            selected_baseline_ra.evidences.all(),
                            baseline_control_ids,
                        )
                    )
```
🤖 Prompt for AI Agents

```
In backend/core/views.py around lines 5989 to 5997 the code appends an
itertools.chain.from_iterable(...) generator that closes over
baseline_requirement_assessments, causing all stored generators to reflect the
final loop value; instead, inside the loop materialize the applied-controls
collection (e.g., build a list or set of control IDs or control instances from
requirement_assessment.applied_controls.all() or from the baseline mapping) and
append that concrete collection to m2m_operations so later iteration uses the
correct, per-iteration controls rather than a late-bound generator.
```

@Mohamed-Hacene Mohamed-Hacene changed the title feat: Add all applied controls during mapping feat: add all applied controls during mapping Oct 8, 2025
