feat: add optional heuristic deployment maturity indicator #5358

ravii4 wants to merge 6 commits into netbirdio:main from
Conversation
📝 Walkthrough

Replaces NewAPIHandler's positional parameters with a single APIHandlerDeps struct, adds an optional deployment maturity feature (config flag, heuristic, API field, tests, OpenAPI/types update), wires the feature into account handlers, and adds a frontend card component.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (UI / API consumer)
    participant API as API Handler
    participant AccountMgr as AccountManager
    participant PeersMgr as PeersManager
    participant Policies as PoliciesManager
    rect rgba(200,200,255,0.5)
        Client->>API: GET /api/accounts
        API->>AccountMgr: Fetch account(s) metadata
        API->>PeersMgr: Fetch peers for account
        API->>Policies: Fetch policy count for account
    end
    rect rgba(200,255,200,0.5)
        API->>API: EvaluateDeploymentMaturity(peers, policies, activeDays)
        API->>Client: Respond with account(s) + optional deployment_maturity
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
🧹 Nitpick comments (6)
shared/management/http/api/openapi.yml (1)
253-257: Mark `deployment_maturity` as read-only to reflect server-computed data.

This field is derived heuristically and not client-settable, so adding `readOnly: true` clarifies API intent for consumers.

✅ Suggested schema tweak

```diff
   deployment_maturity:
     description: Optional deployment maturity stage derived from peer count, policy count, and account age. This field is informational only and does not affect account behavior.
     type: string
     enum: [ exploration, functional, operational, production ]
     example: operational
+    readOnly: true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/management/http/api/openapi.yml` around lines 253 - 257, The deployment_maturity property in the OpenAPI schema is server-computed and should be marked read-only; update the deployment_maturity schema to include readOnly: true so clients know they cannot set it (keep the description, type, enum, and example intact). Target the deployment_maturity property definition in the OpenAPI YAML (the schema where description/type/enum/example are defined) and add readOnly: true.

proxy/web/src/components/DeploymentMaturityCard.tsx (2)
6-6: Consider deriving `DeploymentMaturityStage` from generated API types.

This local type duplicates the enum values defined server-side. If the API types are code-generated (e.g., from OpenAPI), importing or deriving from them would prevent drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@proxy/web/src/components/DeploymentMaturityCard.tsx` at line 6, The local string union type DeploymentMaturityStage duplicates server-side enum values—replace it by importing or deriving the type from the generated API types (e.g., the OpenAPI/SDK types) so it stays in sync; find usages of DeploymentMaturityStage in DeploymentMaturityCard and change the type annotation to the imported/generated enum/type (and update any runtime checks or mapping to use the generated values) to avoid duplication and drift.
17-33: Hoist static lookup objects outside the component.
`titleByStage` and `descriptionByStage` are constant and don't depend on props or state, yet they're re-allocated on every render. Move them to module scope.

♻️ Proposed refactor

```diff
+const titleByStage: Record<DeploymentMaturityStage, string> = {
+  exploration: "Exploration deployment",
+  functional: "Functional deployment",
+  operational: "Operational deployment",
+  production: "Production deployment",
+};
+
+const descriptionByStage: Record<DeploymentMaturityStage, string> = {
+  exploration:
+    "This deployment is suitable for initial testing. Review core setup, access policies, and peer coverage before relying on it for daily work.",
+  functional:
+    "This deployment covers basic connectivity. Review routing, DNS, and access policies to ensure they match how your team actually works.",
+  operational:
+    "This deployment supports day-to-day use. Periodically review audit events, policy changes, and onboarding flows to keep it predictable.",
+  production:
+    "This deployment is ready for sustained production use. Keep an eye on change management, observability, and backup procedures.",
+};
+
 export const DeploymentMaturityCard = ({ stage }: DeploymentMaturityCardProps) => {
   if (!stage) {
     return null;
   }

-  const titleByStage: Record<DeploymentMaturityStage, string> = {
-    exploration: "Exploration deployment",
-    ...
-  };
-
-  const descriptionByStage: Record<DeploymentMaturityStage, string> = {
-    exploration: "...",
-    ...
-  };
-
   const titleText = titleByStage[stage];
   const descriptionText = descriptionByStage[stage];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@proxy/web/src/components/DeploymentMaturityCard.tsx` around lines 17 - 33, Move the two constant lookup objects titleByStage and descriptionByStage out of the DeploymentMaturityCard component and into module scope so they are allocated once; declare them at the top-level of the file (keeping their type Record<DeploymentMaturityStage, string> and the same keys/values), remove the in-component declarations, and ensure the component uses the top-level titleByStage and descriptionByStage identifiers unchanged.

management/server/http/handlers/accounts/accounts_handler.go (1)
141-171: Two extra DB round-trips on every account request when enabled.
`GetPeers` and `ListPolicies` are called on every `GET /accounts` and `PUT /accounts/{id}` when the flag is on. For large deployments these can be expensive. The inline NOTE (lines 150-152) already acknowledges this — consider caching the maturity result (e.g., in-memory with a short TTL or recompute on peer/policy mutations, as in the sketch below) before enabling this in production environments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/http/handlers/accounts/accounts_handler.go` around lines 141 - 171, computeDeploymentMaturity currently triggers two DB calls (h.accountManager.GetPeers and h.accountManager.ListPolicies) on every account request when enableDeploymentMaturity is true; change this to cache the computed maturity per account to avoid repeated round-trips: add an in-memory cache field on the handler (map or sync.Map with TTL/expiry and mutex or use an existing cache lib), have computeDeploymentMaturity first check the cache by accountID and return cached value if fresh, otherwise perform the GetPeers/ListPolicies calls, store the computed value with an expiry, and ensure cache invalidation/update when peers or policies mutate (hook into peer/policy mutation handlers to delete or refresh the accountID entry).
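For illustration, a minimal sketch of the TTL-cache idea suggested above; all names here (`maturityCache`, `newMaturityCache`, the `compute` callback) are hypothetical and not part of the existing codebase:

```go
package accounts

import (
	"sync"
	"time"
)

// maturityCache memoizes the computed deployment maturity per account so
// repeated GET /accounts calls skip the GetPeers/ListPolicies queries.
type maturityCache struct {
	mu      sync.Mutex
	entries map[string]maturityEntry
	ttl     time.Duration
}

type maturityEntry struct {
	stage     string
	expiresAt time.Time
}

func newMaturityCache(ttl time.Duration) *maturityCache {
	return &maturityCache{entries: make(map[string]maturityEntry), ttl: ttl}
}

// get returns the cached stage for accountID, or recomputes it via compute
// when the entry is missing or expired.
func (c *maturityCache) get(accountID string, compute func() (string, error)) (string, error) {
	c.mu.Lock()
	if e, ok := c.entries[accountID]; ok && time.Now().Before(e.expiresAt) {
		c.mu.Unlock()
		return e.stage, nil
	}
	c.mu.Unlock()

	stage, err := compute() // runs the expensive GetPeers/ListPolicies path
	if err != nil {
		return "", err
	}

	c.mu.Lock()
	c.entries[accountID] = maturityEntry{stage: stage, expiresAt: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return stage, nil
}

// invalidate drops the entry for accountID; call it from peer/policy
// mutation handlers so the next read recomputes the stage.
func (c *maturityCache) invalidate(accountID string) {
	c.mu.Lock()
	delete(c.entries, accountID)
	c.mu.Unlock()
}
```

A short TTL bounds the staleness window without wiring invalidation into every mutation path; the `invalidate` hook covers the stricter variant.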
management/server/types/deployment_maturity.go (1)

18-31: Consider extracting magic numbers into named constants.

The threshold values (3, 8, 5 for peers; 1, 3, 2 for policies; 14 for days) are scattered through the conditionals. Named constants would make the heuristic easier to tune and reason about.
♻️ Proposed refactor
```diff
+const (
+	minPeersForFunctional     = 3
+	minPoliciesForFunctional  = 1
+	minPeersForOperational    = 5
+	minPoliciesForOperational = 2
+	minPeersForProduction     = 8
+	minPoliciesForProduction  = 3
+	minDaysForProduction      = 14
+)
+
 func EvaluateDeploymentMaturity(peerCount int, policyCount int, activeDays int) DeploymentMaturity {
-	if peerCount < 3 || policyCount < 1 {
+	if peerCount < minPeersForFunctional || policyCount < minPoliciesForFunctional {
 		return DeploymentMaturityExploration
 	}
-	if peerCount >= 8 && policyCount >= 3 && activeDays >= 14 {
+	if peerCount >= minPeersForProduction && policyCount >= minPoliciesForProduction && activeDays >= minDaysForProduction {
 		return DeploymentMaturityProduction
 	}
-	if peerCount >= 5 && policyCount >= 2 {
+	if peerCount >= minPeersForOperational && policyCount >= minPoliciesForOperational {
 		return DeploymentMaturityOperational
 	}
 	return DeploymentMaturityFunctional
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/types/deployment_maturity.go` around lines 18 - 31, The EvaluateDeploymentMaturity function uses hard-coded threshold numbers (3, 8, 5 for peerCount; 1, 3, 2 for policyCount; 14 for activeDays); extract these magic numbers into well-named package-level constants (e.g., minPeersExploration, minPeersOperational, minPeersProduction, minPoliciesExploration, minPoliciesOperational, minPoliciesProduction, minActiveDaysProduction) and replace the numeric literals in EvaluateDeploymentMaturity with those constants so the heuristic is easier to read and tune and to centralize thresholds for tests and documentation.
management/server/types/deployment_maturity_test.go (1)

5-24: Use `t.Run` with named subtests for better diagnostics and add boundary cases.

The current test lacks subtest names, making failures hard to identify. Also, only 5 cases are tested — boundary conditions at the exact thresholds are missing (e.g., `{3, 1, 0}` for the Functional boundary, `{8, 3, 13}` for the Production just-miss, `{5, 2, 0}` for the Operational boundary, negative inputs).

♻️ Proposed improvement
```diff
 func TestEvaluateDeploymentMaturity(t *testing.T) {
 	tests := []struct {
+		name     string
 		peers    int
 		policies int
 		days     int
 		expected DeploymentMaturity
 	}{
-		{1, 0, 1, DeploymentMaturityExploration},
-		{3, 1, 1, DeploymentMaturityFunctional},
-		{5, 2, 5, DeploymentMaturityOperational},
-		{8, 3, 14, DeploymentMaturityProduction},
-		{8, 3, 5, DeploymentMaturityOperational},
+		{"below_peer_threshold", 1, 0, 1, DeploymentMaturityExploration},
+		{"below_policy_threshold", 5, 0, 30, DeploymentMaturityExploration},
+		{"functional_boundary", 3, 1, 0, DeploymentMaturityFunctional},
+		{"functional_typical", 3, 1, 1, DeploymentMaturityFunctional},
+		{"operational_boundary", 5, 2, 0, DeploymentMaturityOperational},
+		{"operational_typical", 5, 2, 5, DeploymentMaturityOperational},
+		{"production_boundary", 8, 3, 14, DeploymentMaturityProduction},
+		{"production_miss_days", 8, 3, 13, DeploymentMaturityOperational},
+		{"high_peers_low_days", 8, 3, 5, DeploymentMaturityOperational},
 	}
 	for _, tt := range tests {
-		stage := EvaluateDeploymentMaturity(tt.peers, tt.policies, tt.days)
-		if stage != tt.expected {
-			t.Errorf("expected %s, got %s", tt.expected, stage)
-		}
+		t.Run(tt.name, func(t *testing.T) {
+			stage := EvaluateDeploymentMaturity(tt.peers, tt.policies, tt.days)
+			if stage != tt.expected {
+				t.Errorf("EvaluateDeploymentMaturity(%d, %d, %d) = %s, want %s",
+					tt.peers, tt.policies, tt.days, stage, tt.expected)
+			}
+		})
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/types/deployment_maturity_test.go` around lines 5 - 24, Replace the table-driven loop in TestEvaluateDeploymentMaturity with named subtests using t.Run so failures show which case failed (use a descriptive name per case like "peers=3_policies=1_days=0"), and expand the tests slice to include boundary and edge cases such as {3,1,0} (Functional boundary), {8,3,13} (just-miss Production), {5,2,0} (Operational boundary) and some negative inputs to validate robustness; keep calling EvaluateDeploymentMaturity for each subtest and assert equality with the expected DeploymentMaturity so each case reports independently.
This constructor already exceeded the parameter threshold before this PR. Refactoring NewAPIHandler into a dependency struct would be a larger architectural change and is outside the scope of this PR.

license signed
Actionable comments posted: 1
🧹 Nitpick comments (1)
management/server/http/handler.go (1)
68-92: Consider adding a `Validate()` method (or early nil guards) to distinguish required vs optional deps.

The struct has 22 fields with no annotation or enforcement separating required fields (`AccountManager`, `AuthManager`, `AppMetrics`, etc.) from optional ones (`ReverseProxyManager`, `ProxyGRPCServer`, etc.). A misconfigured caller will see a panic deep inside handler initialization rather than a clear error at the call site.

♻️ Suggested addition
```diff
+// Validate checks that all required fields are non-nil.
+func (d APIHandlerDeps) Validate() error {
+	if d.AccountManager == nil {
+		return fmt.Errorf("AccountManager is required")
+	}
+	if d.AuthManager == nil {
+		return fmt.Errorf("AuthManager is required")
+	}
+	if d.AppMetrics == nil {
+		return fmt.Errorf("AppMetrics is required")
+	}
+	// … other required fields
+	return nil
+}
```

Then call `deps.Validate()` at the top of `NewAPIHandler`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/http/handler.go` around lines 68 - 92, Add a Validate method on APIHandlerDeps that checks required fields (e.g., AccountManager, AuthManager, AppMetrics, PermissionsManager, NetworksManager, ResourceManager, etc.) for nil and returns a descriptive error listing missing deps; keep truly optional fields (e.g., ReverseProxyManager, ProxyGRPCServer, ReverseProxyDomainManager) out of the failure conditions. Update NewAPIHandler to call deps.Validate() immediately and propagate the error so callers get a clear failure instead of panics; additionally, where appropriate add defensive nil-guards inside methods that rely on optional deps (e.g., ReverseProxyManager, ProxyGRPCServer) to avoid runtime panics when those are intentionally unset.
```go
	if deps.ReverseProxyManager != nil && deps.ReverseProxyDomainManager != nil {
		reverseproxymanager.RegisterEndpoints(deps.ReverseProxyManager, *deps.ReverseProxyDomainManager, deps.ReverseProxyAccessLogs, router)
	}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find RegisterEndpoints function in reverseproxymanager
rg -n "func RegisterEndpoints" --type=go -A 20 -B 2
```

Repository: netbirdio/netbird
Length of output: 11526
deps.ReverseProxyAccessLogs must be checked for nil before passing to RegisterEndpoints.
The guard on line 202 ensures ReverseProxyManager and ReverseProxyDomainManager are non-nil, but deps.ReverseProxyAccessLogs is passed unchecked to RegisterEndpoints, which passes it directly to accesslogsmanager.RegisterEndpoints (line 32 of manager/api.go). The accesslogs handler stores this manager without validation, and calling the /events/proxy endpoint with a nil manager will panic when GetAllAccessLogs is invoked.
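A sketch of the tightened guard under the assumption that the surrounding registration code stays as shown above; only the added `deps.ReverseProxyAccessLogs != nil` check is new:

```go
	// Register the reverse-proxy endpoints only when every dependency they use
	// is non-nil, including the previously unchecked access-logs manager.
	if deps.ReverseProxyManager != nil && deps.ReverseProxyDomainManager != nil && deps.ReverseProxyAccessLogs != nil {
		reverseproxymanager.RegisterEndpoints(deps.ReverseProxyManager, *deps.ReverseProxyDomainManager, deps.ReverseProxyAccessLogs, router)
	}
```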
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@management/server/http/handler.go` around lines 202 - 204, The call to
reverseproxymanager.RegisterEndpoints passes deps.ReverseProxyAccessLogs without
nil-checking, which can lead to a panic when accesslogsmanager.GetAllAccessLogs
is later invoked; update the guard in the handler so that RegisterEndpoints is
only called when deps.ReverseProxyManager, deps.ReverseProxyDomainManager, and
deps.ReverseProxyAccessLogs are all non-nil (or alternatively pass a safe
non-nil stub), i.e. add deps.ReverseProxyAccessLogs != nil to the existing if
condition that surrounds reverseproxymanager.RegisterEndpoints so the access
logs manager is validated before being stored/used.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@management/server/http/handler.go`:
- Around line 253-261: The call to reverseproxymanager.RegisterEndpoints in
registerReverseProxyAndOAuth passes deps.ReverseProxyAccessLogs without checking
it for nil; update the guard to ensure deps.ReverseProxyAccessLogs is non-nil
(in addition to deps.ReverseProxyManager and deps.ReverseProxyDomainManager)
before invoking reverseproxymanager.RegisterEndpoints so it never receives a nil
access-logs dependency (or alternatively handle a nil-safe value inside
RegisterEndpoints), referencing registerReverseProxyAndOAuth,
APIHandlerDeps.ReverseProxyAccessLogs and reverseproxymanager.RegisterEndpoints
to locate the change.
Force-pushed from cbad720 to 0299073.
🧹 Nitpick comments (3)
proxy/web/src/components/DeploymentMaturityCard.tsx (1)
17-33: Consider hoisting static lookup maps outside the component.
`titleByStage` and `descriptionByStage` are constant objects recreated on every render. Moving them to module scope avoids unnecessary allocations.

♻️ Suggested change

```diff
+const titleByStage: Record<DeploymentMaturityStage, string> = {
+  exploration: "Exploration deployment",
+  functional: "Functional deployment",
+  operational: "Operational deployment",
+  production: "Production deployment",
+};
+
+const descriptionByStage: Record<DeploymentMaturityStage, string> = {
+  exploration:
+    "This deployment is suitable for initial testing. Review core setup, access policies, and peer coverage before relying on it for daily work.",
+  functional:
+    "This deployment covers basic connectivity. Review routing, DNS, and access policies to ensure they match how your team actually works.",
+  operational:
+    "This deployment supports day-to-day use. Periodically review audit events, policy changes, and onboarding flows to keep it predictable.",
+  production:
+    "This deployment is ready for sustained production use. Keep an eye on change management, observability, and backup procedures.",
+};
+
 export const DeploymentMaturityCard = ({ stage }: DeploymentMaturityCardProps) => {
   if (!stage) {
     return null;
   }

-  const titleByStage: Record<DeploymentMaturityStage, string> = {
-    exploration: "Exploration deployment",
-    functional: "Functional deployment",
-    operational: "Operational deployment",
-    production: "Production deployment",
-  };
-
-  const descriptionByStage: Record<DeploymentMaturityStage, string> = {
-    exploration:
-      "This deployment is suitable for initial testing. ...",
-    functional:
-      "This deployment covers basic connectivity. ...",
-    operational:
-      "This deployment supports day-to-day use. ...",
-    production:
-      "This deployment is ready for sustained production use. ...",
-  };
-
   const titleText = titleByStage[stage];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@proxy/web/src/components/DeploymentMaturityCard.tsx` around lines 17 - 33, Move the two static lookup objects titleByStage and descriptionByStage out of the DeploymentMaturityCard component into module scope as top-level constants (e.g., declare const titleByStage: Record<DeploymentMaturityStage,string> = ... and similarly for descriptionByStage) so they are created once, then update the component to reference these top-level constants without changing their keys or values; ensure the symbol names remain titleByStage and descriptionByStage so existing usages inside DeploymentMaturityCard continue to work.
management/server/types/deployment_maturity_test.go (1)

5-24: Use `t.Run` subtests and include inputs in error messages for easier debugging.

Without subtests, a failure reports only expected vs got, with no indication of which case failed. Adding names also lets you run individual cases with `-run`.

♻️ Suggested improvement
```diff
 func TestEvaluateDeploymentMaturity(t *testing.T) {
 	tests := []struct {
+		name     string
 		peers    int
 		policies int
 		days     int
 		expected DeploymentMaturity
 	}{
-		{1, 0, 1, DeploymentMaturityExploration},
-		{3, 1, 1, DeploymentMaturityFunctional},
-		{5, 2, 5, DeploymentMaturityOperational},
-		{8, 3, 14, DeploymentMaturityProduction},
-		{8, 3, 5, DeploymentMaturityOperational},
+		{"below_min_peers", 1, 0, 1, DeploymentMaturityExploration},
+		{"functional_threshold", 3, 1, 1, DeploymentMaturityFunctional},
+		{"operational_threshold", 5, 2, 5, DeploymentMaturityOperational},
+		{"production_threshold", 8, 3, 14, DeploymentMaturityProduction},
+		{"production_peers_but_few_days", 8, 3, 5, DeploymentMaturityOperational},
 	}
 	for _, tt := range tests {
-		stage := EvaluateDeploymentMaturity(tt.peers, tt.policies, tt.days)
-		if stage != tt.expected {
-			t.Errorf("expected %s, got %s", tt.expected, stage)
-		}
+		t.Run(tt.name, func(t *testing.T) {
+			stage := EvaluateDeploymentMaturity(tt.peers, tt.policies, tt.days)
+			if stage != tt.expected {
+				t.Errorf("EvaluateDeploymentMaturity(%d, %d, %d) = %s, want %s",
+					tt.peers, tt.policies, tt.days, stage, tt.expected)
+			}
+		})
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/types/deployment_maturity_test.go` around lines 5 - 24, Update TestEvaluateDeploymentMaturity to use t.Run subtests: for each case in the tests slice call t.Run with a descriptive name that includes the inputs (e.g., fmt.Sprintf("peers=%d_policies=%d_days=%d", peers, policies, days)), invoke EvaluateDeploymentMaturity inside the subtest, and on failure call t.Errorf including the input values as well as expected and actual (so the error message shows peers, policies, days, expected, got). Ensure you reference the existing TestEvaluateDeploymentMaturity test table and the EvaluateDeploymentMaturity function when making the change.
management/server/http/handlers/accounts/accounts_handler.go (1)

141-171: Maturity computation adds two queries per account request — acceptable when gated, but worth monitoring.

The inline comment (lines 150-152) correctly flags the per-request overhead. Since the feature is flag-gated and disabled by default, this is fine for now. If enabled for accounts with large peer/policy sets, the cost of `GetPeers` + `ListPolicies` on every GET/PUT could become noticeable.

Consider caching the maturity result (e.g., recompute on peer/policy mutation and store alongside account metadata) if usage telemetry shows this becoming a hot path; a sketch follows below.
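A rough sketch of the recompute-on-mutation variant; `MaturityStore` and its methods are hypothetical illustrations layered on the existing `EvaluateDeploymentMaturity`, not actual NetBird types:

```go
package types

import "sync"

// MaturityStore keeps the last computed stage per account so reads never
// query peers or policies; mutations trigger a recompute instead.
// Illustrative structure, not part of the existing codebase.
type MaturityStore struct {
	mu     sync.RWMutex
	stages map[string]DeploymentMaturity
}

func NewMaturityStore() *MaturityStore {
	return &MaturityStore{stages: make(map[string]DeploymentMaturity)}
}

// Stage returns the cached stage; ok is false until the first recompute.
func (s *MaturityStore) Stage(accountID string) (DeploymentMaturity, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	stage, ok := s.stages[accountID]
	return stage, ok
}

// Recompute is meant to be called from peer/policy mutation paths with
// fresh counts, keeping the read path free of extra queries.
func (s *MaturityStore) Recompute(accountID string, peerCount, policyCount, activeDays int) {
	stage := EvaluateDeploymentMaturity(peerCount, policyCount, activeDays)
	s.mu.Lock()
	s.stages[accountID] = stage
	s.mu.Unlock()
}
```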
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/http/handlers/accounts/accounts_handler.go` around lines 141 - 171, computeDeploymentMaturity currently issues GetPeers and ListPolicies on every account request when enableDeploymentMaturity is true, causing two extra queries per request; change this to read a cached maturity value from account metadata (meta) when present and only call EvaluateDeploymentMaturity + persist the computed value on peer/policy mutation events (i.e., in accountManager methods that add/remove peers or create/update/delete policies). Specifically, update computeDeploymentMaturity to prefer a cached field on types.AccountMeta (e.g., Maturity or similar) and avoid calling h.accountManager.GetPeers/ListPolicies during reads; implement recomputation logic that calls types.EvaluateDeploymentMaturity and writes the new value back to account metadata from the mutation handlers so subsequent calls to computeDeploymentMaturity return the cached string without extra queries.
…and account response
…oyment stage display
…struct (fix Sonar S107)
Force-pushed from 0299073 to 9039ebf.



This PR introduces an optional, heuristic `deployment_maturity` indicator for self-hosted deployments.

The indicator provides an informational signal about deployment stage (exploration → functional → operational → production) derived from:

- peer count
- policy count
- account age (active days)

The field is:

- optional and disabled by default (gated behind a config flag)
- informational only
- computed per request, not persisted

(No routing, policy, enforcement, or persistence behavior is modified.)
What’s Included
1. Core Evaluation Logic
`EvaluateDeploymentMaturity` in `management/server/types`

(The logic is deterministic and purely heuristic.)
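For reference, a sketch of the evaluation logic consistent with the thresholds shown in the review diffs; the string-backed type mirrors the OpenAPI enum, but treat this as illustrative rather than the canonical source:

```go
package types

// DeploymentMaturity is the informational stage reported for an account.
type DeploymentMaturity string

const (
	DeploymentMaturityExploration DeploymentMaturity = "exploration"
	DeploymentMaturityFunctional  DeploymentMaturity = "functional"
	DeploymentMaturityOperational DeploymentMaturity = "operational"
	DeploymentMaturityProduction  DeploymentMaturity = "production"
)

// EvaluateDeploymentMaturity maps peer count, policy count, and account age
// (in active days) to a stage. Thresholds match the values discussed in review.
func EvaluateDeploymentMaturity(peerCount int, policyCount int, activeDays int) DeploymentMaturity {
	if peerCount < 3 || policyCount < 1 {
		return DeploymentMaturityExploration
	}
	if peerCount >= 8 && policyCount >= 3 && activeDays >= 14 {
		return DeploymentMaturityProduction
	}
	if peerCount >= 5 && policyCount >= 2 {
		return DeploymentMaturityOperational
	}
	return DeploymentMaturityFunctional
}
```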
2. API Wiring (Flag-Gated)
New config flag: `EnableDeploymentMaturity`

When enabled:

- `deployment_maturity` is included in the Account response

When disabled:

- the field is omitted entirely

Also:

- OpenAPI schema updated
- Generated types updated
(The field is optional and does not change existing API contracts.)
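A condensed sketch of how the flag gating can work end to end; the handler and response types below are pared-down stand-ins (only a `computeDeploymentMaturity`-style helper is hinted at in the review), not the generated API types:

```go
package accounts

import "context"

// accountsHandler is a pared-down stand-in for the real handler; only the
// pieces relevant to the flag-gated field are shown.
type accountsHandler struct {
	enableDeploymentMaturity bool
	computeMaturity          func(ctx context.Context, accountID string) (string, error)
}

// accountResponse is an illustrative stand-in for the generated API type;
// the pointer plus omitempty keeps the field out of the JSON when nil.
type accountResponse struct {
	ID                 string  `json:"id"`
	DeploymentMaturity *string `json:"deployment_maturity,omitempty"`
}

// toResponse attaches the maturity stage only when the feature flag is on,
// so clients with the flag off see an unchanged payload.
func (h *accountsHandler) toResponse(ctx context.Context, accountID string) (*accountResponse, error) {
	resp := &accountResponse{ID: accountID}
	if h.enableDeploymentMaturity {
		stage, err := h.computeMaturity(ctx, accountID)
		if err != nil {
			return nil, err
		}
		resp.DeploymentMaturity = &stage
	}
	return resp, nil
}
```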
3. UI Component
New `DeploymentMaturityCard` component

Backwards Compatibility

- The field is additive and optional; existing API contracts are unchanged, and the field is omitted entirely when the flag is disabled.
Testing
Validated locally using:

```bash
go fmt ./...
go vet ./management/server/...
go test ./management/server/...
```