From 56f4c28798538667514b387a29d8e86a6ca48186 Mon Sep 17 00:00:00 2001
From: Damian Kopyto
Date: Fri, 7 Nov 2025 17:36:24 +0000
Subject: [PATCH 01/17] EIM NB API and CLI decomp

---
 .../eim-nbapi-cli-decomposition.md | 175 ++++++++++++++++++
 1 file changed, 175 insertions(+)
 create mode 100644 design-proposals/eim-nbapi-cli-decomposition.md

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
new file mode 100644
index 000000000..a9322e84a
--- /dev/null
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -0,0 +1,175 @@
# Design Proposal: Exposing only the required North Bound APIs and CLI commands for the workflow as part of EIM decomposition

Author(s): EIM-Core team

Last updated: 7/11/25

## Abstract

In the context of EIM decomposition, the North Bound API service should be treated as an independent, interchangeable module.
The [EIM proposal for modular decomposition](https://github.com/open-edge-platform/edge-manageability-framework/blob/main/design-proposals/eim-modular-decomposition.md) calls out both a need for exposing the full set of EMF APIs and a need for exposing only a subset of APIs, as required by individual workflows taking advantage of a modular architecture. This proposal explores how the APIs can be decomposed and how the decomposed output can be used as a version of the API service module.

## Background and Context

In EMF 2025.2 the API service is deployed via a Helm chart managed by Argo CD. The API service runs in a container started from the API service container image. The API is built using the OpenAPI spec.
There are multiple levels of APIs currently available, with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated)

The list of domain APIs includes:

- Catalog and Catalog utilities APIs
- App deployment manager and app resource manager APIs
- Cluster APIs
- EIM APIs
- Alert Monitoring APIs
- MPS and RPS APIs
- Metadata broker and Tenancy APIs

There are two levels to the API decomposition:

- Decomposition of the above domain levels
- Decomposition within a domain (i.e. separation at the EIM domain level, where the overall set of APIs includes onboarding/provisioning/day2 APIs but another workflow may support only onboarding/provisioning without day2 support)

The following questions must be answered and investigated:

- How the API service is built currently
- How the API service container image is built currently
- How the API service helm charts are built currently
- What level of decomposition is needed from the required workflows
- How to decompose the API at domain level
- How to decompose the API within domain level
- How to build the various API service versions as per desired workflows using the modular APIs
- How to deliver the various API service versions as per desired workflows
- How to expose the list of available APIs for client consumption (orch-cli)

Uncertainties:

- How does the potential removal of the API gateway affect the exposing of the APIs to the client
- How will the decomposition and availability of APIs within the API service be mapped back to the Inventory and the set of SB APIs.

### Decomposing the release of API service as a module

Once the investigation into how the API service is created today is complete, decisions must be made on how the service will be built and released as a module.

- The build of the API service itself will depend on the results of the "top to bottom" and "bottom to top" decomposition investigations.
+- The individual versions of API service can be packaged as versioned container images: + - apiv2-emf:x.x.x + - apiv2-workflow1:x.x.x + - apiv2-workflow2:x.x.x +- Alternatively if the decomposition does not result in multiple version of the API service the service could be released as same docker image but managed by flags provided to container that alter the behavior of the API service in runtime. +- The API service itself should still be packaged for deployment as a helmchart regardless of deployment via ArgoCD or other medium/technique. Decision should be made if common helmchart is used with override values for container image and other related values (preferred) or individual helmcharts need to be released. + +### Decomposing the API service + +An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various flavours of same API service providing different set of APIs. + +- Preferably the total set of APIs serves as the main source of the API service, and other flavours/subsets are automatically derived from this based on the required functionality. Making the maintenance of the API simple and in one place. +- The APIs service should be decomposed at the domain level meaning that all domains or subset of domains should be available as part of the API service flavour. This should allows us to provide as an example EIM related APIs only as needed by workflow. We know that currently the domains have separate generated OpenAPI specs available as consumed by orch-cli. +- The APIs service should be decomposed within the domain level meaning that only subset of the available APIs may need to be released and/or exposed at API service level. As an example within the EIM domain we may not want to expose the Day 2 functionality for some workflows which currently part of the EIM OpenAPI spec. + +The following are the usual options to decomposing or exposing subsets of APIs. 
+ +- ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF. +- Maintain multiple OpenAPI specification - while possible to create multiple OpenAPI specs, the maintenance of same APIs across specs will be a large burden - still let's keep this option in consideration in terms of auto generating multiple specs from top spec. +- ~~Authentication & Authorization Based Filtering ~~ - this is a no go for us as we do not control the end users of the EMF, and we want to provide tailored modular product for each workflow. +- ~~API Versioning strategy~~ - Creating different API versions for each use-case - too much overhead without benefits similar to maintaining multiple OpenAPI specs. +- ~~Proxy/Middleware Layer~~ - Similar to API Gateway - does not fit our use cases +- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give use the automated approach for creating individual OpenAPI specs for workflows based on labels. +- Other approach to manipulate how a flavour of OpenAPIs spec can be generated from main spec, or how the API service can be build conditionally using same spec. + +### Consuming the APIs from the CLI + +The best approach would be for the EMF to provide a service/endpoint that will communicate which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration and prevent from using non-supported APIs/commands. 
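The login-time gating described above can be sketched as follows. This is a minimal, hypothetical sketch: the discovery response shape, the operation names, and the helper names are all invented for illustration and are not the real EMF or orch-cli APIs.

```python
# Hypothetical sketch: gate CLI commands on a capability list fetched at login.
# The capability names and configuration layout are illustrative only.

SUPPORTED = None  # would be populated from a hypothetical discovery endpoint at login


def login(discovery_response: dict) -> None:
    """Save the set of supported operations reported by the deployed API service."""
    global SUPPORTED
    SUPPORTED = set(discovery_response.get("supported_operations", []))


def guard(operation: str) -> None:
    """Check the saved configuration before a command is executed."""
    if SUPPORTED is None:
        raise RuntimeError("not logged in: capability list unknown")
    if operation not in SUPPORTED:
        raise RuntimeError(f"'{operation}' is not supported by this EMF deployment")


def run_command(operation: str) -> str:
    guard(operation)
    return f"calling {operation}"


# A deployment flavour that only exposes onboarding/provisioning, no day-2 operations:
login({"supported_operations": ["host.list", "host.onboard", "instance.create"]})
print(run_command("host.list"))       # allowed by this flavour
try:
    run_command("os_update_run.list")  # day-2 operation, not in this flavour
except RuntimeError as err:
    print(err)
```

The point of the sketch is that the check happens client-side against a configuration saved once at login, so unsupported commands fail fast without ever hitting the API service.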
+ +# Appendix: OpenAPI Spec Manipulation with Extensions + +This approach uses OpenAPI's extension mechanism (properties starting with `x-`) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. + +## How It Works + +### 1. Adding Custom Extensions to Your OpenAPI Spec + +```yaml +openapi: 3.0.0 +info: + title: My API + version: 1.0.0 + +paths: + /users: + get: + summary: Get all users + x-audience: ["public", "partner"] + x-use-case: ["user-management", "reporting"] + x-access-level: "read" + responses: + '200': + description: Success + + /users/{id}: + get: + summary: Get user by ID + x-audience: ["public", "partner", "internal"] + x-use-case: ["user-management"] + responses: + '200': + description: Success + delete: + summary: Delete user + x-audience: ["internal"] + x-use-case: ["admin"] + x-access-level: "write" + responses: + '204': + description: Deleted + + /admin/analytics: + get: + summary: Get analytics data + x-audience: ["internal"] + x-use-case: ["analytics", "reporting"] + x-sensitive: true + responses: + '200': + description: Analytics data + +components: + schemas: + User: + type: object + x-audience: ["public", "partner", "internal"] + properties: + id: + type: string + name: + type: string + email: + type: string + x-audience: ["internal"] # Email only for internal use + ssn: + type: string + x-audience: ["internal"] + x-sensitive: true + +# Audience-based filtering +x-audience: ["public", "partner", "internal", "admin"] + +# Use case categorization +x-use-case: ["user-management", "reporting", "analytics", "billing"] + +# Access level requirements +x-access-level: "read" | "write" | "admin" + +# Sensitivity marking +x-sensitive: true + +# Client-specific +x-client-type: ["mobile", "web", "api"] + +# Environment restrictions +x-environment: ["production", "staging", "development"] + +# Rate limiting categories +x-rate-limit-tier: "basic" | "premium" | "enterprise" + 
+# Deprecation info +x-deprecated-for: ["internal"] +x-sunset-date: "2024-12-31" \ No newline at end of file From cd5a0a1a2672793c880750329c0dc18f500e8db0 Mon Sep 17 00:00:00 2001 From: Damian Kopyto Date: Fri, 7 Nov 2025 17:38:33 +0000 Subject: [PATCH 02/17] Update eim-nbapi-cli-decomposition.md --- design-proposals/eim-nbapi-cli-decomposition.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index a9322e84a..46336aca5 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -69,7 +69,7 @@ The following are the usual options to decomposing or exposing subsets of APIs. - ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF. - Maintain multiple OpenAPI specification - while possible to create multiple OpenAPI specs, the maintenance of same APIs across specs will be a large burden - still let's keep this option in consideration in terms of auto generating multiple specs from top spec. -- ~~Authentication & Authorization Based Filtering ~~ - this is a no go for us as we do not control the end users of the EMF, and we want to provide tailored modular product for each workflow. +- ~~Authentication & Authorization Based Filtering~~ - this is a no go for us as we do not control the end users of the EMF, and we want to provide tailored modular product for each workflow. - ~~API Versioning strategy~~ - Creating different API versions for each use-case - too much overhead without benefits similar to maintaining multiple OpenAPI specs. 
- ~~Proxy/Middleware Layer~~ - Similar to API Gateway - does not fit our use cases
- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us an automated approach for creating individual OpenAPI specs for workflows based on labels.

From 3cb93a58451b0eaf9cc1463df43bac28493e1b32 Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Wed, 12 Nov 2025 02:40:49 -0800
Subject: [PATCH 03/17] Adding possible improvements to building API spec per scenario

---
 .../eim-nbapi-cli-decomposition.md | 176 +++++++++++++++++-
 1 file changed, 174 insertions(+), 2 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index 46336aca5..cc383055c 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -143,7 +143,7 @@ components:
         type: string
       email:
         type: string
-        x-audience: ["internal"] # Email only for internal use
+        x-audience: ["internal"]
       ssn:
         type: string
         x-audience: ["internal"]
@@ -172,4 +172,176 @@ x-rate-limit-tier: "basic" | "premium" | "enterprise"

# Deprecation info
x-deprecated-for: ["internal"]
-x-sunset-date: "2024-12-31"
\ No newline at end of file
+x-sunset-date: "2024-12-31"
+```

## How NB API is Currently Built

Currently apiv2 (infra-core repository) stores the REST API definitions of services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml.

Content of the api/proto directory - two folders:

- services - API Operations (Service Layer) - this is one file, services.proto, that contains the API operations on all the available resources.
- resources - Data Models (DTOs/Entities) - a separate file per each resource.
Protoc-gen-connect-openapi is the tool that is indirectly used to build the OpenAPI spec - it is configured as a plugin in buf (buf.gen.yaml). The user calls "buf generate" within the "make generate" or "make buf-gen" target. This plugin generates OpenAPI 3.0 specifications directly from the .proto files in the api/proto/ directory.

The following is the current, full buf configuration (buf.gen.yaml):

```yaml
plugins:
  # go - https://pkg.go.dev/google.golang.org/protobuf
  - name: go
    out: internal/pbapi
    opt:
      - paths=source_relative

  # go grpc - https://pkg.go.dev/google.golang.org/grpc
  - name: go-grpc
    out: internal/pbapi
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false

  # go install github.com/sudorandom/protoc-gen-connect-openapi@v0.17.0
  - name: connect-openapi
    path: protoc-gen-connect-openapi
    out: api/openapi
    strategy: all
    opt:
      - format=yaml
      - short-service-tags
      - short-operation-ids
      - path=openapi.yaml

  # grpc-gateway - https://grpc-ecosystem.github.io/grpc-gateway/
  - name: grpc-gateway
    out: internal/pbapi
    opt:
      - paths=source_relative

  # docs - https://github.com/pseudomuto/protoc-gen-doc
  - plugin: doc
    out: docs
    opt: markdown,proto.md
    strategy: all

  - plugin: go-const
    out: internal/pbapi
    path: ["go", "run", "./cmd/protoc-gen-go-const"]
    opt:
      - paths=source_relative
```

The plugin takes as input one full service definition that includes all services (services.proto).
Key Items:

- Input: api/proto/**/*.proto
- Config: buf.gen.yaml, buf.work.yaml, buf.yaml
- Output: openapi.yaml
- Tool: protoc-gen-connect-openapi

Buf also generates:

- the Go code (Go structs, gRPC clients/services) in internal/pbapi
- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go)
- documentation: docs/proto.md

Next, the targets "oapi-patch" and "oapi-banner" are executed on the generated openapi.yaml file:

"make oapi-patch" - post-process: cleans up the generated OpenAPI by removing verbose proto package prefixes (e.g.: resources.compute.v1.HostResource → HostResource)

## Solution 1

Split the services.proto file into multiple files per service, then change the make buf-gen target to process only the services used by the scenario, for example:

```bash
buf generate --path api/proto/services/instance/v1 --path api/proto/services/os/v1
```

This generates the OpenAPI spec openapi.yaml only for the services supported by the particular scenario.

## Solution 2 - more robust

- Generate the full openapi.yaml file with "buf generate" the same way it is done now. buf already generates the spec with the option 'short-service-tags'. This means it adds a tag to each service in the OpenAPI spec matching its service name.
- Write a small filter that parses the spec, selects only the operations of services carrying a certain service tag, and generates a new spec supporting only the particular scenario.

We can add a manifest that stores the list of services per scenario.

## Solution 3

No splitting of services.proto.
This approach uses custom annotations/options to connect services to scenarios.
+ +- define custom option/annotations by extending google.protobuf.ServiceOptions and google.protobuf.MethodOptions, example: + +```go +syntax = "proto3"; +package annotations.common.v1; + +import "google/protobuf/descriptor.proto"; + +// Service-level: applies to the whole service (default) +extend google.protobuf.ServiceOptions { + repeated string scenario = 50001; // e.g., ["scenario-1", "scenario-2"] +} + +// Method-level: override/add per RPC if needed +extend google.protobuf.MethodOptions { + repeated string scenario = 50011; +} +``` +Add the file to api/proto/annotations. + +Use it in api/proto/services/services.proto: + +Service level selection per scenario: +```go +(...) +import "annotations/scenario_annotations.proto"; +(...) +service OSUpdateRun { + option (annotations.common.v1.scenario) = "scenario-1"; + // Get a list of OS Update Runs. + rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) { + option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"}; + } + // Get a specific OS Update Run. + rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) { + option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; + } + // Delete a OS Update Run. + rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) { + option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; + } +} +(...) +``` + +Or: + +Method level selection per scenario: +```go +(...) +import "annotations/scenario_annotations.proto"; +(...) +service OSUpdateRun { + // Get a list of OS Update Runs. + rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) { + option (annotations.common.v1.scenario) = "scenario-1"; + option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"}; + } + // Get a specific OS Update Run. 
  rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) {
    option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"};
  }
  // Delete an OS Update Run.
  rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) {
    option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"};
  }
}
(...)
```

- Use buf generate to generate the full openapi.yaml spec.

- Create and run a filter that reads the generated .pb file containing the new service annotations, takes openapi.yaml as input, and removes all services without the scenario annotation. The filter also takes the scenario name as input and returns a scenario-specific OpenAPI spec.
- OR patch the spec generating tool (protoc-gen-connect-openapi) so it supports the new annotations and includes them in the new full spec - it would read the annotations directly and write x-* fields into the OpenAPI, producing a spec whose fields and services are annotated by a specific scenario. This effectively requires creating a custom plugin - a wrapper of protoc-gen-connect-openapi - that takes the scenario as input and generates an OpenAPI spec per scenario only.
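The filtering step shared by Solution 2 and Solution 3 can be sketched as below. This is a hedged illustration, not the real filter: it operates on a plain dict standing in for a parsed openapi.yaml (a real implementation would load the file with a YAML parser), and the scenario names, service tags, and paths are invented.

```python
# Illustrative sketch of the spec filter from Solution 2: keep only operations
# whose service tag (produced by the short-service-tags option) is listed in a
# per-scenario manifest. All names here are hypothetical.

SCENARIO_MANIFEST = {
    "scenario-1": {"HostService", "InstanceService"},          # no day-2 services
    "scenario-2": {"HostService", "InstanceService", "OSUpdateRun"},
}


def filter_spec(spec: dict, scenario: str) -> dict:
    """Return a copy of the spec containing only operations allowed for the scenario."""
    allowed = SCENARIO_MANIFEST[scenario]
    out = {**spec, "paths": {}}
    for path, operations in spec["paths"].items():
        kept = {method: op for method, op in operations.items()
                if set(op.get("tags", [])) & allowed}
        if kept:  # drop paths whose operations were all filtered out
            out["paths"][path] = kept
    return out


full_spec = {
    "openapi": "3.0.0",
    "paths": {
        "/v2/hosts": {"get": {"tags": ["HostService"]}},
        "/v2/os_update_run": {"get": {"tags": ["OSUpdateRun"]}},
    },
}

scenario_1_spec = filter_spec(full_spec, "scenario-1")
# scenario-1 keeps the host endpoint but drops the day-2 OSUpdateRun endpoint
```

Because the full spec remains the single source of truth and the per-scenario flavours are derived mechanically, the maintenance stays in one place, as the proposal prefers.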
From 3e4e27ccb893cc232aded22f5eb324d298a734e1 Mon Sep 17 00:00:00 2001 From: Damian Kopyto Date: Thu, 13 Nov 2025 16:03:00 +0000 Subject: [PATCH 04/17] Typos --- design-proposals/eim-nbapi-cli-decomposition.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index cc383055c..aa7e49554 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -47,14 +47,14 @@ Uncertainties: ### Decomposing the release of API service as a module -Once the investigation is completed on how the API service is create today decisions must be done on a the service will be build and released as a module. +Once the investigation is completed on how the API service is created today decisions must be done on how the service will be build and released as a module. -- The build of the API service itself will depend on the results of top2bottom and bottom2top decomposition investigations. +- The build of the API service itself will depend on the results of "top to bottom" and "bottom to top" decomposition investigations. - The individual versions of API service can be packaged as versioned container images: - apiv2-emf:x.x.x - apiv2-workflow1:x.x.x - apiv2-workflow2:x.x.x -- Alternatively if the decomposition does not result in multiple version of the API service the service could be released as same docker image but managed by flags provided to container that alter the behavior of the API service in runtime. +- Alternatively if the decomposition does not result in multiple version of the API service the service could be released as same docker image but managed by flags provided to container that alter the behaviour of the API service in runtime. - The API service itself should still be packaged for deployment as a helmchart regardless of deployment via ArgoCD or other medium/technique. 
Decision should be made if common helmchart is used with override values for container image and other related values (preferred) or individual helmcharts need to be released. ### Decomposing the API service @@ -63,7 +63,7 @@ An investigation needs to be conducted into how the API service can be decompose - Preferably the total set of APIs serves as the main source of the API service, and other flavours/subsets are automatically derived from this based on the required functionality. Making the maintenance of the API simple and in one place. - The APIs service should be decomposed at the domain level meaning that all domains or subset of domains should be available as part of the API service flavour. This should allows us to provide as an example EIM related APIs only as needed by workflow. We know that currently the domains have separate generated OpenAPI specs available as consumed by orch-cli. -- The APIs service should be decomposed within the domain level meaning that only subset of the available APIs may need to be released and/or exposed at API service level. As an example within the EIM domain we may not want to expose the Day 2 functionality for some workflows which currently part of the EIM OpenAPI spec. +- The APIs service should be decomposed within the domain level meaning that only subset of the available APIs may need to be released and/or exposed at API service level. As an example within the EIM domain we may not want to expose the Day 2 functionality for some workflows which currently are part of the EIM OpenAPI spec. The following are the usual options to decomposing or exposing subsets of APIs. @@ -72,7 +72,7 @@ The following are the usual options to decomposing or exposing subsets of APIs. - ~~Authentication & Authorization Based Filtering~~ - this is a no go for us as we do not control the end users of the EMF, and we want to provide tailored modular product for each workflow. 
- ~~API Versioning strategy~~ - Creating different API versions for each use-case - too much overhead without benefits similar to maintaining multiple OpenAPI specs. - ~~Proxy/Middleware Layer~~ - Similar to API Gateway - does not fit our use cases -- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give use the automated approach for creating individual OpenAPI specs for workflows based on labels. +- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us the automated approach for creating individual OpenAPI specs for workflows based on labels. - Other approach to manipulate how a flavour of OpenAPIs spec can be generated from main spec, or how the API service can be build conditionally using same spec. ### Consuming the APIs from the CLI @@ -181,11 +181,11 @@ Currently apiv2 (infra-core repository) stores REST API definitions of services Content of api/proto Directory - two folders: services - API Operations (Service Layer) - this is one file services.yaml that contains API operation on all the available resources. -resources - Data Models (DTOs/Entities) - seperate file per each resource. +resources - Data Models (DTOs/Entities) - separate file per each resource. Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec - it is configured as a plugin in buf (buf.gen.yaml). User calls "buf generate" within the "make generate" or "make buf-gen" target. 
This plugin generates OpenAPI 3.0 specifications directly from .proto files in api/proto/ directory. -The following it the current, full buf configuration:# (buf.gen.yaml) +The following is the current, full buf configuration:# (buf.gen.yaml) ```yaml plugins: From 8b12d627a2800ac0cc513c1368d1c0cb8ed554fc Mon Sep 17 00:00:00 2001 From: Damian Kopyto Date: Fri, 14 Nov 2025 16:00:03 +0000 Subject: [PATCH 05/17] Update --- .../eim-nbapi-cli-decomposition.md | 106 ++---------------- 1 file changed, 10 insertions(+), 96 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index aa7e49554..53a226b0c 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -26,15 +26,17 @@ The list of domain APIs include: There are two levels to the API decomposition - Decomposition of above domain levels -- Decomposition within domain (ie. separation at EIM domain level, where overall set of APIS includes onboarding/provisioning/day2 APIs but another workflow may support only onboarding/provisioning without day2 support ) +- Decomposition within domain (ie. 
separation at EIM domain level, where overall set of APIs includes onboarding/provisioning/day2 APIs but another workflow may support only onboarding/provisioning without day2 support ) The following questions must be answered and investigated: - How the API service is build currently + - It is build from a proto definition and code is autogenerated by "buf" tool - [See How NB API is Currently Built](#how-nb-api-is-currently-built) - How the API service container image is build currently - How the API service helm charts are build currently - What level of decomposition is needed from the required workflows - How to decomposition API at domain level + - At domain level the APIs are deployed as separate services - How to decomposition API within domain level - How to build various API service version as per desired workflows using the modular APIs - How to deliver the various API service versions as per desired workflows @@ -62,8 +64,9 @@ Once the investigation is completed on how the API service is created today deci An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various flavours of same API service providing different set of APIs. - Preferably the total set of APIs serves as the main source of the API service, and other flavours/subsets are automatically derived from this based on the required functionality. Making the maintenance of the API simple and in one place. -- The APIs service should be decomposed at the domain level meaning that all domains or subset of domains should be available as part of the API service flavour. This should allows us to provide as an example EIM related APIs only as needed by workflow. We know that currently the domains have separate generated OpenAPI specs available as consumed by orch-cli. 
+- The APIs service should be decomposed at the domain level meaning that all domains or subset of domains should be available as part of the EMF - they are already decomposed/modular at this level and deployed as separate services. - The APIs service should be decomposed within the domain level meaning that only subset of the available APIs may need to be released and/or exposed at API service level. As an example within the EIM domain we may not want to expose the Day 2 functionality for some workflows which currently are part of the EIM OpenAPI spec. +- The APIs service may also need to be decomposed at individual internal service level ie host resource may need to ha different data model across use cases. The following are the usual options to decomposing or exposing subsets of APIs. @@ -77,103 +80,14 @@ The following are the usual options to decomposing or exposing subsets of APIs. ### Consuming the APIs from the CLI -The best approach would be for the EMF to provide a service/endpoint that will communicate which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration and prevent from using non-supported APIs/commands. +The best approach would be for the EMF to provide a service/endpoint that will communicate which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration and prevent from using non-supported APIs/commands. The prevention could happen at command call level where a configuration would be checked before a RUNe command is called for a given command. -# Appendix: OpenAPI Spec Manipulation with Extensions -This approach uses OpenAPI's extension mechanism (properties starting with `x-`) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. +## Summary -## How It Works - -### 1. 
Adding Custom Extensions to Your OpenAPI Spec - -```yaml -openapi: 3.0.0 -info: - title: My API - version: 1.0.0 - -paths: - /users: - get: - summary: Get all users - x-audience: ["public", "partner"] - x-use-case: ["user-management", "reporting"] - x-access-level: "read" - responses: - '200': - description: Success - - /users/{id}: - get: - summary: Get user by ID - x-audience: ["public", "partner", "internal"] - x-use-case: ["user-management"] - responses: - '200': - description: Success - delete: - summary: Delete user - x-audience: ["internal"] - x-use-case: ["admin"] - x-access-level: "write" - responses: - '204': - description: Deleted - - /admin/analytics: - get: - summary: Get analytics data - x-audience: ["internal"] - x-use-case: ["analytics", "reporting"] - x-sensitive: true - responses: - '200': - description: Analytics data - -components: - schemas: - User: - type: object - x-audience: ["public", "partner", "internal"] - properties: - id: - type: string - name: - type: string - email: - type: string - x-audience: ["internal"] - ssn: - type: string - x-audience: ["internal"] - x-sensitive: true - -# Audience-based filtering -x-audience: ["public", "partner", "internal", "admin"] - -# Use case categorization -x-use-case: ["user-management", "reporting", "analytics", "billing"] - -# Access level requirements -x-access-level: "read" | "write" | "admin" - -# Sensitivity marking -x-sensitive: true - -# Client-specific -x-client-type: ["mobile", "web", "api"] - -# Environment restrictions -x-environment: ["production", "staging", "development"] - -# Rate limiting categories -x-rate-limit-tier: "basic" | "premium" | "enterprise" - -# Deprecation info -x-deprecated-for: ["internal"] -x-sunset-date: "2024-12-31" -``` +1. 
Assuming that in phase 1 we will retain Traefik for all workflows, we need to check how the Traefik->EIM mapping will behave and needs to behave when EIM only supports subset of APIs, and establish if the set of API calls supported by Treafik API Gateway maps to the supported APIs in EIM API service subset. +2. We need to make sure that our API supports specific usecases and on the other hand it needs to keep compatibility with other workflows - to achieve that, we may need to make code changes in data models. As an example we need to make sure that mandatory fields are supported accordingly across usecases ie. instance creation will require OSprofile for general usecase, but this may not be true for self installed OSes/Edge Nodes. Collaboration with teams/ADR owners is needed to establish what changes are needed at Resource Manager/Inventory levels to accommodate workflows and how will the changes impact the APIs. +3. We need to understand all the scenarios and required services to be supported. And define the APIs per scenario. ## How NB API is Currently Built From 947dd13a3001f305ae9f09e0e86827092f3818bb Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Fri, 14 Nov 2025 10:08:12 -0800 Subject: [PATCH 06/17] REST API per scenario --- .../eim-nbapi-cli-decomposition.md | 125 ++++-------------- 1 file changed, 23 insertions(+), 102 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index 53a226b0c..e2969014a 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -82,7 +82,6 @@ The following are the usual options to decomposing or exposing subsets of APIs. The best approach would be for the EMF to provide a service/endpoint that will communicate which endpoints/APIs are currently supported by the deployed API service. 
The CLI would then request that information on login, save the configuration and prevent from using non-supported APIs/commands. The prevention could happen at command call level where a configuration would be checked before a RUNe command is called for a given command. - ## Summary 1. Assuming that in phase 1 we will retain Traefik for all workflows, we need to check how the Traefik->EIM mapping will behave and needs to behave when EIM only supports subset of APIs, and establish if the set of API calls supported by Treafik API Gateway maps to the supported APIs in EIM API service subset. @@ -91,15 +90,21 @@ The best approach would be for the EMF to provide a service/endpoint that will c ## How NB API is Currently Built -Currently apiv2 (infra-core repository) stores REST API definitions of services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the openapi spec - openapi.yaml . +Currently, apiv2 (infra-core repository) holds definition of REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the openapi spec - openapi.yaml . + +The input to protoc-gen-connect-openapi comes from: +api/proto/services directory - one file (services.yaml) containing API pperations on all the available resources (Service Layer). +api/proto/resources directory - multiple files with data models - separate file with data model per single inventory resource. + +Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml). -Content of api/proto Directory - two folders: -services - API Operations (Service Layer) - this is one file services.yaml that contains API operation on all the available resources. -resources - Data Models (DTOs/Entities) - separate file per each resource. 
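To make the services/resources split concrete, the following is a hypothetical sketch (the service, message, and file names are invented for illustration; only the URL prefix follows the real spec) of how a service definition under the services directory references a data model from the resources directory:

```go
syntax = "proto3";
package services.host.v1;

import "google/api/annotations.proto";
// Hypothetical data-model file from the resources directory.
import "resources/compute/v1/host.proto";

// Hypothetical service: the operations are defined here, while the
// HostResource message (the data model) lives under resources/.
service HostService {
  // Get a list of hosts.
  rpc ListHosts(ListHostsRequest) returns (ListHostsResponse) {
    option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/hosts"};
  }
}
(...)
```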
+### What is Buf -Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec - it is configured as a plugin in buf (buf.gen.yaml). User calls "buf generate" within the "make generate" or "make buf-gen" target. This plugin generates OpenAPI 3.0 specifications directly from .proto files in api/proto/ directory. +Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier as it replaces messy protoc commands with clean config file. It is a all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management. -The following is the current, full buf configuration:# (buf.gen.yaml) +In infra-core/apiv2, "buf generate" command is executed within the "make generate" or "make buf-gen" target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory. + +The following is the current, full buf configuration (buf.gen.yaml): ```yaml plugins: @@ -146,7 +151,7 @@ plugins: - paths=source_relative ``` -The plugin takes as an input one full openapi spec that includes all services (services.proto). +Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services (services.proto) and outputs the openapi spec in api/openapi. 
Key Items: - Input: api/proto/**/*.proto @@ -154,108 +159,24 @@ Key Items: - Output: openapi.yaml - Tool: protoc-gen-connect-openapi -Buf also generates: +Based on the content of api/proto/ , buf also generates: - the Go code ( Go structs, gRPC clients/services) in internal/pbapi - gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go ) - documentation: docs/proto.md -Next, targets "oapi-patch" and "oapi-banner" are executed on the generated openapi.yaml file: - -"make oapi-patch" - post-process: cleans up the generated OpenAPI by removing verbose proto package prefixes (e.g.: resources.compute.v1.HostResource → HostResource) +## Building REST API Spec per Scenario -## Solution 1 +The following is the proposed solution (draft) to the requirement for decomposistion of EMF, where the exposed REST API is limited to support specific scenario and maintains comatibility with other scenarios. -Split services.yaml file into multiple files per service, then change make buf-gen target to process only services used by the scenario, example: +1. Split services.yaml file into multiple folders/files per service. +2. Maintain a manifest that lists names of REST API services suported by scenario. +3. Expose a new endpoint that list supported services in current scenario. +4. Change "buf-gen" make target to process only services used by the scenario, by using additional parameter "path", list of services need to come from the manifest in step 2). Example to use service1 and service2 services: ```bash -bug generate --path api/proto/services/instance/v1 api/proto/services/os/v1 -``` - -This generates the openapi spec openapi.yaml only for the services supported by particular scenario. - -## Soultion 2 - more robust - -- Generate full openapi.yaml file with "buf generate" same way it is done now. buf already generates the spec with option 'short-service-tags'. 
This means it adds a tag to each service in the openapi spec matching its service name. -- Write a small filter that will parse the spec and select only operations per service with a certain service-tag and generate a new spec supporting only the particular scenario. - -We can add some manifest that will store the list of services per scenario. - -## Solution 3 - -No splitting of service.yaml . -This approach uses custom annotations/options to connect services to scenarios. - -- define custom option/annotations by extending google.protobuf.ServiceOptions and google.protobuf.MethodOptions, example: - -```go -syntax = "proto3"; -package annotations.common.v1; - -import "google/protobuf/descriptor.proto"; - -// Service-level: applies to the whole service (default) -extend google.protobuf.ServiceOptions { - repeated string scenario = 50001; // e.g., ["scenario-1", "scenario-2"] -} - -// Method-level: override/add per RPC if needed -extend google.protobuf.MethodOptions { - repeated string scenario = 50011; -} -``` -Add the file to api/proto/annotations. - -Use it in api/proto/services/services.proto: - -Service level selection per scenario: -```go -(...) -import "annotations/scenario_annotations.proto"; -(...) -service OSUpdateRun { - option (annotations.common.v1.scenario) = "scenario-1"; - // Get a list of OS Update Runs. - rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) { - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"}; - } - // Get a specific OS Update Run. - rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) { - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } - // Delete a OS Update Run. - rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) { - option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } -} -(...) 
-``` - -Or: - -Method level selection per scenario: -```go -(...) -import "annotations/scenario_annotations.proto"; -(...) -service OSUpdateRun { - // Get a list of OS Update Runs. - rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) { - option (annotations.common.v1.scenario) = "scenario-1"; - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"}; - } - // Get a specific OS Update Run. - rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) { - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } - // Delete a OS Update Run. - rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) { - option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } -} -(...) +bug generate --path api/proto/services/service1/v1 api/proto/services/service2/v1 ``` -- Use buf generate to generate the full openapi.yaml spec. +5. Step 4 generated the openapi spec openapi.yaml only for the services supported by particular scenario. +6. CLI is built based on the full REST API spec (also built earlier), but gets the list of supported services from the new API andpoint (step 3) and adjust its internal logic so it calls only supported REST API services/endpoints. When simple curl calls are used to unsupported endpoints, - default message about unsupported service is returned. -- Create and run a filter that reads generated .pb file that contains new service annotations, takes openapi.yaml as input and removes all services without the scenario annotation. The filter also takes as input the scenario name and returns a scenario specific openapi spec. -- OR patch the spec generating tool ( protoc-gen-connect-openapi) so it supports new annotations and includes them in the new full spec - so it reads your annotations directly and writes x-* fields into the OpenAPI. 
It will create an openapi spec with fields and services annotatted by a specific scenario. This requires literally creating a custom plugin that takes scenario as input and generates openapi spec per scenario only - wrapper of protoc-gen-connect-openapi. From a89d5b6bb79035c808152a3ceae3e3dae91b4eb8 Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Fri, 28 Nov 2025 05:53:24 -0800 Subject: [PATCH 07/17] Update --- .../eim-nbapi-cli-decomposition.md | 510 ++++++++++++++---- 1 file changed, 396 insertions(+), 114 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index e2969014a..594f675d1 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -1,19 +1,59 @@ # Design Proposal: Exposing only the required North Bound APIs and CLI commands for the workflow as part of EIM decomposition -Author(s) EIM-Core team +Author(s) Edge Infrastructure Manager Team Last updated: 7/11/25 ## Abstract -In context of EIM decomposition the North Bound API service should be treated as an independent interchangeable module. -The [EIM proposal for modular decomposition](https://github.com/open-edge-platform/edge-manageability-framework/blob/main/design-proposals/eim-modular-decomposition.md) calls out a need for exposing both a full set of EMF APIs, and a need for exposing only a subset of APIs as required by individual workflows taking advantage of a modular architecture. This proposal will explore, how the APIs can be decomposed and how the decomposed output can be used as version of API service module. +In the context of EIM decomposition, the North Bound API service should be treated as an independent interchangeable module. 
+The [EIM proposal for modular decomposition](https://github.com/open-edge-platform/edge-manageability-framework/blob/main/design-proposals/eim-modular-decomposition.md) calls out a need for exposing both a full set of EIM APIs, and a need for exposing only a subset of EIM APIs as required by individual workflows taking advantage of a modular architecture. This proposal explores how the exposed APIs can be decomposed and adjusted to reflect only the supported EIM services per particular scenario. It defines how different scenarios can be supported by API versions that match only the services and features required per scenario, while keeping the full API support in place.

## Background and Context

-In EMF 2025.2 the API service is deployed via a helm chart deployed by Argo CD. The API service is run and deployed in a container kick-started from the API service container image. The API is build using the OpenAPI spec. There are multiple levels of APIs currently available with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated)
+In Edge Infrastructure Manager (EIM), the apiv2 service is the North Bound API service that exposes the EIM operations to the end user, who uses the Web UI, Orch-CLI, or direct API calls. Currently, the end user is not allowed to call the EIM APIs directly. The API calls first reach an API gateway external to EIM (the Traefik gateway), where they are mapped to EIM internal API endpoints and passed on to EIM.
+**Note**: The current mapping of external APIs to internal APIs is 1:1, with no direct mapping to SB APIs. The API service communicates with Inventory via gRPC, which then manages the SB API interactions.

+### About APIV2
+
+**Apiv2** is just one of the EIM resource managers; it talks to one EIM internal component, the Inventory, over gRPC.
Like the other resource managers (RMs), it updates the status of resources and retrieves that status, allowing the user to perform operations on EIM resources to manipulate Edge Nodes.
+In EMF 2025.2 the apiv2 service is deployed via a Helm chart managed by Argo CD as one of its applications. The apiv2 service is run and deployed in a container kick-started from the apiv2 service container image.
+
+### How NB API is Currently Built
+
+Currently, apiv2 (infra-core repository) holds the definition of REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml.
+
+The input to protoc-gen-connect-openapi comes from:
+- `api/proto/services` directory - one file (services.proto) containing API operations on all the available resources (Service Layer)
+- `api/proto/resources` directory - multiple files with data models - a separate file with the data model per single inventory resource
+
+Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml).
+
+### What is Buf
+
+Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier, as it replaces messy protoc commands with a clean config file. It is an all-in-one tool, as it provides compiling, linting, breaking-change detection, and dependency management.
+
+In infra-core/apiv2, the "buf generate" command is executed within the "make generate" or "make buf-gen" target to generate the OpenAPI 3.0 spec directly from the .proto files in the api/proto/ directory.
+
+The protoc-gen-connect-openapi plugin takes as input one full set of proto definitions that includes all services (services.proto) and outputs the openapi spec in api/openapi.
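For reference, the relevant plugin entry in buf.gen.yaml (an excerpt of the full configuration used in infra-core/apiv2) is:

```yaml
plugins:
  # go install github.com/sudorandom/protoc-gen-connect-openapi@v0.17.0
  - name: connect-openapi
    path: protoc-gen-connect-openapi
    out: api/openapi
    strategy: all
    opt:
      - format=yaml
      - short-service-tags
      - short-operation-ids
      - path=openapi.yaml
```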
+ +Key Items: +- Input: api/proto/**/*.proto +- Config: buf.gen.yaml, buf.work.yaml, buf.yaml +- Output: openapi.yaml +- Tool: protoc-gen-connect-openapi + +Based on the content of api/proto/ , buf also generates: +- the Go code ( Go structs, gRPC clients/services) in internal/pbapi +- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go ) +- documentation: docs/proto.md + + +### About CLI + +There are multiple levels of APIs currently available, with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated). + +The list of domain APIs includes: - Catalog and Catalog utilities APIs - App deployment manager and app resource manager APIs @@ -23,50 +63,108 @@ The list of domain APIs include: - MPS and RPS APIs - Metadata broker and Tenancy APIs -There are two levels to the API decomposition +There are two levels to the API decomposition: -- Decomposition of above domain levels -- Decomposition within domain (ie. 
separation at EIM domain level, where overall set of APIs includes onboarding/provisioning/day2 APIs but another workflow may support only onboarding/provisioning without day2 support ) +- **Cross-domain decomposition**: Separation of the above domain-level APIs (e.g., only exposing EIM + Cluster APIs without App Orchestrator APIs) +- **Intra-domain decomposition**: Separation within a domain (e.g., at the EIM domain level, where the overall set of APIs includes onboarding/provisioning/Day 2 APIs, but another workflow may support only onboarding/provisioning without Day 2 support) The following questions must be answered and investigated: -- How the API service is build currently - - It is build from a proto definition and code is autogenerated by "buf" tool - [See How NB API is Currently Built](#how-nb-api-is-currently-built) -- How the API service container image is build currently -- How the API service helm charts are build currently -- What level of decomposition is needed from the required workflows -- How to decomposition API at domain level - - At domain level the APIs are deployed as separate services -- How to decomposition API within domain level -- How to build various API service version as per desired workflows using the modular APIs -- How to deliver the various API service versions as per desired workflows -- How to expose the list of available APIs for client consumption (orch-cli) +- How is the API service built currently? + - It is built from a proto definition and code is autogenerated by the "buf" tool - [See How NB API is Currently Built](#how-nb-api-is-currently-built) +- How is the API service container image built currently? +- How are the API service Helm charts built currently? +- What level of decomposition is needed for the required workflows? +- How to decompose APIs at the domain level? + - At the domain level, the APIs are deployed as separate services +- How to decompose APIs within the domain level? 
+- How to build various API service versions as per desired workflows using the modular APIs? +- How to deliver the various API service versions as per desired workflows? +- How to expose the list of available APIs for client consumption (orch-cli)? + +## Decomposing the release of API service as a module + +Once the investigation is completed on how the API service is created today, decisions must be made on how the service will be built and released as a module. + +**Build and Release Strategy:** -Uncertainties: +- The build of the API service itself will depend on the results of "top-to-bottom" and "bottom-to-top" decomposition investigations. +- Individual versions of the API service can be packaged as versioned container images: + - `apiv2-emf:x.x.x` (full EMF with all APIs) + - `apiv2-workflow1:x.x.x` (e.g., onboarding + provisioning only) + - `apiv2-workflow2:x.x.x` (e.g., minimal edge node management) -- How does potential removal of the API gateway affect the exposing of the APIs to the client -- How will the decomposition and availability of APIs within the API service be mapped back to the Inventory and the set of SB APIs. +**Recommended Approach: Multiple Container Images (One per Scenario)** -### Decomposing the release of API service as a module +**Why this is the only viable option:** -Once the investigation is completed on how the API service is created today decisions must be done on how the service will be build and released as a module. +`buf generate` doesn't just create OpenAPI specs—it generates the entire Go codebase including: +- Go structs from proto messages +- gRPC client and server code +- HTTP gateway handlers (REST to gRPC proxy) +- Type conversions and validators -- The build of the API service itself will depend on the results of "top to bottom" and "bottom to top" decomposition investigations. 
-- The individual versions of API service can be packaged as versioned container images:
- - apiv2-emf:x.x.x
- - apiv2-workflow1:x.x.x
- - apiv2-workflow2:x.x.x
-- Alternatively if the decomposition does not result in multiple version of the API service the service could be released as same docker image but managed by flags provided to container that alter the behaviour of the API service in runtime.
-- The API service itself should still be packaged for deployment as a helmchart regardless of deployment via ArgoCD or other medium/technique. Decision should be made if common helmchart is used with override values for container image and other related values (preferred) or individual helmcharts need to be released.
+**This means:** You cannot have a single image with "all code" and selectively enable services at runtime. If a service isn't generated by `buf`, the Go code doesn't exist and handlers can't be registered.
+
+**Build Strategy:**
+- Build **separate container images per scenario**, each containing only the required API subset
+  - `apiv2:eim-full-2025.2` (full EMF with all APIs)
+  - `apiv2:eim-minimal-2025.2` (onboarding + provisioning only)
+  - `apiv2:eim-provisioning-2025.2` (provisioning workflow only)
+- Each image is built from scenario manifests via Makefile
+- Each build runs `buf generate` with only the proto files for that scenario's services
+
+**Build Process Per Scenario:**
+```bash
+# For eim-minimal scenario
+1. Read scenarios/eim-minimal.yaml → services: [onboarding, provisioning]
+2. Run: buf generate --path api/proto/services/onboarding/v1 --path api/proto/services/provisioning/v1
+3. Compile Go code (only onboarding and provisioning code exists)
+4.
Build Docker image: apiv2:eim-minimal-2025.2
+```
+
+**Pros:**
+- ✅ Only compiles and includes needed services (smaller images)
+- ✅ Explicit API surface per image
+- ✅ Clear separation between scenarios
+- ✅ Better security (reduced attack surface—unused code doesn't exist)
+- ✅ Faster startup (fewer services to initialize)
+- ✅ Aligns with how `buf generate` actually works
+
+**Cons:**
+- Multiple images to build and maintain in CI/CD
+- More storage in container registry
+- Need to rebuild all images for common code changes
+
+**Helm Chart:**
+
+A single Helm chart will be used for all scenarios, with a Helm value selecting the scenario-specific image.
+
+**Benefits:**
+- Single Helm chart to maintain
+- Image selection controlled by tag that includes scenario name
+- Easy to switch scenarios by changing one value
+- Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/minimal.yaml` sets `eimScenario: eim-minimal` in the deployment configuration)

### Decomposing the API service

-An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various flavours of same API service providing different set of APIs.
+An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various flavors of the same API service providing different sets of APIs.
+
+**Design Principles:**
+
+1. **Single Source of Truth**: The total set of APIs serves as the main source of the API service, and other flavors/subsets are automatically derived from this based on required functionality. This makes maintenance simple and centralized.
-- The APIs service should be decomposed at the domain level meaning that all domains or subset of domains should be available as part of the EMF - they are already decomposed/modular at this level and deployed as separate services. -- The APIs service should be decomposed within the domain level meaning that only subset of the available APIs may need to be released and/or exposed at API service level. As an example within the EIM domain we may not want to expose the Day 2 functionality for some workflows which currently are part of the EIM OpenAPI spec. -- The APIs service may also need to be decomposed at individual internal service level ie host resource may need to ha different data model across use cases. +2. **Domain-Level Decomposition**: The API service should be decomposed at the domain level, meaning that all domains or a subset of domains should be available as part of the EMF. + - At this level, APIs are already decomposed/modular and deployed as separate services (e.g., EIM APIs, Cluster APIs, App Orchestrator APIs) + - **For EIM-focused scenarios**: Only the EIM domain APIs would be included + +3. **Intra-Domain Decomposition**: The API service should be decomposed within the domain level, meaning that only a subset of available APIs may need to be released and/or exposed at the API service level. + - **Example**: Within the EIM domain, we may not want to expose Day 2 functionality for some workflows, even though Day 2 operations are part of the full EIM OpenAPI spec + - This allows workflows focused on onboarding/provisioning to omit upgrade, maintenance, and troubleshooting APIs + +4. **Resource-Level Decomposition** (Under Investigation): The API service may also need to be decomposed at the individual internal service level. 
+  - **Example**: Host resource might need different data models across use cases
+  - **Note**: This would require separate data models and may increase complexity significantly

The following are the usual options to decomposing or exposing subsets of APIs.
@@ -80,103 +178,287 @@ The following are the usual options to decomposing or exposing subsets of APIs.
### Consuming the APIs from the CLI
-The best approach would be for the EMF to provide a service/endpoint that will communicate which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration and prevent from using non-supported APIs/commands. The prevention could happen at command call level where a configuration would be checked before a RUNe command is called for a given command.
+The best approach would be for the EMF to provide a service that communicates which endpoints/APIs are currently supported by the deployed API service, as proposed in ADR https://github.com/open-edge-platform/edge-manageability-framework/pull/1106
-## Summary
+**CLI Workflow:**
-1. Assuming that in phase 1 we will retain Traefik for all workflows, we need to check how the Traefik->EIM mapping will behave and needs to behave when EIM only supports subset of APIs, and establish if the set of API calls supported by Treafik API Gateway maps to the supported APIs in EIM API service subset.
-2. We need to make sure that our API supports specific usecases and on the other hand it needs to keep compatibility with other workflows - to achieve that, we may need to make code changes in data models. As an example we need to make sure that mandatory fields are supported accordingly across usecases ie. instance creation will require OSprofile for general usecase, but this may not be true for self installed OSes/Edge Nodes.
Collaboration with teams/ADR owners is needed to establish what changes are needed at Resource Manager/Inventory levels to accommodate workflows and how will the changes impact the APIs. -3. We need to understand all the scenarios and required services to be supported. And define the APIs per scenario. +1. **Discovery on Login**: The CLI requests API capability information upon user login +2. **Configuration Caching**: The CLI saves the supported API configuration locally +3. **Command Validation**: Before executing commands, the CLI checks the cached configuration +4. **Graceful Degradation**: If a command maps to an unsupported API, the CLI displays a clear error message -## How NB API is Currently Built +**Implementation Approach:** -Currently, apiv2 (infra-core repository) holds definition of REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the openapi spec - openapi.yaml . +``` +CLI Command Flow: +┌─────────────────┐ +│ User runs │ +│ orch-cli login │ +└────────┬────────┘ + │ + ▼ +┌─────────────────────────┐ +│ GET /../capabilities│ ← New service endpoint +└────────┬────────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ Response: │ +│ { │ +│ "scenario": "eim-min",│ +│ "apis": [ │ +│ "onboarding", │ +│ "provisioning" │ +│ ] │ +│ } │ +└────────┬────────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ CLI caches config │ +│ in ~/.orch-cli/config │ +└─────────────────────────┘ +``` -The input to protoc-gen-connect-openapi comes from: -api/proto/services directory - one file (services.yaml) containing API pperations on all the available resources (Service Layer). -api/proto/resources directory - multiple files with data models - separate file with data model per single inventory resource. 
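The login-time discovery and pre-command check described above can be sketched in Go as follows (the type, function, and field names are illustrative assumptions, not actual orch-cli code):

```go
package main

import (
	"fmt"
	"slices"
)

// capabilities mirrors the assumed response of the capability-discovery
// endpoint that the CLI caches at login (field names are illustrative).
type capabilities struct {
	Scenario string   `json:"scenario"`
	APIs     []string `json:"apis"`
}

// requireAPI is the pre-flight check a CLI command would run before
// calling the orchestrator: allowed APIs pass, others return an error.
func requireAPI(caps capabilities, api string) error {
	if slices.Contains(caps.APIs, api) {
		return nil
	}
	return fmt.Errorf("this orchestrator deployment does not support %q (available features: %v)", api, caps.APIs)
}

func main() {
	caps := capabilities{Scenario: "eim-minimal", APIs: []string{"onboarding", "provisioning"}}
	for _, api := range []string{"provisioning", "maintenance"} {
		if err := requireAPI(caps, api); err != nil {
			fmt.Println("blocked:", err)
			continue
		}
		fmt.Println("allowed:", api)
	}
}
```

In practice the check would read the configuration cached under ~/.orch-cli/config at login rather than a literal value.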
+**Command Execution:**
+- The CLI checks the capability configuration before executing a command
+- If the required API is not in the supported list, display:
+  ```
+  Error: This orchestrator deployment does not support <feature>
+  Available features: onboarding, provisioning
+  ```
+- For direct curl calls to unsupported endpoints, the API service returns a standard 404 or 501 (Not Implemented) with a descriptive message
-Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml).
+## Summary of Action Items
-### What is Buf
+### 1. Traefik Gateway Compatibility
-Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier as it replaces messy protoc commands with clean config file. It is a all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management.
+**Status**: Phase 1 retains Traefik for all workflows
-In infra-core/apiv2, "buf generate" command is executed within the "make generate" or "make buf-gen" target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory.
+**Action Items**:
+- Investigate how the Traefik→EIM mapping behaves when EIM only supports a subset of APIs
+- Determine error handling when Traefik routes to non-existent EIM endpoints
-The following is the current, full buf configuration (buf.gen.yaml):
+### 2.
Data Model and API Compatibility -```yaml -plugins: - # go - https://pkg.go.dev/google.golang.org/protobuf - - name: go - out: internal/pbapi - opt: - - paths=source_relative - - # go grpc - https://pkg.go.dev/google.golang.org/grpc - - name: go-grpc - out: internal/pbapi - opt: - - paths=source_relative - - require_unimplemented_servers=false - - # go install github.com/sudorandom/protoc-gen-connect-openapi@v0.17.0 - - name: connect-openapi - path: protoc-gen-connect-openapi - out: api/openapi - strategy: all - opt: - - format=yaml - - short-service-tags - - short-operation-ids - - path=openapi.yaml - - # grpc-gateway - https://grpc-ecosystem.github.io/grpc-gateway/ - - name: grpc-gateway - out: internal/pbapi - opt: - - paths=source_relative - - # docs - https://github.com/pseudomuto/protoc-gen-doc - - plugin: doc - out: docs - opt: markdown,proto.md - strategy: all - - - plugin: go-const - out: internal/pbapi - path: ["go", "run", "./cmd/protoc-gen-go-const"] - opt: - - paths=source_relative -``` +**Action Items**: +- Ensure APIs support specific use cases while maintaining compatibility with other workflows +- Review and potentially modify data models to accommodate multiple scenarios +- **Example**: Instance creation requires OS profile for general use case, but this may not be true for self-installed OSes/Edge Nodes + - Make fields conditionally required based on scenario + - Document field requirements per scenario in OpenAPI spec +- Collaborate with teams/ADR owners to establish: + - Required changes at Resource Manager level + - Required changes at Inventory level + - Impact on APIs from these changes -Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services (services.proto) and outputs the openapi spec in api/openapi. 
+**Timeline**: Investigation required once the set of services is known per each scenario -Key Items: -- Input: api/proto/**/*.proto -- Config: buf.gen.yaml, buf.work.yaml, buf.yaml -- Output: openapi.yaml -- Tool: protoc-gen-connect-openapi +### 3. Scenario Definition and API Mapping -Based on the content of api/proto/ , buf also generates: -- the Go code ( Go structs, gRPC clients/services) in internal/pbapi -- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go ) -- documentation: docs/proto.md +**Action Items**: +- Define all supported scenarios (e.g., full EMF, minimal onboarding, edge provisioning only) +- For each scenario, document: + - Required services (which resource managers are needed) + - Required API endpoints (which operations are exposed) + - Data model variations (if any) + - Deployment configuration (Helm values, feature flags) +- Create a scenario-to-API mapping matrix +- Define API specifications per scenario + +**Status**: Investigation in progress ## Building REST API Spec per Scenario -The following is the proposed solution (draft) to the requirement for decomposistion of EMF, where the exposed REST API is limited to support specific scenario and maintains comatibility with other scenarios. +The following is the proposed solution (draft) to the requirement for decomposition of EMF, where the exposed REST API is limited to support a specific scenario and maintains compatibility with other scenarios. -1. Split services.yaml file into multiple folders/files per service. -2. Maintain a manifest that lists names of REST API services suported by scenario. -3. Expose a new endpoint that list supported services in current scenario. -4. Change "buf-gen" make target to process only services used by the scenario, by using additional parameter "path", list of services need to come from the manifest in step 2). 
Example to use service1 and service2 services: +### Implementation Steps -```bash -bug generate --path api/proto/services/service1/v1 api/proto/services/service2/v1 +#### Step 1: Restructure Proto Definitions + +Split the monolithic `services.proto` file into multiple folders/files per service: + +``` +api/proto/services/ +├── onboarding/ +│ └── v1/ +│ └── onboarding_service.proto +├── provisioning/ +│ └── v1/ +│ └── provisioning_service.proto +├── maintenance/ +│ └── v1/ +│ └── maintenance_service.proto +└── telemetry/ + └── v1/ + └── telemetry_service.proto ``` -5. Step 4 generated the openapi spec openapi.yaml only for the services supported by particular scenario. -6. CLI is built based on the full REST API spec (also built earlier), but gets the list of supported services from the new API andpoint (step 3) and adjust its internal logic so it calls only supported REST API services/endpoints. When simple curl calls are used to unsupported endpoints, - default message about unsupported service is returned. +#### Step 2: Define Scenario Manifests + +Maintain scenario manifests that list the REST API services supported by each scenario. + +**Recommended Approach: Scenario manifest files in repository** + +```yaml +# scenarios/eim-minimal.yaml +name: eim-minimal +description: Minimal EIM for onboarding and provisioning only +services: + - onboarding + - provisioning + +# scenarios/eim-full.yaml +name: eim-full +description: Full EIM with all capabilities +services: + - onboarding + - provisioning + - maintenance + - telemetry +``` + +**Why manifest files:** +- Makefile-driven builds read the manifest to determine which services to compile +- Version controlled in git repository +- No runtime database dependencies +- Each scenario gets its own container image +- Clear, declarative configuration + +#### Step 3: Expose API Capabilities Endpoint + +Add a new service that lists supported services in the current scenario as part of other ADR. 
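To make the intended contract concrete, the following is a minimal client-side sketch. The endpoint path and the JSON field names (`scenario`, `supported_services`) are assumptions for illustration only; the actual contract is defined in the referenced ADR:

```shell
#!/bin/sh
# Hypothetical capabilities payload, as a CLI might receive it from a
# GET on the (assumed) /api/v2/capabilities endpoint.
capabilities='{"scenario":"eim-minimal","supported_services":["onboarding","provisioning"]}'

# Check whether a service is advertised before calling its endpoints.
supports() {
  case "$capabilities" in
    *'"supported_services"'*"$1"*) echo "yes" ;;
    *) echo "no" ;;
  esac
}

supports onboarding   # prints "yes"
supports maintenance  # prints "no"
```

In a real CLI the payload would be fetched at login and cached locally, so unsupported commands can fail fast with a clear error message instead of a raw HTTP error.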
+
+#### Step 4: Modify Build Process
+
+Modify the "buf-gen" make target to build the openapi spec for supported services as per the scenario manifest. (Later, tag the image with the scenario name and version.)
+
+#### Step 5: Generate Scenario-Specific OpenAPI Specs
+
+Step 4 generates the `openapi.yaml` file containing only the services supported by the particular scenario. The output file can be named per scenario. An image is built per scenario and pushed separately.
+
+#### Step 6: CLI Integration
+
+The CLI handles scenario variations through dynamic capability discovery:
+
+1. **Build**: CLI is built based on the full REST API spec (generated with `SCENARIO=eim-full`)
+2. **Runtime**: CLI queries the `/api/v2/capabilities` endpoint on login
+3. **Validation**: Before executing commands, CLI checks if the required service is in the `supported_services` list
+4. **Error Handling**:
+   - For CLI commands: Display a user-friendly error message
+   - For direct curl calls: API returns HTTP 404 or 501 with a descriptive message
+
+### Summary of Current Requirements
+- Provide scenario-based API exposure for EIM (full and subsets like onboarding/provisioning).
+- Deliver per-scenario OpenAPI specs and container images or runtime-config flags.
+- Add a capabilities endpoint that advertises supported services and scenario.
+- Ensure Traefik routing and error handling work for missing endpoints.
+- Maintain a single source of truth for API definitions with automated generation.
+- Keep CLI operable against any scenario via discovery, caching, and validation.
+- Preserve compatibility with Inventory and SB APIs; no 1:1 mapping changes required.
+- Support Helm-driven configuration (image/tag, feature flags, scenario selection).
+- Include SPDX headers and follow EMF Mage/ArgoCD workflows.
+
+### Rationale
+The approach aims to narrow the operational surface to the specific workflows being targeted, while ensuring the full EMF remains available for comprehensive deployments.
Rather than maintaining multiple divergent API specifications, it generates tailored subsets from a single master definition to minimize duplication and drift. User experience is improved by enabling the CLI to detect capabilities at login and prevent unsupported commands upfront. Deployment remains flexible through GitOps profiles, ArgoCD application values, and Helm overrides so teams can switch scenarios without rebuilding. This enables incremental decomposition that can be adopted progressively without breaking existing integrations or workflows. + +### Investigation Needed + +The following investigation tasks will drive validation of the decomposition approach: + +- Validate feasibility of splitting services.proto and generating per-scenario specs via buf/protoc-gen-connect-openapi. +- Evaluate Inventory data model variations and conditional field requirements per scenario. +- Confirm deployment pipeline changes (mage targets) and ArgoCD app configs integration. +- Measure impact on gRPC gateway generation and handler registration per scenario. + +### Implementation Plan for Orch CLI + +A concise plan to enable scenario-aware CLI behavior with capability discovery and graceful command handling. + +- Add login-time discovery: GET capabilities (new service) to retrieve scenario, version, supported_services. +- Cache capabilities in ~/.orch-cli/config with TTL and manual refresh. +- Validate commands against supported_services; show clear errors and available features. +- Hide unsupported help entries when possible. +- Ensure full-spec build while runtime limits features based on capabilities. +- Add E2E tests targeting all scenarios. + +### Implementation Plan for EIM API + +Here is a short plan to implement scenario-based EIM API decomposition: + +1. 
**Restructure Proto Files**
+   - Split monolithic `services.proto` into service-scoped folders (onboarding, provisioning, maintenance, telemetry)
+   - Each service in its own directory: `api/proto/services/<service>/v1/<service>_service.proto`
+
+2. **Create Scenario Manifests**
+   - Add `scenarios/` directory with YAML files for each scenario
+   - Define service lists per scenario (eim-full, eim-minimal, etc.)
+
+3. **Update Makefile Build Process**
+   - Modify `buf-gen` target to read scenario manifest and generate only specified services
+   - Run `buf generate` with only the proto paths for services in that scenario
+   - Build per-scenario images with `docker-build` target
+   - Add `build-all-scenarios` target to build all images (with `clean` between each)
+   - Image naming convention: `apiv2:<scenario>-<version>`
+   - Critical: Must clean generated code between scenarios to avoid conflicts
+
+4. **Implement Capabilities Service**
+   - Add `CapabilitiesService` protobuf definition
+   - Implement handler to return scenario name, supported services, and version
+   - Endpoint reads scenario from build-time embedded configuration
+
+5. **Handler Registration**
+   - Conditionally register gRPC and HTTP gateway handlers based on compiled services
+   - Only services included in buf-gen for that scenario will have gRPC handlers
+
+6. **OpenAPI Generation**
+   - Produce per-scenario OpenAPI outputs: `api/openapi/<scenario>-openapi.yaml`
+   - Each image embeds its own OpenAPI spec
+
+7. **Update Helm Chart**
+   - Use single, common Helm chart for all scenarios
+   - Add `image.tag` value to select which scenario image to deploy
+   - Add `scenario.name` value for documentation/validation
+   - Expose capabilities endpoint in service definition
+
+8. **ArgoCD Integration**
+   - Update ArgoCD application templates to use scenario-based image tags
+   - Add `argo.eimScenario` value to cluster configs
+   - Profiles specify which scenario to deploy (e.g., minimal profile uses `eim-minimal`)
+
+9. 
**CI/CD Pipeline**
+   - Build all scenario images in CI
+   - Tag with both scenario name and version
+   - Push all images to registry
+
+### Test plan
+
+This section describes a practical test plan to verify EIM’s scenario-based APIs. It checks that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear and safe.
+
+- Integration: REST->gRPC gateway for included services; absence returns 404/501 with descriptive messages.
+- CLI E2E: Login discovery, caching, command blocking, error messaging.
+- Traefik: Route correctness per scenario; behavior for missing endpoints.
+- Data model: Conditional field requirements validated per scenario.
+- Profiles: Deploy each scenario via mage deploy:kindPreset; verify openapi and endpoints.
+- Regression: Ensure full EMF scenario parity with current API suite.
+
+### Open Issues
+- Strategy for OpenAPI proto-level splitting of services into separate files - modified directory structure and buf-gen make target implementation.
+- Long-term plan post-Traefik gateway removal and impacts.
+- Handling version compatibility between CLI and the proposed capabilities service (what happens when the service does not exist and CLI expects it to exist?).
+- Detailed scenario definitions on the Inventory level - NB APIs should be aligned with the Inventory resource availability in each scenario.
+- Managing cross-domain scenarios when EIM-only vs multi-domain APIs are required.
+- Managing the apiv2 image version used by the infra-core argo application - deployment level.
+
+### Uncertainties
+
+- How does potential removal of the API gateway affect the exposure of APIs to the client? (In relation to ADR: https://jira.devtools.intel.com/browse/ITEP-79422)
+- How will the decomposition and availability of APIs within the API service be mapped back to the Inventory and the set of South Bound (SB) APIs?
+- Which approach to exposing the set of operational EMF services/features is accepted (In relation to ADR: https://github.com/open-edge-platform/edge-manageability-framework/pull/1106) + + + + + + From 3f5e6a54837b5f1f82e4f2a8b127697f4e7e8fd9 Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Fri, 28 Nov 2025 06:05:30 -0800 Subject: [PATCH 08/17] Update --- design-proposals/eim-nbapi-cli-decomposition.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index 594f675d1..533abe21d 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -268,13 +268,13 @@ CLI Command Flow: **Status**: Investigation in progress -## Building REST API Spec per Scenario +### Building REST API Spec per Scenario The following is the proposed solution (draft) to the requirement for decomposition of EMF, where the exposed REST API is limited to support a specific scenario and maintains compatibility with other scenarios. -### Implementation Steps +#### Implementation Steps -#### Step 1: Restructure Proto Definitions +##### Step 1: Restructure Proto Definitions Split the monolithic `services.proto` file into multiple folders/files per service: @@ -294,7 +294,7 @@ api/proto/services/ └── telemetry_service.proto ``` -#### Step 2: Define Scenario Manifests +##### Step 2: Define Scenario Manifests Maintain scenario manifests that list the REST API services supported by each scenario. @@ -325,19 +325,19 @@ services: - Each scenario gets its own container image - Clear, declarative configuration -#### Step 3: Expose API Capabilities Endpoint +##### Step 3: Expose API Capabilities Endpoint Add a new service that lists supported services in the current scenario as part of other ADR. 
-#### Step 4: Modify Build Process +##### Step 4: Modify Build Process Modify "buf-gen" make target to build the openapi spec for suported services as per scenario manifest. (Later Tag the image with scenario name and version). -#### Step 5: Generate Scenario-Specific OpenAPI Specs +##### Step 5: Generate Scenario-Specific OpenAPI Specs Step 4 generates the `openapi.yaml` file containing only the services supported by the particular scenario. The output file can be named per scenario. An image is build per scenario and pushed seperately. -#### Step 6: CLI Integration +##### Step 6: CLI Integration The CLI handles scenario variations through dynamic capability discovery: From a40311d06bc9d4ec7aab5a42f9143c6e9a693aea Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Wed, 17 Dec 2025 04:58:37 -0800 Subject: [PATCH 09/17] Cleanup --- .../eim-nbapi-cli-decomposition.md | 485 ++++++++---------- 1 file changed, 202 insertions(+), 283 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index 533abe21d..6d2192fcc 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -2,7 +2,7 @@ Author(s) Edge Infrastructure Manager Team -Last updated: 7/11/25 +Last updated: 17/12/25 ## Abstract @@ -11,47 +11,7 @@ The [EIM proposal for modular decomposition](https://github.com/open-edge-platfo ## Background and Context -In Edge Infratructure Manager (EIM) the apiv2 service represents the North Bound API service that exposes the EIM operations to the end user, who uses Web UI, Orch-CLI or direct API calls. Currently, the end user is not allowed to call the EIM APIs directly. They API calls reach first the API gateway external to EIM (Traefik gateway) which are mapped to EIM internal API endpoints and passed to EIM. -**Note**: The current mapping of external APIs to internal APIs is 1:1, with no direct mapping to SB APIs. 
The API service communicates with Inventory via gRPC, which then manages the SB API interactions. - -### About APIV2 - -**Apiv2** is just one of EIM resource managers who talks to one EIM internal component, the Inventory, over gRPC. Same as other RMs it updates status of resources and retrieves the status allowing user performing operations on the EIM resources for manipulating Edge Nodes. -In EMF 2025.2 the apiv2 service is deployed via a helm chart deployed by Argo CD as one of its applications. The apiv2 service is run and deployed in a container kick-started from the apiv2 service container image. - -### How NB API is Currently Built - -Currently, apiv2 (infra-core repository) holds the definition of REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml. - -The input to protoc-gen-connect-openapi comes from: -- `api/proto/services` directory - one file (services.proto) containing API operations on all the available resources (Service Layer) -- `api/proto/resources` directory - multiple files with data models - separate file with data model per single inventory resource - -Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml). - -### What is Buf - -Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier as it replaces messy protoc commands with clean config file. It is a all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management. - -In infra-core/apiv2, "buf generate" command is executed within the "make generate" or "make buf-gen" target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory. - -Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services (services.proto) and outputs the openapi spec in api/openapi. 
- -Key Items: -- Input: api/proto/**/*.proto -- Config: buf.gen.yaml, buf.work.yaml, buf.yaml -- Output: openapi.yaml -- Tool: protoc-gen-connect-openapi - -Based on the content of api/proto/ , buf also generates: -- the Go code ( Go structs, gRPC clients/services) in internal/pbapi -- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go ) -- documentation: docs/proto.md - - -### About CLI - -There are multiple levels of APIs currently available, with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated). +There are multiple levels of APIs currently available within EMF, with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated). The list of domain APIs includes: @@ -65,8 +25,8 @@ The list of domain APIs includes: There are two levels to the API decomposition: -- **Cross-domain decomposition**: Separation of the above domain-level APIs (e.g., only exposing EIM + Cluster APIs without App Orchestrator APIs) -- **Intra-domain decomposition**: Separation within a domain (e.g., at the EIM domain level, where the overall set of APIs includes onboarding/provisioning/Day 2 APIs, but another workflow may support only onboarding/provisioning without Day 2 support) +- **Cross-domain decomposition**: Separation of the above domain-level APIs (e.g., only exposing EIM APIs - without Cluster APIs, App Orchestrator APIs and others) +- **Intra-domain decomposition**: Separation within a domain (e.g., at the EIM domain level, where the overall set of APIs may include onboarding/provisioning/Day 2 APIs, but another workflow may support only onboarding/provisioning without Day 2 support) The following questions must be answered and investigated: @@ -82,77 +42,58 @@ The following questions must be answered and 
investigated:

### Scenarios to be Supported by the Initial Decomposition

The currently planned decomposition task is focused on the EIM layer. The following is the list of deployment scenarios:

**Full EMF** - Full EMF including all existing levels of APIs.
**EIM Only** - EIM installed on its own, includes only the existing EIM APIs.
**EIM vPRO Only** - EIM installed on its own, including only the EIM APIs required to support vPRO use cases.

### About EIM API (apiv2)

In Edge Infrastructure Manager (EIM) the apiv2 service represents the North Bound API service that exposes the EIM operations to the end user, who uses the Web UI, Orch-CLI or direct API calls. Currently, the end user is not allowed to call the EIM APIs directly. The API calls first reach the API gateway external to EIM (the Traefik gateway), where they are mapped to EIM internal API endpoints and passed on to EIM.
**Note**: The current mapping of external APIs to internal APIs is 1:1, with no direct mapping to SB APIs. The API service communicates with Inventory via gRPC, which then manages the SB API interactions.
-**Why this is the only viable option:** +**Apiv2** is just one of EIM Resource Managers that talk to one EIM internal component - the Inventory - over gRPC. Similar to other RMs, it updates status of the Inventory resources and retrieves their status allowing user performing operations on the EIM resources for manipulating Edge Nodes. +In EMF 2025.2 the apiv2 service is deployed via a helm chart deployed by Argo CD as one of its applications. The apiv2 service is run and deployed in a container kick-started from the apiv2 service container image. -`buf generate` doesn't just create OpenAPI specs—it generates the entire Go codebase including: -- Go structs from proto messages -- gRPC client and server code -- HTTP gateway handlers (REST to gRPC proxy) -- Type conversions and validators +#### How NB API is Currently Built -**This means:** You cannot have a single image with "all code" and selectively enable services at runtime. If a service isn't generated by `buf`, the Go code doesn't exist and handlers can't be registered. +Currently, apiv2 (infra-core repository) holds the definition of REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml. 
-**Build Strategy:** -- Build **separate container images per scenario**, each containing only the required API subset - - `apiv2:eim-full-2025.2` (full EMF with all APIs) - - `apiv2:eim-minimal-2025.2` (onboarding + provisioning only) - - `apiv2:eim-provisioning-2025.2` (provisioning workflow only) -- Each image is built from scenario manifests via Makefile -- Each build runs `buf generate` with only the proto files for that scenario's services +The input to protoc-gen-connect-openapi comes from: +- `api/proto/services` directory - one file (services.proto) containing API operations on all the available resources (Service Layer) +- `api/proto/resources` directory - multiple files with data models - separate file with data model per single inventory resource -**Build Process Per Scenario:** -```bash -# For eim-minimal scenario -1. Read scenarios/eim-minimal.yaml → services: [onboarding, provisioning] -2. Run: buf generate api/proto/services/onboarding/v1 api/proto/services/provisioning/v1 -3. Compile Go code (only onboarding and provisioning code exists) -4. Build Docker image: apiv2:eim-minimal-2025.2 -``` +Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml). -**Pros:** -- ✅ Only compiles and includes needed services (smaller images) -- ✅ Explicit API surface per image -- ✅ Clear separation between scenarios -- ✅ Better security (reduced attack surface—unused code doesn't exist) -- ✅ Faster startup (fewer services to initialize) -- ✅ Aligns with how `buf generate` actually works +#### About Buf -**Cons:** -- Multiple images to build and maintain in CI/CD -- More storage in container registry -- Need to rebuild all images for common code changes +Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier as it replaces messy protoc commands with clean config file. 
It is a all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management. -**Helm Chart:** +In infra-core/apiv2, "buf generate" command is executed within the **make generate** or **make buf-gen** target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory. -Single Helm chart for all scenarios will use a flag to use scenariospecific image +Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services (services.proto) and outputs the openapi spec in api/openapi. + +**Key Items:** +- Input: api/proto/**/*.proto +- Config: buf.gen.yaml, buf.work.yaml, buf.yaml +- Output: openapi.yaml +- Tool: protoc-gen-connect-openapi -**Benefits:** -- Single Helm chart to maintain -- Image selection controlled by tag that includes scenario name -- Easy to switch scenarios by changing one value -- Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/minimal.yaml` sets `eimScenario: eim-minimal` set in deployment configuration) +Based on the content of api/proto/ , buf also generates: +- the Go code ( Go structs, gRPC clients/services) in internal/pbapi +- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go ) +- documentation: docs/proto.md -### Decomposing the API service +## Decomposing the API service An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various flavors of the same API service providing different sets of APIs. **Design Principles:** -1. **Single Source of Truth**: The total set of APIs serves as the main source of the API service, and other flavors/subsets are automatically derived from this based on required functionality. This makes maintenance simple and centralized. +1. 
**Single Source of Truth**: The total set of APIs serves as the main source of the API service, and other API subsets are automatically derived from this based on required functionality. This makes maintenance simple and centralized. 2. **Domain-Level Decomposition**: The API service should be decomposed at the domain level, meaning that all domains or a subset of domains should be available as part of the EMF. - At this level, APIs are already decomposed/modular and deployed as separate services (e.g., EIM APIs, Cluster APIs, App Orchestrator APIs) @@ -162,11 +103,11 @@ An investigation needs to be conducted into how the API service can be decompose - **Example**: Within the EIM domain, we may not want to expose Day 2 functionality for some workflows, even though Day 2 operations are part of the full EIM OpenAPI spec - This allows workflows focused on onboarding/provisioning to omit upgrade, maintenance, and troubleshooting APIs -4. **Resource-Level Decomposition** (Under Investigation): The API service may also need to be decomposed at the individual internal service level. +4. **Resource-Level Decomposition**: The API service may also need to be decomposed at the individual internal service level. - **Example**: Host resource might need different data models across use cases - **Note**: This would require separate data models and may increase complexity significantly -The following are the usual options to decomposing or exposing subsets of APIs. +The following are the investigated options to decomposing or exposing subsets of APIs. - ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF. 
- Maintain multiple OpenAPI specification - while possible to create multiple OpenAPI specs, the maintenance of same APIs across specs will be a large burden - still let's keep this option in consideration in terms of auto generating multiple specs from top spec. @@ -176,105 +117,46 @@ The following are the usual options to decomposing or exposing subsets of APIs. - OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us the automated approach for creating individual OpenAPI specs for workflows based on labels. - Other approach to manipulate how a flavour of OpenAPIs spec can be generated from main spec, or how the API service can be build conditionally using same spec. -### Consuming the APIs from the CLI - -The best approach would be for the EMF to provide a service that communicates which endpoints/APIs are currently supported by the deployed API service. Proposed in ADR https://github.com/open-edge-platform/edge-manageability-framework/pull/1106 - -**CLI Workflow:** - -1. **Discovery on Login**: The CLI requests API capability information upon user login -2. **Configuration Caching**: The CLI saves the supported API configuration locally -3. **Command Validation**: Before executing commands, the CLI checks the cached configuration -4. 
**Graceful Degradation**: If a command maps to an unsupported API, the CLI displays a clear error message - -**Implementation Approach:** - -``` -CLI Command Flow: -┌─────────────────┐ -│ User runs │ -│ orch-cli login │ -└────────┬────────┘ - │ - ▼ -┌─────────────────────────┐ -│ GET /../capabilities│ ← New service endpoint -└────────┬────────────────┘ - │ - ▼ -┌─────────────────────────┐ -│ Response: │ -│ { │ -│ "scenario": "eim-min",│ -│ "apis": [ │ -│ "onboarding", │ -│ "provisioning" │ -│ ] │ -│ } │ -└────────┬────────────────┘ - │ - ▼ -┌─────────────────────────┐ -│ CLI caches config │ -│ in ~/.orch-cli/config │ -└─────────────────────────┘ -``` - -**Command Execution:** -- The CLI checks the capability configuration before executing a command -- If the required API is not in the supported list, display: - ``` - Error: This orchestrator deployment does not support - Available features: onboarding, provisioning - ``` -- For direct curl calls to unsupported endpoints, the API service returns a standard 404 or 501 (Not Implemented) with a descriptive message - -## Summary of Action Items - -### 1. Traefik Gateway Compatibility - -**Status**: Phase 1 retains Traefik for all workflows +### Proposal: Decomposing the release of API service as a module +This section describes how the apiv2 (NB API) service will be built, packaged, and released, enabling scenario-specific variants: -**Action Items**: -- Investigate how the Traefik→EIM mapping behaves when EIM only supports a subset of APIs -- Determine error handling when Traefik routes to non-existent EIM endpoints - -### 2. 
Data Model and API Compatibility - -**Action Items**: -- Ensure APIs support specific use cases while maintaining compatibility with other workflows -- Review and potentially modify data models to accommodate multiple scenarios -- **Example**: Instance creation requires OS profile for general use case, but this may not be true for self-installed OSes/Edge Nodes - - Make fields conditionally required based on scenario - - Document field requirements per scenario in OpenAPI spec -- Collaborate with teams/ADR owners to establish: - - Required changes at Resource Manager level - - Required changes at Inventory level - - Impact on APIs from these changes - -**Timeline**: Investigation required once the set of services is known per each scenario - -### 3. Scenario Definition and API Mapping - -**Action Items**: -- Define all supported scenarios (e.g., full EMF, minimal onboarding, edge provisioning only) -- For each scenario, document: - - Required services (which resource managers are needed) - - Required API endpoints (which operations are exposed) - - Data model variations (if any) - - Deployment configuration (Helm values, feature flags) -- Create a scenario-to-API mapping matrix -- Define API specifications per scenario +- The build of the API service itself will depend on the results of "top-to-bottom" and "bottom-to-top" decomposition investigations. +- API subsets supported per scenario will be stored in the respective scenario manifest. +- `buf generate` will use only the proto files per services related to the scenario. 
+- Separate container images will be built per scenario, each supporting only the required API subset and versioned accordingly:
+  - `apiv2-full:x.x.x` (full EIM with all APIs)
+  - `apiv2-eim-vPRO:x.x.x` (EIM only for vPRO)
+- A single Helm chart for all scenarios will use a dedicated value to select the scenario-specific image
+- Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/minimal.yaml` sets `eimScenario: eim-vPRO` in the deployment configuration)
+
+**Recommended Release Approach:** Build and release multiple apiv2 container images - one per scenario. A single Helm chart for all scenarios will use a dedicated value to select the scenario-specific image.
+
+**Justification:**
+`buf generate` doesn't just create OpenAPI specs — it generates the entire Go codebase (related to the APIs defined in the spec) including:
+- Go structs based on proto definitions
+- gRPC client and server code
+- HTTP gateway handlers (REST to gRPC proxy)
+- Type conversions and validators
 
-**Status**: Investigation in progress
+**Pros:**
+- ✅ Only compiles and includes needed services per scenario (smaller images)
+- ✅ Explicit APIs subset per image
+- ✅ Clear separation between scenarios
+- ✅ Better security (reduced attack surface — unused code doesn't exist)
+- ✅ Single Helm chart to maintain
+- ✅ Image selection in Helm chart controlled by value that includes scenario name
+- ✅ Easy to switch scenarios by changing one Helm chart value
 
-### Building REST API Spec per Scenario
+**Cons:**
+- Multiple images to build and maintain in CI/CD
+- More storage in container registry
+- Need to rebuild all images for common code changes
 
-The following is the proposed solution (draft) to the requirement for decomposition of EMF, where the exposed REST API is limited to support a specific scenario and maintains compatibility with other scenarios.
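
The Helm-value-driven image selection described above could look like the following values fragment. This is an illustrative sketch only: the value names (`eimScenario`, `image.repository`, `image.tag`) and the registry path are assumptions, not the actual chart values.

```yaml
# Hypothetical Helm values fragment selecting a scenario-specific apiv2 image.
# All keys and the registry path below are illustrative assumptions.
eimScenario: eim-vpro

image:
  repository: registry.example.com/edge-orch/apiv2-eim-vpro
  tag: 1.0.0
```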
+### Proposal: How to Build the EIM API Service per Scenario
 
-#### Implementation Steps
+The Apiv2 service built per scenario will expose only the required APIs, while preserving compatibility across scenarios.
 
-##### Step 1: Restructure Proto Definitions
+#### Restructure Proto Definitions
 
 Split the monolithic `services.proto` file into multiple folders/files per service:
 
@@ -294,7 +176,7 @@ api/proto/services/
     └── telemetry_service.proto
 ```
 
-##### Step 2: Define Scenario Manifests
+#### Define Scenario Manifests
 
 Maintain scenario manifests that list the REST API services supported by each scenario.
 
@@ -325,44 +207,112 @@ services:
 - Each scenario gets its own container image
 - Clear, declarative configuration
 
-##### Step 3: Expose API Capabilities Endpoint
+#### Modify Build Process
 
-Add a new service that lists supported services in the current scenario as part of other ADR.
+Modify **buf-gen** make target to read the manifests and build the openapi spec as per scenario manifest.
+Example of "buf generate" command to generate code supporting onboarding and provisioning services:
 
-##### Step 4: Modify Build Process
-
-Modify "buf-gen" make target to build the openapi spec for suported services as per scenario manifest. (Later Tag the image with scenario name and version).
+```bash
+  buf generate api/proto/services/onboarding/v1 api/proto/services/provisioning/v1
+```
 
-##### Step 5: Generate Scenario-Specific OpenAPI Specs
+The generated `openapi.yaml` file will contain only the services supported by the particular scenario. The output file can be named per scenario. The build will also generate the corresponding Go types, gRPC/gateway code, and handlers for those APIs. An image will be built per scenario and pushed separately.
 
-Step 4 generates the `openapi.yaml` file containing only the services supported by the particular scenario. The output file can be named per scenario. An image is build per scenario and pushed seperately.
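
A scenario manifest of the kind described above might look like the following. The file location and schema here are assumptions for illustration; the actual manifest format would be fixed during implementation.

```yaml
# scenarios/eim-vpro.yaml (hypothetical path and schema)
name: eim-vpro
description: EIM subset exposing only onboarding and provisioning APIs
services:
  - onboarding
  - provisioning
```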
+## Consuming the Scenario Specific APIs from the CLI
 
-##### Step 6: CLI Integration
+### Proposal
 
-The CLI handles scenario variations through dynamic capability discovery:
+The best approach would be for the EMF to provide a service that communicates which endpoints/APIs are currently supported by the deployed API service. Proposed in ADR https://github.com/open-edge-platform/edge-manageability-framework/pull/1106 . Development of such a service is outside of this ADR's scope.
 
+**CLI Workflow:**
 1. **Build**: CLI is built based on the full REST API spec (generated with `SCENARIO=eim-full`)
-2. **Runtime**: CLI queries the `/api/v2/capabilities` endpoint on login
-3. **Validation**: Before executing commands, CLI checks if the required service is in the `supported_services` list
-4. **Error Handling**:
+2. **Capability Discovery on Login**: The CLI queries the new capabilities service endpoint, upon user login, to request API capability information.
+3. **Configuration Caching**: The CLI saves the supported API configuration locally.
+4. **Command Validation**: Before executing commands, the CLI checks the cached configuration and executes only the commands supported by the currently deployed scenario.
+5. **Error Handling**:
    - For CLI commands: Display user-friendly error message
    - For direct curl calls: API returns HTTP 404 or 501 with descriptive message
 
-### Summary of Current Requirements
-- Provide scenario-based API exposure for EIM (full and subsets like onboarding/provisioning).
-- Deliver per-scenario OpenAPI specs and container images or runtime-config flags.
-- Add a capabilities endpoint that advertises supported services and scenario.
-- Ensure Traefik routing and error handling work for missing endpoints.
-- Maintain single source of truth for API definitions with automated generation.
-- Keep CLI operable against any scenario via discovery, caching, and validation.
-- Preserve compatibility with Inventory and SB APIs; no 1:1 mapping changes required.
-- Support Helm-driven configuration (image/tag, feature flags, scenario selection).
-- Include SPDX headers and follow EMF Mage/ArgoCD workflows.
+```
+CLI Login Command Flow:
+
+┌─────────────────┐
+│ User runs       │
+│ orch-cli login  │
+└────────┬────────┘
+         │
+         ▼
+┌─────────────────────────┐
+│ GET /../capabilities    │ ← Example of the new service endpoint
+└────────┬────────────────┘
+         │
+         ▼
+┌──────────────────────────┐
+│ Response:                │
+│ {                        │
+│   "scenario": "eim-vpro",│
+│   "apis": [              │ ← Example of the new service response
+│     "onboarding",        │
+│     "provisioning"       │
+│   ]                      │
+│ }                        │
+└────────┬─────────────────┘
+         │
+         ▼
+┌─────────────────────────┐
+│ CLI caches config       │
+│ in ~/.orch-cli/config   │
+└─────────────────────────┘
+```
+
+## Summary of Action Items
+
+### 1. Traefik Gateway Compatibility
+
+**Status**: Traefik gateway will be removed for all workflows. User API calls will access EIM internal endpoints directly.
+
+**Action Items**:
+  - Investigate the impact
+
+### 2. Data Model and API Compatibility
+
+**Action Items**:
+- Ensure APIs support specific use cases while maintaining compatibility with other workflows
+- Review and potentially modify data models to accommodate multiple scenarios
+- **Example**: Instance creation requires OS profile for general use case, but this may not be true for self-installed OSes/Edge Nodes
+- Collaborate with teams/ADR owners to establish:
+  - Required changes at Resource Manager level
+  - Required changes at Inventory level
+  - Impact on APIs from these changes
+
+**Timeline**: Investigation required once the set of services is known per each scenario
+
+### 3. Scenario Definition and API Mapping
+
+**Action Items**:
+- Define all supported scenarios (e.g., full EMF, EIM only, EIM only vPRO)
+- For each scenario, document:
+  - Required services (which resource managers are needed)
+  - Required API endpoints (which operations are exposed)
+  - Data model variations (if any)
+  - Deployment configuration (Helm values, profiles)
+
+**Status**: Investigation in progress
+
+## Summary of Current Requirements
+- Provide scenario-based API exposure for EIM (full and subsets).
+- Deliver per-scenario OpenAPI specs and container images.
+- Error handling for missing APIs per scenario.
+- Maintain single source of truth for API definitions with automated generation of scenario specific API specs.
+- Keep CLI operable against any scenario via discovery, caching, and command validation.
+- Preserve compatibility with Inventory.
+- Support Helm-driven configuration (image/tag, scenario selection). (to be confirmed)
+- Support API selection per scenario through Mage/ArgoCD. (to be confirmed)
 
-### Rationale
-The approach aims to narrow the operational surface to the specific workflows being targeted, while ensuring the full EMF remains available for comprehensive deployments. Rather than maintaining multiple divergent API specifications, it generates tailored subsets from a single master definition to minimize duplication and drift. User experience is improved by enabling the CLI to detect capabilities at login and prevent unsupported commands upfront. Deployment remains flexible through GitOps profiles, ArgoCD application values, and Helm overrides so teams can switch scenarios without rebuilding. This enables incremental decomposition that can be adopted progressively without breaking existing integrations or workflows.
+## Rationale
+The approach aims to narrow the operational APIs surface to the specific scenarios being targeted, while ensuring the full EMF remains available for deployments. The proposed solution to APIs decomposition enables incremental decomposition that can be adopted progressively without breaking existing integrations or workflows.
 
-### Investigation Needed
+## Investigation Needed
 
 The following investigation tasks will drive validation of the decomposition approach:
 
@@ -371,94 +321,63 @@ The following investigation tasks will drive validation of the decomposition app
 - Validate feasibility of splitting services.proto and generating per-scenario specs via buf/protoc-gen-connect-openapi.
 - Evaluate Inventory data model variations and conditional field requirements per scenario.
 - Confirm deployment pipeline changes (mage targets) and ArgoCD app configs integration.
 - Measure impact on gRPC gateway generation and handler registration per scenario.
 
-### Implementation Plan for Orch CLI
+## Implementation Plan for Orch CLI
 
-A concise plan to enable scenario-aware CLI behavior with capability discovery and graceful command handling.
+1. Add login-time scenario discovery: retrieve scenario supported APIs from the new service.
+2. Cache discovered capabilities in orch-cli config.
+3. Validate user commands against supported APIs.
+4. Implement error handling for unsupported APIs.
+5. Adjust help to hide unsupported commands/options.
+6. Define E2E tests targeting all scenarios.
 
-- Add login-time discovery: GET capabilities (new service) to retrieve scenario, version, supported_services.
-- Cache capabilities in ~/.orch-cli/config with TTL and manual refresh.
-- Validate commands against supported_services; show clear errors and available features.
-- Hide unsupported help entries when possible.
-- Ensure full-spec build while runtime limits features based on capabilities.
-- Add E2E tests targeting all scenarios.
 
+## Implementation Plan for EIM API
 
-### Implementation Plan for EIM API
-
-Here is a short plan to implement scenario-based EIM API decomposition:
-
-1. **Restructure Proto Files**
+1. Restructure Proto Files
   - Split monolithic `services.proto` into service-scoped folders (e.g.: onboarding, provisioning, maintenance, telemetry)
-  - Each service in its own directory: `api/proto/services/<service>/v1/<service>_service.proto`
+  - Each service in its own directory: `api/proto/services/<service>/v1/<service>.proto`
 
-2. **Create Scenario Manifests**
+2. Create Scenario Manifests
   - Add `scenarios/` directory with YAML files for each scenario
-  - Define service lists per scenario (eim-full, eim-minimal, etc.)
-
-3. **Update Makefile Build Process**
-  - Modify `buf-gen` target to read scenario manifest and generate only specified services
-  - Run `buf generate` with only the proto paths for services in that scenario
-  - Build per-scenario images with `docker-build` target
-  - Add `build-all-scenarios` target to build all images (with `clean` between each)
-  - Image naming convention: `apiv2:<scenario>-<version>`
-  - Critical: Must clean generated code between scenarios to avoid conflicts
-
-4. **Implement Capabilities Service**
-  - Add `CapabilitiesService` protobuf definition
-  - Implement handler to return scenario name, supported services, and version
-  - Endpoint reads scenario from build-time embedded configuration
-
-5. **Handler Registration**
-  - Conditionally register gRPC and HTTP gateway handlers based on compiled services
-  - Only services included in buf-gen for that scenario will have gRPC handlers
-
-6. **OpenAPI Generation**
-  - Produce per-scenario OpenAPI outputs: `api/openapi/<scenario>-openapi.yaml`
-  - Each image embeds its own OpenAPI spec
-
-7. **Update Helm Chart**
+  - Define service lists per scenario
+
+3. Update Makefile Build Process
+  - Modify `buf-gen` target to read scenario manifest and generate only specified services.
+  - Modify Makefile to allow building per-scenario images
+  - Add `docker-build-all` target to build images for all scenarios.
+  - Modify image naming convention.
+
+4. Update Helm Chart
   - Use single, common Helm chart for all scenarios
-  - Add `image.tag` value to select which scenario image to deploy
-  - Add `scenario.name` value for documentation/validation
-  - Expose capabilities endpoint in service definition
+  - Add a new value to select which scenario image to deploy (e.g.: `image.tag`)
 
-8. **ArgoCD Integration**
+5. ArgoCD Integration (to be confirmed)
   - Update ArgoCD application templates to use scenario-based image tags
   - Add `argo.eimScenario` value to cluster configs
   - Profiles specify which scenario to deploy (e.g., minimal profile uses `eim-minimal`)
 
-9. **CI/CD Pipeline**
+6. CI/CD Pipeline (to be confirmed)
   - Build all scenario images in CI
   - Tag with both scenario name and version
   - Push all images to registry
 
-### Test plan
+## Test plan
 
-This section describes a practical test plan to verify EIM's scenario-based APIs. It checks that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear and safe.
+Tests will verify that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear.
 
-- Integration: REST->gRPC gateway for included services; absence returns 404/501 with descriptive messages.
+- CLI integration: CLI can discover supported services; absence returns 404/501 with descriptive messages.
 - CLI E2E: Login discovery, caching, command blocking, error messaging.
-- Traefik: Route correctness per scenario; behavior for missing endpoints.
-- Data model: Conditional field requirements validated per scenario.
-- Profiles: Deploy each scenario via mage deploy:kindPreset; verify openapi and endpoints.
-- Regression: Ensure full EMF scenario parity with current API suite.
-
-### Open Issues
-- Strategy for OpenAPI proto-level splitting of services into seperate files - modified directory structure and buf-gen make target implementation.
-- Long-term plan post-Traefik gateway removal and impacts.
-- Handling version compatibility between CLI and the proposed capabilities service (what happens when the service does not exist and CLI expects it to exist?).
+- Deployment E2E: Deploy each scenario via mage and verify that expected endpoints exist and work.
+- Regression: Verify the full EMF scenario behaves identically to pre-decomposition.
+
+## Open Issues
+- Post-Traefik gateway removal and impacts.
+- What happens when the service does not exist and CLI expects it to exist?
 - Detailed scenario definitions on the Inventory level - NB APIs should be aligned with the Inventory resource availability in each scenario.
-- Managing cross-domain scenarios when EIM-only vs multi-domain APIs are required.
 - Managing apiv2 image version used by infra-core argo application - deployment level.
+- Scenario deployment through argocd/mage - is it in the scope of this ADR?
+- What will be the image naming convention (per scenario)? (example: `apiv2-<scenario>:<version>` or `apiv2:<version>-<scenario>`)
 
-### Uncertainties
+## Uncertainties
 
 - How does potential removal of the API gateway affect the exposure of APIs to the client? (In relation to ADR: https://jira.devtools.intel.com/browse/ITEP-79422)
-- How will the decomposition and availability of APIs within the API service be mapped back to the Inventory and the set of South Bound (SB) APIs?
 - Which approach to exposing the set of operational EMF services/features is accepted (In relation to ADR: https://github.com/open-edge-platform/edge-manageability-framework/pull/1106)
-
-
-
-
-
-
-

From 9573b83b0f2dac9352b59e73d6239b85b216eb72 Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Wed, 17 Dec 2025 05:01:42 -0800
Subject: [PATCH 10/17] Cleanup

---
 design-proposals/eim-nbapi-cli-decomposition.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index 6d2192fcc..ca057360b 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -46,9 +46,9 @@ The following questions must be answered and investigated:
 
 Currently planned decomposition tasks is focused on the EIM layer. The following is the list of deployment scenarios:
 
-**Full EMF** - Full EMF including all existing levels of APIs.
-**EIM Only** - EIM installed on its own, includes only the existing EIM APIs.
-**EIM vPRO Only** - EIM installed on its own, including only the EIM APIs required to support vPRO use cases.
+- **Full EMF** - Full EMF including all existing levels of APIs.
+- **EIM Only** - EIM installed on its own, includes only the existing EIM APIs.
+- **EIM vPRO Only** - EIM installed on its own, including only the EIM APIs required to support vPRO use cases.
 ### About EIM API (apiv2)

From 61e1085970599c26e7331391101ed24f02cbaca4 Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Wed, 17 Dec 2025 10:01:15 -0800
Subject: [PATCH 11/17] Cleanup

---
 .../eim-nbapi-cli-decomposition.md            | 82 +++++++------------
 1 file changed, 31 insertions(+), 51 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index ca057360b..0436c5b0a 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -118,6 +118,7 @@ The following are the investigated options to decomposing or exposing subsets of
 - Other approach to manipulate how a flavour of OpenAPIs spec can be generated from main spec, or how the API service can be build conditionally using same spec.
 
 ### Proposal: Decomposing the release of API service as a module
+
 This section describes how the apiv2 (NB API) service will be built, packaged, and released, enabling scenario-specific variants:
 
@@ -125,7 +126,7 @@ This section describes how the apiv2 (NB API) service will be built, packaged, a
 - `buf generate` will use only the proto files per services related to the scenario.
 - Separate container images will be built per scenario, each supporting only the required API subset and versioned accordingly:
   - `apiv2-full:x.x.x` (full EIM with all APIs)
-  - `apiv2-eim-vPRO:x.x.x` (EIM only for vPRO)
+  - `apiv2-eim-vpro:x.x.x` (EIM only for vPRO)
 - Single Helm chart for all scenarios will use a specific value to use scenario specific image
 - Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/minimal.yaml` sets `eimScenario: eim-vPRO` set in deployment configuration)
 
@@ -142,14 +143,12 @@ This section describes how the apiv2 (NB API) service will be built, packaged, a
 - ✅ Only compiles and includes needed services per scenario (smaller images)
 - ✅ Explicit APIs subset per image
 - ✅ Clear separation between scenarios
-- ✅ Better security (reduced attack surface — unused code doesn't exist)
+- ✅ Better security (unused code doesn't exist)
 - ✅ Single Helm chart to maintain
 - ✅ Image selection in Helm chart controlled by value that includes scenario name
-- ✅ Easy to switch scenarios by changing one Helm chart value
 
 **Cons:**
 - Multiple images to build and maintain in CI/CD
-- More storage in container registry
 - Need to rebuild all images for common code changes
 
 ### Proposal: How to Build the EIM API Service per Scenario
 
@@ -164,16 +163,16 @@ Split the monolithic `services.proto` file into multiple folders/files per servi
 api/proto/services/
 ├── onboarding/
 │   └── v1/
-│       └── onboarding_service.proto
+│       └── service1.proto
 ├── provisioning/
 │   └── v1/
-│       └── provisioning_service.proto
+│       └── service2.proto
 ├── maintenance/
 │   └── v1/
-│       └── maintenance_service.proto
+│       └── service3.proto
 └── telemetry/
     └── v1/
-        └── telemetry_service.proto
+        └── service4.proto
 ```
 
 #### Define Scenario Manifests
 
@@ -203,9 +202,7 @@ services:
 **Why manifest files:**
 - Makefile-driven builds read the manifest to determine which services to compile
 - Version controlled in git repository
-- No runtime database dependencies
-- Each scenario gets its own container image
-- Clear, declarative configuration
+- No database dependencies
 
 #### Modify Build Process
 
@@ -231,7 +228,7 @@ The best approach would be for the EMF to provide a service that communicates wh
 4. **Command Validation**: Before executing commands, the CLI checks the cached configuration and executes only the commands supported by the currently deployed scenario.
 5. **Error Handling**:
    - For CLI commands: Display user-friendly error message
-   - For direct curl calls: API returns HTTP 404 or 501 with descriptive message
+   - For direct curl calls: API returns HTTP 404 (endpoint not found) or 501 (HTTP method not implemented)
 
@@ -269,27 +266,18 @@ CLI Login Command Flow:
 
 ### 1. Traefik Gateway Compatibility
 
-**Status**: Traefik gateway will be removed for all workflows. User API calls will access EIM internal enpoints directly.
-
-**Action Items**:
-  - Investigate the impact
+- Traefik gateway will be removed for all workflows. User API calls will access EIM internal endpoints directly.
+- Investigate the impact
 
-### 2. Data Model and API Compatibility
+### 2. Data Model Changes
 
-**Action Items**:
-- Ensure APIs support specific use cases while maintaining compatibility with other workflows
-- Review and potentially modify data models to accommodate multiple scenarios
-- **Example**: Instance creation requires OS profile for general use case, but this may not be true for self-installed OSes/Edge Nodes
-- Collaborate with teams/ADR owners to establish:
-  - Required changes at Resource Manager level
+- Collaborate with teams/ADR owners to establish (per scenario):
+  - Required changes at Resource Managers level
   - Required changes at Inventory level
   - Impact on APIs from these changes
 
-**Timeline**: Investigation required once the set of services is known per each scenario
 
 ### 3. Scenario Definition and API Mapping
 
-**Action Items**:
 - Define all supported scenarios (e.g., full EMF, EIM only, EIM only vPRO)
 - For each scenario, document:
   - Required services (which resource managers are needed)
   - Required API endpoints (which operations are exposed)
   - Data model variations (if any)
   - Deployment configuration (Helm values, profiles)
 
-**Status**: Investigation in progress
-
 ## Summary of Current Requirements
-- Provide scenario-based API exposure for EIM (full and subsets).
+- Provide scenario-based EIM API sets (full and subsets).
+- Preserve APIs compatibility with Inventory.
 - Deliver per-scenario OpenAPI specs and container images.
-- Error handling for missing APIs per scenario.
 - Maintain single source of truth for API definitions with automated generation of scenario specific API specs.
 - Keep CLI operable against any scenario via discovery, caching, and command validation.
+- Provide error handling for missing APIs per scenario.
-- Preserve compatibility with Inventory.
-- Support Helm-driven configuration (image/tag, scenario selection). (to be confirmed)
-- Support API selection per scenario through Mage/ArgoCD. (to be confirmed)
+- Support Helm-driven configuration (image/tag, scenario selection).
+- Support API selection per scenario through Mage/ArgoCD.
 
 ## Rationale
+
 The approach aims to narrow the operational APIs surface to the specific scenarios being targeted, while ensuring the full EMF remains available for deployments.
 The proposed solution to APIs decomposition enables incremental decomposition that can be adopted progressively without breaking existing integrations or workflows.
 
 ## Investigation Needed
 
 The following investigation tasks will drive validation of the decomposition approach:
 
-- Validate feasibility of splitting services.proto and generating per-scenario specs via buf/protoc-gen-connect-openapi.
-- Evaluate Inventory data model variations and conditional field requirements per scenario.
-- Confirm deployment pipeline changes (mage targets) and ArgoCD app configs integration.
-- Measure impact on gRPC gateway generation and handler registration per scenario.
+1. Validate feasibility of splitting services.proto and generating per-scenario specs via buf/protoc-gen-connect-openapi.
+2. Evaluate Inventory data model variations per scenario.
+3. Verify impact of **1** and **2** on gRPC gateway generation and handler registration per scenario (buf code generation).
+4. Validate Argo CD application configs or Mage targets for scenario-specific deployments.
 
 ## Implementation Plan for Orch CLI
 
@@ -333,7 +320,7 @@ The following investigation tasks will drive validation of the decomposition app
 
 1. Restructure Proto Files
-  - Split monolithic `services.proto` into service-scoped folders (onboarding, provisioning, maintenance, telemetry)
+  - Split monolithic `services.proto` into service-scoped folders (e.g.: onboarding, provisioning, maintenance, telemetry)
   - Each service in its own directory: `api/proto/services/<service>/v1/<service>.proto`
 
 2. Create Scenario Manifests
@@ -350,12 +337,9 @@ The following investigation tasks will drive validation of the decomposition app
   - Use single, common Helm chart for all scenarios
   - Add a new value to select which scenario image to deploy (e.g.: `image.tag`)
 
-5. ArgoCD Integration (to be confirmed)
-  - Update ArgoCD application templates to use scenario-based image tags
-  - Add `argo.eimScenario` value to cluster configs
-  - Profiles specify which scenario to deploy (e.g., minimal profile uses `eim-minimal`)
+5. ArgoCD Integration
 
-6. CI/CD Pipeline (to be confirmed)
+6. CI/CD Pipeline
   - Build all scenario images in CI
   - Tag with both scenario name and version
   - Push all images to registry
 
@@ -364,20 +348,16 @@ The following investigation tasks will drive validation of the decomposition app
 
 Tests will verify that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear.
 
-- CLI integration: can disover supported service services; absence returns 404/501 with descriptive messages.
+- CLI integration: CLI can discover supported services; absence returns 404/501 with descriptive messages.
 - CLI E2E: Login discovery, caching, command blocking, error messaging.
 - Deployment E2E: Deploy each scenario via mage and verify that expected endpoints exist and work.
 - Regression: Verify the full EMF scenario behaves identically to pre-decomposition.
 
 ## Open Issues
+
 - Post-Traefik gateway removal and impacts.
-- What happens when the service does not exist and CLI expects it to exist?).
+- What happens when the service does not exist and CLI expects it to exist?
 - Detailed scenario definitions on the Inventory level - NB APIs should be aligned with the Inventory resource availability in each scenario.
 - Managing apiv2 image version used by infra-core argo application - deployment level.
-- Scenario deployment through argocd/mage - is it in the scope of this ADR?
+- Scenario deployment through argocd/mage
 - What will be the image naming convention (per scenario)? (example: `apiv2-<scenario>:<version>` or `apiv2:<version>-<scenario>`)
 
-## Uncertainties
-
-- How does potential removal of the API gateway affect the exposure of APIs to the client? (In relation to ADR: https://jira.devtools.intel.com/browse/ITEP-79422)
-- Which approach to exposing the set of operational EMF services/features is accepted (In relation to ADR: https://github.com/open-edge-platform/edge-manageability-framework/pull/1106)

From 48a607d6a1753eb6965ad4b63646ad67149001a1 Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Wed, 17 Dec 2025 10:04:14 -0800
Subject: [PATCH 12/17] Cleanup

---
 design-proposals/eim-nbapi-cli-decomposition.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index 0436c5b0a..7f66ebf67 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -348,7 +348,7 @@ The following investigation tasks will drive validation of the decomposition app
 
 Tests will verify that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear.
 
-- CLI integration: CLI can discover supported services; absence returns 404/501 with descriptive messages.
+- CLI integration: CLI can discover supported services; absence returns descriptive messages.
 - CLI E2E: Login discovery, caching, command blocking, error messaging.
 - Deployment E2E: Deploy each scenario via mage and verify that expected endpoints exist and work.
 - Regression: Verify the full EMF scenario behaves identically to pre-decomposition.
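
The login-time capability discovery and command validation that the proposal describes for orch-cli can be sketched as follows. This is an illustrative sketch only: the capabilities payload shape, the field names, and the command-to-API mapping are assumptions, not the actual orch-cli implementation (which is outside this file).

```python
# Hypothetical cached capabilities payload, as returned by the proposed
# capabilities service and stored after `orch-cli login`. The schema is an
# assumption for illustration.
SUPPORTED = {"scenario": "eim-vpro", "apis": ["onboarding", "provisioning"]}

def command_allowed(command_api: str, capabilities: dict) -> bool:
    """Return True when the API backing a CLI command is in the cached set."""
    return command_api in capabilities["apis"]

def run(command_api: str, capabilities: dict) -> str:
    """Execute a command only if its backing API is supported; otherwise
    produce the user-friendly error message the proposal calls for."""
    if not command_allowed(command_api, capabilities):
        apis = ", ".join(capabilities["apis"])
        return (f"Error: this deployment does not support '{command_api}'. "
                f"Available features: {apis}")
    return f"executing {command_api} command"

print(run("maintenance", SUPPORTED))
# → Error: this deployment does not support 'maintenance'. Available features: onboarding, provisioning
```

The check runs against the locally cached configuration, so no extra round trip to the orchestrator is needed per command.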
From cc95fa972d57909c397b753bb8e91e479e348923 Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Thu, 18 Dec 2025 03:02:54 -0800
Subject: [PATCH 13/17] Lint fix

---
 .../eim-nbapi-cli-decomposition.md            | 325 +++++++++++-------
 1 file changed, 200 insertions(+), 125 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index 7f66ebf67..d4cca4fc1 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -1,4 +1,4 @@
-# Design Proposal: Exposing only the required North Bound APIs and CLI commands for the workflow as part of EIM decomposition
+# Design Proposal: Scenario-Specific Northbound APIs and CLI Commands for EIM Decomposition
 
 Author(s) Edge Infrastructure Manager Team
 
@@ -6,12 +6,21 @@ Last updated: 17/12/25
 
 ## Abstract
 
-In the context of EIM decomposition, the North Bound API service should be treated as an independent interchangeable module.
+In the context of EIM decomposition, the North Bound API service should be treated as an
+independent interchangeable module.
The [EIM proposal for modular decomposition](https://github.com/open-edge-platform/edge-manageability-framework/blob/main/design-proposals/eim-modular-decomposition.md)
+calls out a need for exposing both a full set of EIM APIs, and a need for exposing only a subset of EIM APIs
+as required by individual workflows taking advantage of a modular architecture.
+This proposal explores how the exposed APIs can be decomposed
+and adjusted to reflect only the supported EIM services per particular scenario.
+It defines how different scenarios can be supported by API versions that match only the services
+and features required per scenario, while keeping the full API support in place.
 
 ## Background and Context
 
-There are multiple levels of APIs currently available within EMF, with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated).
+There are multiple levels of APIs currently available within EMF, with individual specs available for
+each domain in
+[orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated).
 
 The list of domain APIs includes:
 
@@ -25,13 +34,17 @@ The list of domain APIs includes:
 
 There are two levels to the API decomposition:
 
-- **Cross-domain decomposition**: Separation of the above domain-level APIs (e.g., only exposing EIM APIs - without Cluster APIs, App Orchestrator APIs and others)
-- **Intra-domain decomposition**: Separation within a domain (e.g., at the EIM domain level, where the overall set of APIs may include onboarding/provisioning/Day 2 APIs, but another workflow may support only onboarding/provisioning without Day 2 support)
+- **Cross-domain decomposition**: Separation of the above domain-level APIs
+(e.g., only exposing EIM APIs - without Cluster APIs, App Orchestrator APIs and others).
+- **Intra-domain decomposition**: Separation within a domain (e.g., at the EIM domain level,
+where the overall set of APIs may include onboarding/provisioning/Day 2 APIs,
+but another workflow may support only onboarding/provisioning without Day 2 support).

 The following questions must be answered and investigated:

 - How is the API service built currently?
-  - It is built from a proto definition and code is autogenerated by the "buf" tool - [See How NB API is Currently Built](#how-nb-api-is-currently-built)
+  - It is built from a proto definition and code is autogenerated by the "buf" tool -
+  [See How NB API is Currently Built](#how-nb-api-is-currently-built)
 - How is the API service container image built currently?
 - How are the API service Helm charts built currently?
 - What level of decomposition is needed for the required workflows?
@@ -52,94 +65,137 @@ Currently planned decomposition tasks is focused on the EIM layer. The following

 ### About EIM API (apiv2)

-In Edge Infratructure Manager (EIM) the apiv2 service represents the North Bound API service that exposes the EIM operations to the end user, who uses Web UI, Orch-CLI or direct API calls. Currently, the end user is not allowed to call the EIM APIs directly. The API calls reach first the API gateway, external to EIM (Traefik gateway), thay are mapped to EIM internal API endpoints and passed to EIM.
-**Note**: The current mapping of external APIs to internal APIs is 1:1, with no direct mapping to SB APIs. The API service communicates with Inventory via gRPC, which then manages the SB API interactions.
+In Edge Infrastructure Manager (EIM) the apiv2 service represents the North Bound API service that exposes
+the EIM operations to the end user, who uses Web UI, Orch-CLI or direct API calls. Currently,
+the end user is not allowed to call the EIM APIs directly. The API calls first reach the API gateway, external
+to EIM (Traefik gateway), where they are mapped to EIM internal API endpoints and passed to EIM.
+**Note**: The current mapping of external APIs to internal APIs is 1:1, with no direct mapping to SB APIs.
+The API service communicates with Inventory via gRPC, which then manages the SB API interactions.

-**Apiv2** is just one of EIM Resource Managers that talk to one EIM internal component - the Inventory - over gRPC. Similar to other RMs, it updates status of the Inventory resources and retrieves their status allowing user performing operations on the EIM resources for manipulating Edge Nodes.
-In EMF 2025.2 the apiv2 service is deployed via a helm chart deployed by Argo CD as one of its applications. The apiv2 service is run and deployed in a container kick-started from the apiv2 service container image.
+**Apiv2** is just one of the EIM Resource Managers that talk to one EIM internal component - the Inventory - over gRPC.
+Similar to other RMs, it updates the status of the Inventory resources and retrieves their status, allowing the user
+to perform operations on the EIM resources for manipulating Edge Nodes.
+In EMF 2025.2 the apiv2 service is deployed via a helm chart deployed by Argo CD as one of its applications.
+The apiv2 service is run and deployed in a container kick-started from the apiv2 service container image.

 #### How NB API is Currently Built

-Currently, apiv2 (infra-core repository) holds the definition of REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml.
+Currently, apiv2 (infra-core repository) holds the definition of REST API services in protocol buffer files
+(.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml.
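For illustration, a REST-mapped service definition in one of these .proto files could look roughly like the following; the service, message, and path names here are hypothetical, not the actual apiv2 definitions:

```proto
// Hypothetical sketch only - names and HTTP paths are illustrative,
// not the actual apiv2 service definitions.
syntax = "proto3";

package services.v1;

import "google/api/annotations.proto";

service HostService {
  // Exposed over REST by the generated gateway code as GET /v1/hosts.
  rpc ListHosts(ListHostsRequest) returns (ListHostsResponse) {
    option (google.api.http) = {get: "/v1/hosts"};
  }
}

message ListHostsRequest {}

message ListHostsResponse {
  // Identifiers of the Inventory host resources.
  repeated string host_ids = 1;
}
```

From a definition of this shape, protoc-gen-connect-openapi can derive the corresponding OpenAPI path and schemas.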
The input to protoc-gen-connect-openapi comes from:

- `api/proto/services` directory - one file (services.proto) containing API operations on all the available resources (Service Layer)
- `api/proto/resources` directory - multiple files with data models - separate file with data model per single inventory resource

-Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml).
+Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec.
+It is configured as a plugin within buf (buf.gen.yaml).

 #### About Buf

-Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier as it replaces messy protoc commands with clean config file. It is a all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management.
+Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with
+.proto files easier as it replaces messy protoc commands with a clean config file.
+It is an all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management.

-In infra-core/apiv2, "buf generate" command is executed within the **make generate** or **make buf-gen** target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory.
+In infra-core/apiv2, "buf generate" command is executed within the **make generate** or
+**make buf-gen** target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory.

-Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services (services.proto) and outputs the openapi spec in api/openapi.
+Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services
+(services.proto) and outputs the openapi spec in api/openapi.
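As a rough sketch of that wiring, a buf.gen.yaml could register the plugin like this; the output paths and plugin set are assumptions, not the repository's actual configuration:

```yaml
# Illustrative buf.gen.yaml sketch - output paths and plugin options
# are assumptions, not the actual infra-core/apiv2 configuration.
version: v1
plugins:
  # OpenAPI spec generated from the .proto service definitions
  - plugin: connect-openapi
    out: api/openapi
  # Go structs and gRPC client/server stubs
  - plugin: go
    out: internal/pbapi
  # REST-to-gRPC proxy handlers
  - plugin: grpc-gateway
    out: internal/pbapi
```

Running `buf generate` with such a config produces the spec and the generated Go code in one step.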
**Key Items:** + - Input: api/proto/**/*.proto - Config: buf.gen.yaml, buf.work.yaml, buf.yaml - Output: openapi.yaml - Tool: protoc-gen-connect-openapi Based on the content of api/proto/ , buf also generates: -- the Go code ( Go structs, gRPC clients/services) in internal/pbapi -- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go ) -- documentation: docs/proto.md + +- The Go code ( Go structs, gRPC clients/services) in internal/pbapi. +- gRPC gateway: REST to gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go). +- Documentation: docs/proto.md. ## Decomposing the API service -An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various flavors of the same API service providing different sets of APIs. +An investigation needs to be conducted into how the API service can be decomposed to be rebuilt as various +flavors of the same API service providing different sets of APIs. **Design Principles:** -1. **Single Source of Truth**: The total set of APIs serves as the main source of the API service, and other API subsets are automatically derived from this based on required functionality. This makes maintenance simple and centralized. +1. **Single Source of Truth**: The total set of APIs serves as the main source of the API service, +and other API subsets are automatically derived from this based on required functionality. +This makes maintenance simple and centralized. -2. **Domain-Level Decomposition**: The API service should be decomposed at the domain level, meaning that all domains or a subset of domains should be available as part of the EMF. - - At this level, APIs are already decomposed/modular and deployed as separate services (e.g., EIM APIs, Cluster APIs, App Orchestrator APIs) - - **For EIM-focused scenarios**: Only the EIM domain APIs would be included +2. 
**Domain-Level Decomposition**: The API service should be decomposed at the domain level, +meaning that all domains or a subset of domains should be available as part of the EMF. + - At this level, APIs are already decomposed/modular and deployed as separate services + (e.g., EIM APIs, Cluster APIs, App Orchestrator APIs). + - **For EIM-focused scenarios**: Only the EIM domain APIs would be included. -3. **Intra-Domain Decomposition**: The API service should be decomposed within the domain level, meaning that only a subset of available APIs may need to be released and/or exposed at the API service level. - - **Example**: Within the EIM domain, we may not want to expose Day 2 functionality for some workflows, even though Day 2 operations are part of the full EIM OpenAPI spec - - This allows workflows focused on onboarding/provisioning to omit upgrade, maintenance, and troubleshooting APIs +3. **Intra-Domain Decomposition**: The API service should be decomposed within the domain level, meaning +that only a subset of available APIs may need to be released and/or exposed at the API service level. + - **Example**: Within the EIM domain, we may not want to expose Day 2 functionality for some workflows, + even though Day 2 operations are part of the full EIM OpenAPI spec. + - This allows workflows focused on onboarding/provisioning to omit upgrade, maintenance, and troubleshooting APIs. 4. **Resource-Level Decomposition**: The API service may also need to be decomposed at the individual internal service level. - - **Example**: Host resource might need different data models across use cases - - **Note**: This would require separate data models and may increase complexity significantly + - **Example**: Host resource might need different data models across use cases. + - **Note**: This would require separate data models and may increase complexity significantly. The following are the investigated options to decomposing or exposing subsets of APIs. 
-- ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF. -- Maintain multiple OpenAPI specification - while possible to create multiple OpenAPI specs, the maintenance of same APIs across specs will be a large burden - still let's keep this option in consideration in terms of auto generating multiple specs from top spec. -- ~~Authentication & Authorization Based Filtering~~ - this is a no go for us as we do not control the end users of the EMF, and we want to provide tailored modular product for each workflow. -- ~~API Versioning strategy~~ - Creating different API versions for each use-case - too much overhead without benefits similar to maintaining multiple OpenAPI specs. -- ~~Proxy/Middleware Layer~~ - Similar to API Gateway - does not fit our use cases -- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us the automated approach for creating individual OpenAPI specs for workflows based on labels. -- Other approach to manipulate how a flavour of OpenAPIs spec can be generated from main spec, or how the API service can be build conditionally using same spec. +- ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan +to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF. +- Maintain multiple OpenAPI specification - while possible to create multiple OpenAPI specs, +the maintenance of same APIs across specs will be a large burden - still let's keep this option in consideration in +terms of auto generating multiple specs from top spec. 
+- ~~Authentication & Authorization Based Filtering~~ - this is a no go for us as we do not control the
+end users of the EMF, and we want to provide a tailored modular product for each workflow.
+- ~~API Versioning strategy~~ - Creating different API versions for each use-case - too much overhead
+without benefits similar to maintaining multiple OpenAPI specs.
+- ~~Proxy/Middleware Layer~~ - Similar to API Gateway - does not fit our use cases.
+- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-)
+to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints,
+operations, or schemas. This approach is worth investigating to see if it can give us the automated approach for
+creating individual OpenAPI specs for workflows based on labels.
+- Another approach would be to manipulate how a flavour of the OpenAPI spec can be generated from the main spec,
+or how the API service can be built conditionally using the same spec.

 ### Proposal: Decomposing the release of API service as a module

-This section describes how the apiv2 (NB API) service will be built, packaged, and released, enabling scenario-specific variants:
+This section describes how the apiv2 (NB API) service will be built, packaged, and released,
+enabling scenario-specific variants:

-- The build of the API service itself will depend on the results of "top-to-bottom" and "bottom-to-top" decomposition investigations.
+- The build of the API service itself will depend on the results of "top-to-bottom"
+and "bottom-to-top" decomposition investigations.
 - API subsets supported per scenario will be stored in the respective scenario manifest.
 - `buf generate` will use only the proto files per services related to the scenario.
-- Separate container images will be built per scenario, each supporting only the required API subset and versioned accordingly: - - `apiv2-full:x.x.x` (full EIM with all APIs) - - `apiv2-eim-vpro:x.x.x` (EIM only for vPRO) +- Separate container images will be built per scenario, each supporting only +the required API subset and versioned accordingly: + - `apiv2:x.x.x` (full EMF) + - `apiv2:eim-x.x.x` (full EIM only) + - `apiv2:eim-vpro-x.x.x` (EIM only for vPRO) - Single Helm chart for all scenarios will use a specific value to use scenario specific image -- Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/minimal.yaml` sets `eimScenario: eim-vPRO` set in deployment configuration) +- Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/eim-only-vpro.yaml` +sets `eimScenario: eim-only-vpro` set in deployment configuration) -**Recommended Release Approach:** Build and release multiple apiv2 container images - one per scenario. Single Helm chart for all scenarios will use a specific value to use scenario specific image. +**Recommended Release Approach:** + +- Build and release multiple apiv2 container images - one per scenario. +Single Helm chart for all scenarios will use a specific value to use scenario specific image. 
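As an illustrative sketch, that single value could drive the image tag in the chart's values file; the key names and registry below are assumptions, not the actual chart values:

```yaml
# Illustrative values.yaml fragment - key names and registry are assumptions.
image:
  repository: registry.example.com/edge-orch/apiv2
  # Scenario selector, e.g. "eim-vpro" resolves to the tag eim-vpro-x.x.x
  scenario: eim-vpro
  version: 1.2.3
```

The deployment template would then render the tag from these values (for example as `{{ .Values.image.scenario }}-{{ .Values.image.version }}`), so switching scenarios is a one-value change in the Argo profile.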
**Justification:** -`buf generate` doesn't just create OpenAPI specs — it generates the entire Go codebase (related to the APIs defined in the spec) including: + +`buf generate` doesn't just create OpenAPI specs — it generates the entire +Go codebase (related to the APIs defined in the spec) including: + - Go structs based on proto definitions - gRPC client and server code -- HTTP gateway handlers (REST to gRPC proxy) +- HTTP gateway handlers (REST to gRPC) - Type conversions and validators **Pros:** + - ✅ Only compiles and includes needed services per scenario (smaller images) - ✅ Explicit APIs subset per image - ✅ Clear separation between scenarios @@ -148,119 +204,130 @@ This section describes how the apiv2 (NB API) service will be built, packaged, a - ✅ Image selection in Helm chart controlled by value that includes scenario name **Cons:** + - Multiple images to build and maintain in CI/CD - Need to rebuild all images for common code changes ### Proposal: How to Build the EIM API Service per Scenario -The Apiv2 service built per scenario will expose only the required APIs, while preserving compatibility across scenarios. +The Apiv2 service built per scenario will expose only the required APIs, +while preserving compatibility across scenarios. #### Restructure Proto Definitions Split the monolithic `services.proto` file into multiple folders/files per service: -``` -api/proto/services/ -├── onboarding/ -│ └── v1/ -│ └── service1.proto -├── provisioning/ -│ └── v1/ -│ └── service2.proto -├── maintenance/ -│ └── v1/ -│ └── service3.proto -└── telemetry/ - └── v1/ - └── service4.proto +```bash + api/proto/services/ + ├── onboarding/ + │ └── v1/ + │ └── service1.proto + ├── provisioning/ + │ └── v1/ + │ └── service2.proto + ├── maintenance/ + │ └── v1/ + │ └── service3.proto + └── telemetry/ + └── v1/ + └── service4.proto ``` #### Define Scenario Manifests Maintain scenario manifests that list the REST API services supported by each scenario. 
-**Recommended Approach: Scenario manifest files in repository** +**Recommended Approach:** Scenario manifest files in repository ```yaml -# scenarios/eim-minimal.yaml -name: eim-minimal -description: Minimal EIM for onboarding and provisioning only -services: - - onboarding - - provisioning - -# scenarios/eim-full.yaml -name: eim-full -description: Full EIM with all capabilities -services: - - onboarding - - provisioning - - maintenance - - telemetry + # scenarios/eim-minimal.yaml + name: eim-minimal + description: Minimal EIM for onboarding and provisioning only + services: + - onboarding + - provisioning + + # scenarios/eim-full.yaml + name: eim-full + description: Full EIM with all capabilities + services: + - onboarding + - provisioning + - maintenance + - telemetry ``` **Why manifest files:** + - Makefile-driven builds read the manifest to determine which services to compile - Version controlled in git repository - No database dependencies #### Modify Build Process -Modify **buf-gen** make target to read the manifests and build the openapi spec as per scenario manifest. -Example of "buf generate" command to generate code supporting onboarding and provisioning services: +Modify **buf-gen** make target to read the manifests and build the openapi spec as per scenario manifest. +Example of "buf generate" command to generate code supporting onboarding and provisioning services: ```bash buf generate api/proto/services/onboarding/v1 api/proto/services/provisioning/v1 ``` -The generated `openapi.yaml` file will contain only the services supported by the particular scenario. The output file can be named per scenario. The build will also generate the corresponding Go types, gRPC/gateway code, and handlers for those APIs. An image will be built per scenario and pushed seperately. +The generated `openapi.yaml` file will contain only the services supported by the particular scenario. +The output file can be named per scenario. 
The build will also generate the corresponding Go types,
+gRPC gateway code, and handlers for those APIs. An image will be built per scenario and pushed separately.

 ## Consuming the Scenario Specific APIs from the CLI

 ### Proposal

-The best approach would be for the EMF to provide a service that communicates which endpoints/APIs are currently supported by the deployed API service. Proposed in ADR https://github.com/open-edge-platform/edge-manageability-framework/pull/1106 . Development of such service is outside of this ADR's scope.
+The best approach would be for the EMF to provide a service that communicates which endpoints/APIs are
+currently supported by the deployed API service.
+Proposed in ADR https://github.com/open-edge-platform/edge-manageability-framework/pull/1106 .
+Development of such service is outside of this ADR's scope.

 **CLI Workflow:**

-1. **Build**: CLI is built based on the full REST API spec (generated with `SCENARIO=eim-full`)
-2. **Capability Discovery on Login**: The CLI queries the new capabilities service endpoint, upon user login, to request API capability information.
-3. **Configuration Caching**: The CLI saves the supported API configuration locally
-4. **Command Validation**: Before executing commands, the CLI checks the cached configuration and executes only the commands supported by the currently deployed scenario.
-5. **Error Handling**:
-   - For CLI commands: Display user-friendly error message
-   - For direct curl calls: API returns HTTP 404 (endpoint not found) or 501 (HTTP method not implemented)
+1. **Build**: CLI is built based on the full REST API spec (generated with `SCENARIO=eim-full`).
+2. **Capability Discovery on Login**: The CLI queries the new capabilities service endpoint, upon user login,
+to request API capability information.
+3. **Configuration Caching**: The CLI saves the supported API configuration locally.
+4.
**Command Validation**: Before executing commands, the CLI checks the cached configuration and executes +only the commands supported by the currently deployed scenario. +5. **Error Handling**: + - For CLI commands: Display user-friendly error message. + - For direct curl calls: API returns HTTP 404 (endpoint not found) or 501 (HTTP method not implemented). + +**CLI Login Command Flow** -``` -CLI Login Command Flow: - -┌─────────────────┐ -│ User runs │ -│ orch-cli login │ -└────────┬────────┘ - │ - ▼ -┌─────────────────────────┐ -│ GET /../capabilities│ ← Example of the new service endpoint -└────────┬────────────────┘ - │ - ▼ -┌──────────────────────────┐ -│ Response: │ -│ { │ -│ "scenario": "eim-vpro",│ -│ "apis": [ ← Example of the new service responce -│ "onboarding", │ -│ "provisioning" │ -│ ] │ -│ } │ -└────────┬─────────────────┘ - │ - ▼ -┌─────────────────────────┐ -│ CLI caches config │ -│ in ~/.orch-cli/config │ -└─────────────────────────┘ -``` +```bash + + ┌─────────────────┐ + │ User runs │ + │ orch-cli login │ + └────────┬────────┘ + │ + ▼ + ┌─────────────────────────┐ + │ GET /../capabilities│ ← Example of the new service endpoint + └────────┬────────────────┘ + │ + ▼ + ┌──────────────────────────┐ + │ Response: │ + │ { │ + │ "scenario": "eim-vpro",│ + │ "apis": [ ← Example of the new service responce + │ "onboarding", │ + │ "provisioning" │ + │ ] │ + │ } │ + └────────┬─────────────────┘ + │ + ▼ + ┌─────────────────────────┐ + │ CLI caches config │ + │ in ~/.orch-cli/config │ + └─────────────────────────┘ + ``` ## Summary of Action Items @@ -286,6 +353,7 @@ CLI Login Command Flow: - Deployment configuration (Helm values, profiles) ## Summary of Current Requirements + - Provide scenario-based EIM API sets (full and subsets). - Preserve APIs compatibility with Inventory. - Deliver per-scenario OpenAPI specs and container images. 
@@ -297,7 +365,10 @@ CLI Login Command Flow: ## Rationale -The approach aims to narrow the operational APIs surface to the specific scenarios being targeted, while ensuring the full EMF remains available for deployments. The proposed solution to APIs decomposition enables incremental decomposition that can be adopted progressively without breaking existing integrations or workflows. +The approach aims to narrow the operational APIs surface to the specific scenarios being targeted, +while ensuring the full EMF remains available for deployments. +The proposed solution to APIs decomposition enables incremental decomposition that can be adopted +progressively without breaking existing integrations or workflows. ## Investigation Needed @@ -314,13 +385,14 @@ The following investigation tasks will drive validation of the decomposition app 2. Cache discovered capabilities in orch-cli config. 3. Validate user commands against supported APIs 4. Implement error handling for unsupported APIs. -4. Adjust help to hide unsupported commands/options. -5. Define E2E tests targeting all scenarios. +5. Adjust help to hide unsupported commands/options. +6. Define E2E tests targeting all scenarios. ## Implementation Plan for EIM API 1. Restructure Proto Files - - Split monolithic `services.proto` into service-scoped folders (e.g.: onboarding, provisioning, maintenance, telemetry) + - Split monolithic `services.proto` into service-scoped folders + (e.g.: onboarding, provisioning, maintenance, telemetry) - Each service in its own directory: `api/proto/services//v1/.proto` 2. Create Scenario Manifests @@ -346,7 +418,8 @@ The following investigation tasks will drive validation of the decomposition app ## Test plan -Tests will verify that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear. 
+Tests will verify that minimal and full deployments work as expected, that clients can discover
+supported features, and that errors are clear.

 - CLI integration: CLI can discover supported services; absence returns descriptive messages.
 - CLI E2E: Login discovery, caching, command blocking, error messaging.
@@ -357,7 +430,9 @@ Tests will verify that minimal and full deployments work as expected, that clien

 - Post-Traefik gateway removal and impacts.
 - What happens when the service does not exist and CLI expects it to exist?.
-Detailed scenario definitions on the Inventory level - NB APIs should be alligned
+with the Inventory resource availability in each scenario.
 - Managing apiv2 image version used by infra-core argo application - deployment level.
 - Scenario deployment through argocd/mage
-- What will be the Image naming convention (per scenario)? (example: `apiv2-:` or `apiv2:-`)
+- What will be the Image naming convention (per scenario)?
+(example: `apiv2-:` or `apiv2:-`)

From 190d15c2d2dc9c40e10b8c535fcba74451dda72b Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Thu, 18 Dec 2025 03:58:15 -0800
Subject: [PATCH 14/17] Lint fix

---
 .../eim-nbapi-cli-decomposition.md | 50 ++++++++++---------
 1 file changed, 27 insertions(+), 23 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index d4cca4fc1..9fbc82f45 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -84,11 +84,14 @@ Currently, apiv2 (infra-core repository) holds the definition of REST API servic
 (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec - openapi.yaml.
The input to protoc-gen-connect-openapi comes from: -- `api/proto/services` directory - one file (services.proto) containing API operations on all the available resources (Service Layer) -- `api/proto/resources` directory - multiple files with data models - separate file with data model per single inventory resource + +- `api/proto/services` directory - one file (services.proto) containing API operations on +all the available resources (Service Layer) +- `api/proto/resources` directory - multiple files with data models - separate file with data +model per single inventory resource Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. -It is configured as a plugin within buf (buf.gen.yaml). +It is configured as a plugin within buf (buf.gen.yaml). #### About Buf @@ -144,7 +147,7 @@ that only a subset of available APIs may need to be released and/or exposed at t The following are the investigated options to decomposing or exposing subsets of APIs. -- ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan +- ~~API Gateway that would only expose certain endpoints to user~~ - this is a no go for us as we plan. to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF. - Maintain multiple OpenAPI specification - while possible to create multiple OpenAPI specs, the maintenance of same APIs across specs will be a large burden - still let's keep this option in consideration in @@ -282,11 +285,12 @@ gRPC gateway code, and handlers for those APIs. An image will be built per scena The best approach would be for the EMF to provide a service that communicates which endpoints/APIs are currently supported by the deployed API service. -Proposed in ADR https://github.com/open-edge-platform/edge-manageability-framework/pull/1106 . -Development of such service is outside of this ADR's scope. 
+Proposed in [Design Proposal: Orchestrator Component Status Service](https://github.com/open-edge-platform/edge-manageability-framework/blob/main/design-proposals/platform-component-status-service.md). +Development of such service is outside of this ADR's scope. + +#### CLI Workflow -**CLI Workflow:** -1. **Build**: CLI is built based on the full REST API spec (generated with `SCENARIO=eim-full`). +1. **Build**: CLI is built based on the full REST API spec. 2. **Capability Discovery on Login**: The CLI queries the new capabilities service endpoint, upon user login, to request API capability information. 3. **Configuration Caching**: The CLI saves the supported API configuration locally. @@ -296,10 +300,9 @@ only the commands supported by the currently deployed scenario. - For CLI commands: Display user-friendly error message. - For direct curl calls: API returns HTTP 404 (endpoint not found) or 501 (HTTP method not implemented). -**CLI Login Command Flow** +#### CLI Login Command Flow ```bash - ┌─────────────────┐ │ User runs │ │ orch-cli login │ @@ -336,21 +339,22 @@ only the commands supported by the currently deployed scenario. - Traefik gateway will be removed for all workflows. User API calls will access EIM internal enpoints directly. - Investigate the impact -### 2. Data Model Changes +### 2. Scenario Definition and API Mapping -- Collaborate with teams/ADR owners to establish (per scenario): - - Required changes at Resource Managers level - - Required changes at Inventory level - - Impact on APIs from these changes +- Define all supported scenarios: + - Full EMF + - EIM-only + - EIM-only vPRO +- For each scenario, document: + - Required services (which resource managers are needed) + - Required API endpoints (which operations are exposed) + - Deployment configuration (Helm values, profiles) -### 3. Scenario Definition and API Mapping +### 3. 
Data Model Changes -- Define all supported scenarios (e.g., full EMF, EIM only, EIM only vPRO) -- For each scenario, document: - - Required services (which resource managers are needed) - - Required API endpoints (which operations are exposed) - - Data model variations (if any) - - Deployment configuration (Helm values, profiles) +- Collaborate with teams/ADR owners to establish (per scenario): + - Required changes at Inventory level + - Impact on APIs from these changes (changes in data models) ## Summary of Current Requirements @@ -419,7 +423,7 @@ The following investigation tasks will drive validation of the decomposition app ## Test plan Tests will verify that minimal and full deployments work as expected, that clients can discover -supported features, and that errors are clear. +supported features, and that errors are clear. - CLI integration: CLI can discover supported services; absence returns descriptive messages. - CLI E2E: Login discovery, caching, command blocking, error messaging. From 8d0c0d1690a6a8beb8fff5744aef73a504d1eec9 Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Thu, 18 Dec 2025 04:16:54 -0800 Subject: [PATCH 15/17] Lint fix --- .../eim-nbapi-cli-decomposition.md | 39 +++++++++---------- 1 file changed, 18 insertions(+), 21 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index 9fbc82f45..4eebe0404 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -105,7 +105,7 @@ In infra-core/apiv2, "buf generate" command is executed within the **make genera Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services (services.proto) and outputs the openapi spec in api/openapi. 
-**Key Items:** +Key Items: - Input: api/proto/**/*.proto - Config: buf.gen.yaml, buf.work.yaml, buf.yaml @@ -182,9 +182,7 @@ the required API subset and versioned accordingly: - Argo profiles can specify different scenarios (e.g., `orch-configs/profiles/eim-only-vpro.yaml` sets `eimScenario: eim-only-vpro` set in deployment configuration) -**Recommended Release Approach:** - -- Build and release multiple apiv2 container images - one per scenario. +**Recommended Release Approach:** Build and release multiple apiv2 container images - one per scenario. Single Helm chart for all scenarios will use a specific value to use scenario specific image. **Justification:** @@ -239,25 +237,24 @@ Split the monolithic `services.proto` file into multiple folders/files per servi #### Define Scenario Manifests Maintain scenario manifests that list the REST API services supported by each scenario. - -**Recommended Approach:** Scenario manifest files in repository +Scenario manifest files will be kept in repository. The following are the examples of the manifests: ```yaml - # scenarios/eim-minimal.yaml - name: eim-minimal - description: Minimal EIM for onboarding and provisioning only + # scenarios/eim-only.yaml + name: eim-only + description: Only EIM services: - onboarding - provisioning - - # scenarios/eim-full.yaml - name: eim-full - description: Full EIM with all capabilities - services: - - onboarding - provisioning - maintenance - telemetry + + # scenarios/eim-vpro-only.yaml + name: eim-vpro + description: EIM vPRO Only + services: + - onboarding ``` **Why manifest files:** @@ -342,13 +339,13 @@ only the commands supported by the currently deployed scenario. ### 2. 
Scenario Definition and API Mapping - Define all supported scenarios: - - Full EMF - - EIM-only - - EIM-only vPRO + - Full EMF + - EIM-only + - EIM-only vPRO - For each scenario, document: - - Required services (which resource managers are needed) - - Required API endpoints (which operations are exposed) - - Deployment configuration (Helm values, profiles) + - Required services (which resource managers are needed) + - Required API endpoints (which operations are exposed) + - Deployment configuration (Helm values, profiles) ### 3. Data Model Changes From c0a27880f869a33bab4bf2c86864f9f4e434b3d4 Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Thu, 18 Dec 2025 04:23:56 -0800 Subject: [PATCH 16/17] Fix lint errors in other documents --- README.md | 6 +++--- design-proposals/app-orch-deploy-applications.md | 3 ++- design-proposals/eim-nbapi-cli-decomposition.md | 6 +++--- design-proposals/eim-pxe-with-managed-emf.md | 3 ++- design-proposals/platform-installer-simplification.md | 6 +++--- design-proposals/vpro-device.md | 3 ++- 6 files changed, 15 insertions(+), 12 deletions(-) diff --git a/README.md b/README.md index d9634c46c..bb6cdee56 100644 --- a/README.md +++ b/README.md @@ -48,9 +48,9 @@ distributed edges - [UI](https://github.com/open-edge-platform/orch-ui): The web user interface for the Edge Orchestrator, allowing the user to manage most of the features of the product in an intuitive, visual, manner without having to trigger a series of APIs individually. -- [CLI](https://github.com/open-edge-platform/orch-cli): The command line interface for the Edge Orchestrator, allowing the -user to manage most of the features of the product in an intuitive, text-based manner without having to trigger a series -of APIs individually. 
+- [CLI](https://github.com/open-edge-platform/orch-cli): The command line interface for the Edge Orchestrator, +allowing the user to manage most of the features of the product in an intuitive, +text-based manner without having to trigger a series of APIs individually. - [Observability](https://docs.openedgeplatform.intel.com/edge-manage-docs/main/developer_guide/observability/index.html): A modular observability stack that provides visibility into the health and performance of the system, including logging, reporting, alerts, and SRE data from Edge Orchestrator components and Edge Nodes. diff --git a/design-proposals/app-orch-deploy-applications.md b/design-proposals/app-orch-deploy-applications.md index fff3ca2f4..ee430f0d7 100755 --- a/design-proposals/app-orch-deploy-applications.md +++ b/design-proposals/app-orch-deploy-applications.md @@ -197,7 +197,8 @@ like Profiles and Parameter Templates are lost. - Update the `Edit Deployment` page similar to the changes to `Create Deployment` - Update the `Deployments` list page to support linkage to both Apps and DP - - Update the `Application` page to have a deployment link and display the `is_deployed` field in both list and detail view. + - Update the `Application` page to have a deployment link and display the `is_deployed` + field in both list and detail view. - Update any status tables and dashboards as necessary to support these changes. diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index 4eebe0404..77aae3bf6 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -2,7 +2,7 @@ Author(s) Edge Infrastructure Manager Team -Last updated: 17/12/25 +Last updated: 18/12/25 ## Abstract @@ -350,8 +350,8 @@ only the commands supported by the currently deployed scenario. ### 3. 
Data Model Changes - Collaborate with teams/ADR owners to establish (per scenario): - - Required changes at Inventory level - - Impact on APIs from these changes (changes in data models) + - Required changes at Inventory level + - Impact on APIs from these changes (changes in data models) ## Summary of Current Requirements diff --git a/design-proposals/eim-pxe-with-managed-emf.md b/design-proposals/eim-pxe-with-managed-emf.md index 9bcbe9beb..40afd304e 100644 --- a/design-proposals/eim-pxe-with-managed-emf.md +++ b/design-proposals/eim-pxe-with-managed-emf.md @@ -30,7 +30,8 @@ Given its small footprint it is possible to deploy PXE server on site using seve In this solution, the PXE server only stores the `ipxe.efi` binary (that is downloaded from the remote orchestrator), and serves it to local Edge Nodes attempting the PXE boot. During the PXE boot, ENs download `ipxe.efi` and boot into it. -The iPXE script includes a logic to fetch IP address from a local DHCP server and download Micro-OS from the remote EMF orchestrator. +The iPXE script includes logic to fetch an IP address from a local DHCP server and download Micro-OS +from the remote EMF orchestrator. Once booted into Micro-OS, the provisioning process is taken over by the cloud-based orchestrator. From now on, ENs communicate with the remote EMF orchestrator to complete OS provisioning. The secure channel is ensured by using HTTPS communication with JWT authorization. diff --git a/design-proposals/platform-installer-simplification.md b/design-proposals/platform-installer-simplification.md index 0f3f4f9a2..c1bdd1508 100644 --- a/design-proposals/platform-installer-simplification.md +++ b/design-proposals/platform-installer-simplification.md @@ -449,9 +449,9 @@ deployment is delayed. #### Eliminate ArgoCD -Once the syncwaves have been reduced or eliminated, then it is feasible to eliminate ArgoCD in favor of a simpler tool.
We -will explore alternatives such as umbrella charts, the helmfile tool, or other opensource solutions. We may explore repo -and/or chart consolidation to make the helm chart structure simpler. +Once the syncwaves have been reduced or eliminated, it is feasible to eliminate ArgoCD in favor of a +simpler tool. We will explore alternatives such as umbrella charts, the helmfile tool, or other open-source +solutions. We may explore repo and/or chart consolidation to make the helm chart structure simpler. Eliminating argocd will allow the following pods to be eliminated from the platform: diff --git a/design-proposals/vpro-device.md b/design-proposals/vpro-device.md index 8c22912f4..04eb05482 100644 --- a/design-proposals/vpro-device.md +++ b/design-proposals/vpro-device.md @@ -313,4 +313,5 @@ had access to DMT capabilities. This provided critical recovery mechanisms including the ability to remotely reboot the device if provisioning got stuck, access the device out-of-band for troubleshooting, and recover from provisioning failures without requiring physical access to the device. -By moving activation to post-OS deployment, we lose all these recovery capabilities during the critical OS provisioning phase. +By moving activation to post-OS deployment, we lose all these recovery capabilities during the critical +OS provisioning phase.
From 57f234a08eb83b320b31cd8db34c7c7ea4837973 Mon Sep 17 00:00:00 2001 From: Joanna Kossakowska Date: Thu, 18 Dec 2025 04:59:07 -0800 Subject: [PATCH 17/17] Minor fixes --- .../eim-nbapi-cli-decomposition.md | 27 ++++++++++--------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md index 77aae3bf6..69d923db1 100644 --- a/design-proposals/eim-nbapi-cli-decomposition.md +++ b/design-proposals/eim-nbapi-cli-decomposition.md @@ -69,13 +69,14 @@ In Edge Infrastructure Manager (EIM) the apiv2 service represents the North Bound the EIM operations to the end user, who uses Web UI, Orch-CLI or direct API calls. Currently, the end user is not allowed to call the EIM APIs directly. The API calls first reach the API gateway, external to EIM (Traefik gateway), where they are mapped to EIM internal API endpoints and passed to EIM. + **Note**: The current mapping of external APIs to internal APIs is 1:1, with no direct mapping to SB APIs. The API service communicates with Inventory via gRPC, which then manages the SB API interactions. **Apiv2** is just one of EIM Resource Managers that talk to one EIM internal component - the Inventory - over gRPC. Similar to other RMs, it updates status of the Inventory resources and retrieves their status, allowing the user to perform operations on the EIM resources for manipulating Edge Nodes. -In EMF 2025.2 the apiv2 service is deployed via a helm chart deployed by Argo CD as one of its applications. +In EMF 2025.2, the apiv2 service is deployed via a helm chart deployed by Argo CD as one of its applications. The apiv2 service is run and deployed in a container kick-started from the apiv2 service container image.
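As a rough sketch of the deployment shape described above, Helm values for the apiv2 chart might look like the following; all key names and the registry path are illustrative assumptions, not the chart's actual values:

```yaml
# Hypothetical values for the apiv2 Helm chart deployed by Argo CD.
# Key names, registry path, and tag are assumptions for illustration only.
image:
  repository: registry.example.com/infra-core/apiv2
  tag: "2025.2"
replicaCount: 1
```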
#### How NB API is Currently Built @@ -86,9 +87,9 @@ Currently, apiv2 (infra-core repository) holds the definition of REST API servic The input to protoc-gen-connect-openapi comes from: - `api/proto/services` directory - one file (services.proto) containing API operations on -all the available resources (Service Layer) +all the available resources (Service Layer). - `api/proto/resources` directory - multiple files with data models - separate file with data -model per single inventory resource +model per single inventory resource. Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec. It is configured as a plugin within buf (buf.gen.yaml). @@ -99,7 +100,7 @@ Buf is a replacement for protoc (the standard Protocol Buffers compiler). It mak .proto files easier as it replaces messy protoc commands with a clean config file. It is an all-in-one tool as it provides compiling, linting, breaking change detection, and dependency management. -In infra-core/apiv2, "buf generate" command is executed within the **make generate** or +In infra-core/apiv2, **buf generate** command is executed within the **make generate** or **make buf-gen** target to generate the OpenAPI 3.0 spec directly from .proto files in api/proto/ directory. Protoc-gen-connect-openapi plugin takes as an input one full openapi spec that includes all services @@ -219,7 +220,7 @@ while preserving compatibility across scenarios. Split the monolithic `services.proto` file into multiple folders/files per service: ```bash - api/proto/services/ + infra-core/apiv2/api/proto/services/ ├── onboarding/ │ └── v1/ │ └── service1.proto @@ -237,7 +238,7 @@ Split the monolithic `services.proto` file into multiple folders/files per servi #### Define Scenario Manifests Maintain scenario manifests that list the REST API services supported by each scenario. -Scenario manifest files will be kept in repository.
The following are the examples of the manifests: +Scenario manifest files will be kept in `infra-core/apiv2`. The following are example manifests: ```yaml # scenarios/eim-only.yaml name: eim-only description: Only EIM services: - onboarding - provisioning @@ -259,14 +260,14 @@ Scenario manifest files will be kept in repository. The following are the exampl **Why manifest files:** -- Makefile-driven builds read the manifest to determine which services to compile -- Version controlled in git repository -- No database dependencies +- Makefile-driven builds read the manifest to determine which services to compile. +- Version controlled in git repository. +- No database dependencies. #### Modify Build Process Modify **buf-gen** make target to read the manifests and build the openapi spec as per scenario manifest. -Example of "buf generate" command to generate code supporting onboarding and provisioning services: +Example of **buf generate** command to generate code supporting onboarding and provisioning services: ```bash buf generate api/proto/services/onboarding/v1 api/proto/services/provisioning/v1 @@ -422,17 +423,17 @@ The following investigation tasks will drive validation of the decomposition app Tests will verify that minimal and full deployments work as expected, that clients can discover supported features, and that errors are clear. -- CLI integration: CLI can discover supported services; absence returns descriptive messages. +- CLI integration with new service: CLI can discover supported services; absence returns descriptive messages. - CLI E2E: Login discovery, caching, command blocking, error messaging. - Deployment E2E: Deploy each scenario via mage and verify that expected endpoints exist and work. - Regression: Verify the full EMF scenario behaves identically to pre-decomposition. ## Open Issues -- Post-Traefik gateway removal and impacts. +- Traefik gateway removal and impacts. - What happens when the service does not exist and CLI expects it to exist?
- Detailed scenario definitions on the Inventory level - NB APIs should be aligned -with the Inventoryresource availability in each scenario. +with the Inventory resource availability in each scenario. - Managing apiv2 image version used by infra-core argo application - deployment level. - Scenario deployment through argocd/mage - What will be the image naming convention (per scenario)?
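As a back-of-the-envelope illustration of the manifest-driven build discussed in the build-process section, a **buf-gen** style recipe could derive the `buf generate` inputs from a scenario manifest roughly as below; the temp path and the sed-based parsing are illustrative assumptions (a real Makefile target would likely use a proper YAML parser):

```shell
#!/bin/sh
# Sketch: turn a scenario manifest into "buf generate" arguments.
# Manifest format mirrors the examples in this proposal; parsing
# approach and /tmp path are assumptions for illustration only.
cat > /tmp/eim-only.yaml <<'EOF'
name: eim-only
description: Only EIM
services:
  - onboarding
  - provisioning
EOF

# Extract the service names ("  - <name>" lines) and map each one
# to its per-service proto directory.
paths=""
for svc in $(sed -n 's/^  - //p' /tmp/eim-only.yaml); do
  paths="$paths api/proto/services/${svc}/v1"
done

# A real target would now run: buf generate $paths
echo "buf generate${paths}"
# → buf generate api/proto/services/onboarding/v1 api/proto/services/provisioning/v1
```

This keeps the scenario definition in version control while leaving the build command itself generic across scenarios.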