`001-streamshub-mcp-strimzi.md` (11 additions, 27 deletions)
See [Log Handling](#log-handling) for details.

**MCP server authentication and authorization**:
The MCP protocol supports OAuth 2.1 for authentication and authorization ([spec](https://modelcontextprotocol.io/specification/2025-11-25), [Quarkus OIDC example](https://quarkus.io/blog/secure-mcp-oidc-client/)).
This is out of scope for the initial implementation.
For now, we rely on Kubernetes RBAC (the Service Account and RoleBindings described above) to control access.
MCP-level auth/authz can be proposed separately later, as it will be reused by other MCP servers in StreamsHub-MCP.
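
The RBAC-only model can be illustrated with a namespace-scoped Role and RoleBinding granting the MCP server's Service Account read access to Strimzi resources and pod logs. This is a sketch, not the proposed manifests: all names, namespaces, and the exact resource list here are hypothetical and would be defined by the implementation.

```yaml
# Hypothetical Role in a namespace the MCP server is allowed to see.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: streamshub-mcp-reader   # illustrative name
  namespace: kafka-team-a       # illustrative namespace
rules:
  - apiGroups: ["kafka.strimzi.io"]
    resources: ["kafkas", "kafkatopics", "kafkausers"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
---
# Binds the Role to the MCP server's Service Account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: streamshub-mcp-reader
  namespace: kafka-team-a
subjects:
  - kind: ServiceAccount
    name: streamshub-mcp        # illustrative Service Account name
    namespace: streamshub
roleRef:
  kind: Role
  name: streamshub-mcp-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is namespace-scoped, a user talking to this MCP server instance can only reach Strimzi resources in the namespaces it has been bound into.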
#### Prompt Injection Protection
#### Metrics Strategy

The initial implementation will read metrics directly from Kafka broker pods, either from the [Strimzi Metrics Reporter](https://github.com/strimzi/metrics-reporter) or from the Kafka JMX exporter endpoint, depending on what the user has configured.
This only needs pod access via the MCP server's Service Account.
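
The scraping path amounts to reading the Prometheus text exposition format from a pod endpoint. A minimal sketch of the parsing side, with a name-mapping hook for user-customized metric names, might look like the following; the sample metric names and the mapping are hypothetical, and a real server would fetch the text over HTTP from the pod:

```python
import re

def parse_prometheus_text(text, name_map=None):
    """Parse Prometheus text exposition format into {name: [(labels, value)]}.

    name_map optionally renames metrics, since Strimzi/Kafka metric
    names can be customized by users.
    """
    name_map = name_map or {}
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        m = re.match(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(\{[^}]*\})?\s+(\S+)', line)
        if not m:
            continue
        name, labels, value = m.group(1), m.group(2) or "", float(m.group(3))
        name = name_map.get(name, name)  # apply configurable name mapping
        metrics.setdefault(name, []).append((labels, value))
    return metrics

# Hypothetical scrape output from a broker pod's metrics endpoint.
sample = """
# HELP kafka_server_replicamanager_underreplicatedpartitions help text
kafka_server_replicamanager_underreplicatedpartitions 0.0
kafka_server_brokertopicmetrics_messagesin_total{topic="orders"} 1234.0
"""

mapping = {
    "kafka_server_replicamanager_underreplicatedpartitions":
        "under_replicated_partitions",
}
parsed = parse_prometheus_text(sample, mapping)
print(parsed["under_replicated_partitions"][0][1])  # → 0.0
```

In a real deployment the text would come from the pod's metrics port using the Service Account's credentials (or a port-forward), rather than from an inline string.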

Prometheus API integration (querying an existing Prometheus instance for historical and aggregated metrics) is out of scope for this proposal and can be added in a separate proposal later.
**Caveats**:
- Custom metric names: Strimzi and Kafka metric names can be customized by users, so the MCP server will need configurable metric name mappings.
- RBAC: Direct pod scraping needs `get` on `pods/metrics` or port-forwarding access.
- Direct pod scraping only gives point-in-time metrics, no historical data. For historical data, Prometheus integration would be needed.

Metrics are mostly useful during ad-hoc incident investigation, for example "Is the broker under heavy load?", "What's the replication lag?", "Are there under-replicated partitions?".
For ongoing monitoring, existing alerting infrastructure (Prometheus alerts, Grafana) is still the primary tool.