Commit 5334589

feat: update helm charts with mcp support and fix Google ADK issue (#1568)

* migrated to a more actively maintained mcp golang lib and added AI explain support for mcp mode
* added a makefile option to create a local docker image for testing
* fixed linter errors and made anonymize an arg
* added mcp support for the helm chart and fixed the Google ADK support issue

Signed-off-by: Umesh Kaul <umeshkaul@gmail.com>
Co-authored-by: Alex Jones <1235925+AlexsJones@users.noreply.github.com>
Parent: 7e33276 · Commit: 5334589

7 files changed: 79 additions, 2 deletions

README.md (20 additions, 0 deletions)

@@ -399,6 +399,26 @@ _Serve mode_
 k8sgpt serve
 ```
 
+_Serve mode with MCP (Model Context Protocol)_
+
+```
+# Enable MCP server on default port 8089
+k8sgpt serve --mcp --mcp-http
+
+# Enable MCP server on custom port
+k8sgpt serve --mcp --mcp-http --mcp-port 8089
+
+# Full serve mode with MCP
+k8sgpt serve --mcp --mcp-http --port 8080 --metrics-port 8081 --mcp-port 8089
+```
+
+The MCP server enables integration with tools like Claude Desktop and other MCP-compatible clients. It runs on port 8089 by default and provides:
+- Kubernetes cluster analysis via MCP protocol
+- Resource information and health status
+- AI-powered issue explanations and recommendations
+
+For Helm chart deployment with MCP support, see the `charts/k8sgpt/values-mcp-example.yaml` file.
+
 _Analysis with serve mode_
 
 ```

charts/k8sgpt/Chart.yaml (1 addition, 1 deletion)

@@ -1,5 +1,5 @@
 apiVersion: v2
-appVersion: v0.3.0 #x-release-please-version
+appVersion: v0.4.23 #x-release-please-version
 description: A Helm chart for K8SGPT
 name: k8sgpt
 type: application

charts/k8sgpt/templates/deployment.yaml (7 additions, 1 deletion)

@@ -32,7 +32,13 @@ spec:
 image: {{ .Values.deployment.image.repository }}:{{ .Values.deployment.image.tag | default .Chart.AppVersion }}
 ports:
 - containerPort: 8080
-args: ["serve"]
+{{- if .Values.deployment.mcp.enabled }}
+- containerPort: {{ .Values.deployment.mcp.port | int }}
+{{- end }}
+args: ["serve"
+{{- if .Values.deployment.mcp.enabled }}, "--mcp", "-v","--mcp-http", "--mcp-port", {{ .Values.deployment.mcp.port | quote }}
+{{- end }}
+]
 {{- if .Values.deployment.resources }}
 resources:
 {{- toYaml .Values.deployment.resources | nindent 10 }}
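For reference, with `deployment.mcp.enabled: true` and the default `port: "8089"`, the template above should render roughly as follows (a sketch; exact indentation depends on the surrounding chart and is not shown in this diff):

```yaml
# Hypothetical rendered container fields with deployment.mcp.enabled=true.
# Note the same mcp.port value appears twice: cast with `int` for
# containerPort (a number) and with `quote` for the CLI arg (a string).
ports:
  - containerPort: 8080
  - containerPort: 8089
args: ["serve", "--mcp", "-v", "--mcp-http", "--mcp-port", "8089"]
```

This is why `deployment.mcp.port` is declared as a quoted string in the values files below: the template converts it explicitly wherever a number is required.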

charts/k8sgpt/templates/service.yaml (5 additions, 0 deletions)

@@ -19,4 +19,9 @@ spec:
 - name: metrics
   port: 8081
   targetPort: 8081
+{{- if .Values.deployment.mcp.enabled }}
+- name: mcp
+  port: {{ .Values.deployment.mcp.port | int }}
+  targetPort: {{ .Values.deployment.mcp.port | int }}
+{{- end }}
 type: {{ .Values.service.type }}
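Rendered with `deployment.mcp.enabled: true` and the default port, the service should expose an `mcp` port alongside the existing ones (a sketch of the expected output):

```yaml
# Sketch: service ports rendered with deployment.mcp.enabled=true, port "8089"
ports:
  - name: metrics
    port: 8081
    targetPort: 8081
  - name: mcp
    port: 8089
    targetPort: 8089
```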
charts/k8sgpt/values-mcp-example.yaml (new file, 39 additions)

@@ -0,0 +1,39 @@
+# Example values file to enable MCP (Model Context Protocol) service
+# Copy this file and modify as needed, then use: helm install -f values-mcp-example.yaml
+
+deployment:
+  # Enable MCP server
+  mcp:
+    enabled: true
+    port: "8089" # Port for MCP server (default: 8089)
+    http: true # Enable HTTP mode for MCP server
+
+  # Other deployment settings remain the same
+  image:
+    repository: ghcr.io/k8sgpt-ai/k8sgpt
+    tag: "" # defaults to Chart.appVersion if unspecified
+  imagePullPolicy: Always
+  env:
+    model: "gpt-3.5-turbo"
+    backend: "openai"
+  resources:
+    limits:
+      cpu: "1"
+      memory: "512Mi"
+    requests:
+      cpu: "0.2"
+      memory: "156Mi"
+
+# Service configuration
+service:
+  type: ClusterIP
+  annotations: {}
+
+# Secret configuration for AI backend
+secret:
+  secretKey: "" # base64 encoded OpenAI token
+
+# ServiceMonitor for Prometheus metrics
+serviceMonitor:
+  enabled: false
+  additionalLabels: {}
charts/k8sgpt/values.yaml (5 additions, 0 deletions)

@@ -7,6 +7,11 @@ deployment:
   env:
     model: "gpt-3.5-turbo"
     backend: "openai" # one of: [ openai | llama ]
+  # MCP (Model Context Protocol) server configuration
+  mcp:
+    enabled: false # Enable MCP server
+    port: "8089" # Port for MCP server
+    http: true # Enable HTTP mode for MCP server
   resources:
     limits:
       cpu: "1"
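Since MCP is now off by default, users who do not want the full example file only need a small override on top of these defaults (a minimal sketch; the filename is arbitrary):

```yaml
# values-mcp-minimal.yaml (hypothetical name): enable only the MCP server,
# leaving image, resources, and service settings at chart defaults
deployment:
  mcp:
    enabled: true
    port: "8089"
    http: true
```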

pkg/server/mcp.go (2 additions, 0 deletions)

@@ -141,6 +141,8 @@ func (s *K8sGptMCPServer) registerToolsAndResources() error {
 		),
 		mcp.WithArray("filters",
 			mcp.Description("Provide filters to narrow down the analysis (e.g. ['Pods', 'Deployments'])"),
+			// Without this line, the MCP server fails with Google Agent Development Kit (ADK);
+			// interestingly, it works fine with MCP Inspector.
+			mcp.WithStringItems(),
 		),
 	)
 	s.server.AddTool(analyzeTool, s.handleAnalyze)
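The fix makes sense because MCP tools advertise their parameters as JSON Schema, and stricter clients (evidently Google ADK) reject array parameters that omit an `items` schema, while more lenient ones (MCP Inspector) accept them. Assuming `WithStringItems()` sets `items: {type: string}` on the array, the `filters` argument should end up advertised roughly as below (shown as YAML for readability; the wire format is JSON):

```yaml
# Sketch of the JSON Schema fragment for the "filters" tool argument
# after the WithStringItems() fix
filters:
  type: array
  description: "Provide filters to narrow down the analysis (e.g. ['Pods', 'Deployments'])"
  items:
    type: string
```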
