
Add MCP Tools for OVS layer#13

Merged
tssurya merged 1 commit into ovn-kubernetes:main from pperiyasamy:ovs_mcp
Dec 12, 2025

Conversation


@pperiyasamy pperiyasamy commented Nov 18, 2025

This PR introduces seven new MCP tools for inspecting and debugging OVS in ovn-kubernetes clusters:

  • ovs-list-br: List all OVS bridges on a pod
  • ovs-list-ports: List all ports attached to an OVS bridge
  • ovs-list-ifaces: List all interfaces attached to an OVS bridge
  • ovs-vsctl-show: Display comprehensive OVS configuration overview
  • ovs-ofctl-dump-flows: Dump and analyze OpenFlow rules from OVS bridges
    with optional filtering by pattern
  • ovs-appctl-dump-conntrack: Inspect connection tracking (conntrack)
    entries from the OVS datapath to debug stateful firewall rules and NAT
  • ovs-appctl-ofproto-trace: Simulate packet processing through the
    OpenFlow pipeline to troubleshoot flow rules and forwarding decisions

These tools enable AI-assisted debugging of OVS networking issues without requiring direct cluster access.

Signed-off-by: Periyasamy Palanisamy pepalani@redhat.com
Co-Authored-By: Claude noreply@anthropic.com

Example output:

● I'll help you list the bridges from every node by querying each ovnkube-node pod. Let me first get all the ovnkube-node pods in the ovn-kubernetes namespace.

● ovn-kubernetes - resource-list (MCP)(version: "v1", kind: "Pod", namespace: "ovn-kubernetes", labelSelector: "app=ovnkube-node")
  ⎿  {                             
       "resources": [
         {
     … +19 lines (ctrl+o to expand)

● Perfect! I found 3 ovnkube-node pods. Now let me list the bridges from each node:

● ovn-kubernetes - ovs-list-br (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-g96ld")
  ⎿  {                            
       "bridges": [
         "br-int",
     … +3 lines (ctrl+o to expand)

● ovn-kubernetes - ovs-list-br (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-kh724")
  ⎿  {                            
       "bridges": [
         "br-int",
     … +3 lines (ctrl+o to expand)

● ovn-kubernetes - ovs-list-br (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-qgn29")
  ⎿  {                            
       "bridges": [
         "br-int",
     … +3 lines (ctrl+o to expand)

● Here are the OVS bridges from each node:

  Node: ovnkube-node-g96ld
  - br-int
  - breth0

  Node: ovnkube-node-kh724
  - br-int
  - breth0

  Node: ovnkube-node-qgn29
  - br-int
  - breth0

  All three nodes have the same bridge configuration:
  - br-int: The OVN integration bridge (used for pod networking)
  - breth0: The external bridge (used for external connectivity)

  This is a typical OVN-Kubernetes setup. Would you like me to get more details about these bridges, such as their ports, interfaces, or OpenFlow rules?

> can you get ports and interfaces from br-int bridge from ovnkube-node-g96ld pod ? 

● I'll get both the ports and interfaces from the br-int bridge on the ovnkube-node-g96ld pod.

● ovn-kubernetes - ovs-list-ports (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-g96ld", bridge: "br-int")
  ⎿  {                            
       "ports": [
         "ovn-23cf3c-0",
     … +5 lines (ctrl+o to expand)

● ovn-kubernetes - ovs-list-ifaces (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-g96ld", bridge: "br-int")
  ⎿  {                            
       "interfaces": [
         "ovn-23cf3c-0",
     … +5 lines (ctrl+o to expand)

● Here are the ports and interfaces from the br-int bridge on ovnkube-node-g96ld:

  Ports:
  - ovn-23cf3c-0
  - ovn-c32ec4-0
  - ovn-k8s-mp0
  - patch-br-int-to-breth0_ovn-control-plane

  Interfaces:
  - ovn-23cf3c-0
  - ovn-c32ec4-0
  - ovn-k8s-mp0
  - patch-br-int-to-breth0_ovn-control-plane

  The ports and interfaces are the same, which is typical for OVS. Here's what these represent:
  - ovn-23cf3c-0 and ovn-c32ec4-0: Pod interfaces (connected to containers)
  - ovn-k8s-mp0: OVN-Kubernetes management port (used for node-to-pod communication)
  - patch-br-int-to-breth0_ovn-control-plane: Patch port connecting br-int to breth0 for external connectivity

  Would you like more details about any of these interfaces or see the OpenFlow flows on this bridge?

> what about from breth0 bridge ? 

● I'll get the ports and interfaces from the breth0 bridge on the ovnkube-node-g96ld pod.

● ovn-kubernetes - ovs-list-ports (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-g96ld", bridge: "breth0")
  ⎿  {                            
       "ports": [
         "eth0",
     … +3 lines (ctrl+o to expand)

● ovn-kubernetes - ovs-list-ifaces (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-g96ld", bridge: "breth0")
  ⎿  {                            
       "interfaces": [
         "eth0",
     … +3 lines (ctrl+o to expand)

● Here are the ports and interfaces from the breth0 bridge on ovnkube-node-g96ld:

  Ports:
  - eth0
  - patch-breth0_ovn-control-plane-to-br-int

  Interfaces:
  - eth0
  - patch-breth0_ovn-control-plane-to-br-int

  This shows the typical external bridge configuration:
  - eth0: The physical network interface of the node
  - patch-breth0_ovn-control-plane-to-br-int: Patch port connecting back to br-int (the other end of the patch we saw on br-int)

  The breth0 bridge acts as a gateway between the OVN overlay network (br-int) and the physical network (eth0), enabling external connectivity for pods.

  Would you like to see the OpenFlow flows on this bridge or get more detailed information about the OVS configuration?

> can you get ovs-vsctl show ? 

● I'll get the comprehensive OVS configuration using ovs-vsctl show from the ovnkube-node-g96ld pod.

● ovn-kubernetes - ovs-vsctl-show (MCP)(namespace: "ovn-kubernetes", name: "ovnkube-node-g96ld")
  ⎿ {                                                                                                                                                                                                                             
      "output": "12fd7edb-63b5-4b7f-89df-93385120112f\nBridge br-int\nfail_mode: secure\ndatapath_type: system\nPort ovn-c32ec4-0\nInterface ovn-c32ec4-0\ntype: geneve\noptions: {csum=\"true\", key=flow, local_ip=\"172.18.0.4\
    ", remote_ip=\"172.18.0.3\"}\nPort br-int\nInterface br-int\ntype: internal\nPort ovn-k8s-mp0\nInterface ovn-k8s-mp0\ntype: internal\nPort ovn-23cf3c-0\nInterface ovn-23cf3c-0\ntype: geneve\noptions: {csum=\"true\", key=fl
    … +4 lines (ctrl+o to expand)



● Here's the complete OVS configuration from ovnkube-node-g96ld:

  12fd7edb-63b5-4b7f-89df-93385120112f
  Bridge br-int
      fail_mode: secure
      datapath_type: system
      Port ovn-c32ec4-0
          Interface ovn-c32ec4-0
              type: geneve
              options: {csum="true", key=flow, local_ip="172.18.0.4", remote_ip="172.18.0.3"}
      Port br-int
          Interface br-int
              type: internal
      Port ovn-k8s-mp0
          Interface ovn-k8s-mp0
              type: internal
      Port ovn-23cf3c-0
          Interface ovn-23cf3c-0
              type: geneve
              options: {csum="true", key=flow, local_ip="172.18.0.4", remote_ip="172.18.0.2"}
      Port patch-br-int-to-breth0_ovn-control-plane
          Interface patch-br-int-to-breth0_ovn-control-plane
              type: patch
              options: {peer=patch-breth0_ovn-control-plane-to-br-int}
  Bridge breth0
      fail_mode: standalone
      Port eth0
          Interface eth0
      Port patch-breth0_ovn-control-plane-to-br-int
          Interface patch-breth0_ovn-control-plane-to-br-int
              type: patch
              options: {peer=patch-br-int-to-breth0_ovn-control-plane}
      Port breth0
          Interface breth0
              type: internal
  ovs_version: "3.4.1-1.fc42"

  Key observations:
  - OVS Version: 3.4.1-1.fc42
  - br-int has fail_mode: secure (drops packets when controller is disconnected)
  - breth0 has fail_mode: standalone (acts as normal switch when controller is disconnected)
  - Geneve tunnels: Two tunnels from local IP 172.18.0.4 to remote IPs 172.18.0.2 and 172.18.0.3 (connections to other nodes)
  - Patch ports: Connecting br-int and breth0 together

  Would you like to see OpenFlow flows or trace a packet through the pipeline?

Comment on lines 34 to 35
k8sMcpServer, err := kubernetesmcp.NewMCPServer(serverCfg.Kubernetes)
clientSet, err := client.NewOVNKMCPServerClientSet(serverCfg.Kubernetes)
Contributor

@pperiyasamy When I worked on the k8s layer, the idea was to provide only the necessary tools to the other layers and not expose the entire k8s client. The tools that are added in the k8s mcp package should be enough for the other layers like OVS, OVN, etc. to interact with the k8s cluster.

Contributor

I think the tools in pkg/kubernetes/mcp/pods.go, specifically ExecPod, should be enough for this layer

Contributor Author

ok @arkadeepsen, i didn't know this. good idea. changed the ovs implementation accordingly.

// OVNKMCPServerClientSet is a client set for the OVN Kubernetes MCP server.
type OVNKMCPServerClientSet struct {
    clientSet kubernetes.Interface
    ClientSet kubernetes.Interface
Contributor

As I mentioned earlier, the k8s client is not exported because the other layers can simply use the ExecPod tool.

Contributor Author

right, done.

}

// AddTools registers OVS tools with the MCP server
func (s *MCPServer) AddTools(server *mcp.Server) {
Contributor

There are other tools also mentioned in the OKEP for the OVS layer. I think those are also needed to be added as tools.

Contributor Author

it has the relevant ovs tools now except retis collect, because the ovnkube-node pod doesn't have retis installed. can we rely on any other pod to implement this, or are we ok to ignore it for now?

Contributor

We can skip retis for now, as the collect might need an extended period of running, which we are not currently supporting in pod exec or node debug. Also, this tool might be a better fit for external tools rather than the OVS layer. WDYT?

Contributor Author

yes, right, let's do it separately for retis.

if len(ovsInstance.Containers) > 0 {
    // Get list of bridges
    bridgeNames, err := s.runOVSCommand(ctx, pod.Name, in.Namespace,
        ovsInstance.Containers[0], []string{"ovs-vsctl", "list-br"})
Contributor

Is it always expected that whichever container gets chosen here, will be able to execute the command? If so, a comment here explaining it would be good.

Contributor Author

container dependency is not needed, removed it.

}

// ListOVSInstances lists running OVS instances.
func (s *MCPServer) ListOVSInstances(ctx context.Context, req *mcp.CallToolRequest,
Contributor

Do you think we can simply provide tools for ovs-vsctl list-br, ovs-vsctl list-ports, ovs-vsctl list-ifaces, and ovs-ofctl dump-flows? The LLM can call these tools individually and would be able to get the results.

The OKEP actually talks about providing tools for running the various list, show and dump commands for the OVS layer. Here we are doing more than that.

Contributor Author

ok, you are correct. The OVS layer needn't have complex logic; let the LLM correlate it accordingly. Removed the ListOVSInstances tool from the implementation.

@arkadeepsen
Contributor

Hey @pperiyasamy, I have added a few comments. PTAL.

@pperiyasamy pperiyasamy force-pushed the ovs_mcp branch 2 times, most recently from 8f64961 to 496eedc Compare November 27, 2025 17:26
Contributor

@arkadeepsen arkadeepsen left a comment


The list of tools looks good for now. We may have to revisit them for adding support for other read-only commands if they are needed for debugging.

Comment on lines 20 to 24
func NewMCPServer(k8sMcpServer *kubernetesmcp.MCPServer) (*MCPServer, error) {
    return &MCPServer{
        k8sMcpServer: k8sMcpServer,
    }, nil
}
Contributor

Here error returned is always nil. Do we need the error as return type?

Contributor Author

fixed


Runs 'ovs-vsctl list-br' command and returns the names of all configured bridges.

Parameters:
- namespace (optional): Kubernetes namespace of the OVS pod
Contributor

This might not be correct as namespace is needed for running the exec pod command.

Contributor

This applies to all the tools.

Contributor Author

right, changed it.

Comment on lines +61 to +92
// validateBridgeName validates that a bridge name is safe and non-empty.
// Bridge names should only contain alphanumeric characters, hyphens, and underscores.
func validateBridgeName(bridge string) error {
    if bridge == "" {
        return fmt.Errorf("bridge name cannot be empty")
    }

    // OVS bridge names typically follow naming conventions: alphanumeric, hyphens, underscores
    validBridgeName := regexp.MustCompile(`^[a-zA-Z0-9_-]+$`)
    if !validBridgeName.MatchString(bridge) {
        return fmt.Errorf("invalid bridge name %q: must contain only alphanumeric characters, hyphens, and underscores", bridge)
    }

    return nil
}

// validateFlowSpec validates that a flow specification is safe and non-empty.
func validateFlowSpec(flow string) error {
    if flow == "" {
        return fmt.Errorf("flow specification cannot be empty")
    }

    // Check for potentially dangerous characters that shouldn't appear in flow specs
    // Flow specs should contain: alphanumeric, commas, equals, colons, periods, slashes, parentheses, brackets
    // We explicitly block: semicolons, pipes, backticks, dollar signs, and other shell metacharacters
    dangerousChars := regexp.MustCompile(`[;&|$` + "`" + `<>\\]`)
    if dangerousChars.MatchString(flow) {
        return fmt.Errorf("invalid flow specification: contains potentially dangerous characters")
    }

    return nil
}
Contributor

Nice validations to prevent command injection.
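To illustrate, the two validators quoted above can be exercised in isolation; this is a self-contained sketch that reproduces the logic from the diff (package wiring omitted, test inputs invented):

```go
package main

import (
	"fmt"
	"regexp"
)

// validateBridgeName mirrors the allowlist check from the diff: bridge names
// must be non-empty and contain only alphanumerics, hyphens, and underscores.
func validateBridgeName(bridge string) error {
	if bridge == "" {
		return fmt.Errorf("bridge name cannot be empty")
	}
	if !regexp.MustCompile(`^[a-zA-Z0-9_-]+$`).MatchString(bridge) {
		return fmt.Errorf("invalid bridge name %q", bridge)
	}
	return nil
}

// validateFlowSpec mirrors the shell-metacharacter blocklist from the diff.
func validateFlowSpec(flow string) error {
	if flow == "" {
		return fmt.Errorf("flow specification cannot be empty")
	}
	if regexp.MustCompile("[;&|$`<>\\\\]").MatchString(flow) {
		return fmt.Errorf("invalid flow specification: contains potentially dangerous characters")
	}
	return nil
}

func main() {
	fmt.Println(validateBridgeName("br-int") == nil)                       // true
	fmt.Println(validateBridgeName("br0; rm -rf /") == nil)                // false
	fmt.Println(validateFlowSpec("in_port=1,ip,nw_src=10.244.0.5") == nil) // true
	fmt.Println(validateFlowSpec("ip; cat /etc/shadow") == nil)            // false
}
```

The key point is that both checks run before anything is appended to the exec'd command line, so injection attempts are rejected server-side regardless of what the LLM supplies.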

}

// Limit to MaxLines if specified
flows = limitLines(flows, in.MaxLines)
Contributor

Should we validate that MaxLines is not negative? Since providing zero allows getting all the lines from the command execution, does it make sense to disallow negative values entirely?

Contributor

This applies to the other tools with MaxLines option.

Contributor Author

sure, now restricting output to 100 lines when in.MaxLines <= 0; i think that makes sense (same as the sos commands).
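A minimal sketch of the capping behaviour the author describes (limitLines exists in the PR, but this signature and the 100-line default are assumptions for illustration):

```go
package main

import "fmt"

const defaultMaxLines = 100 // assumed default, per the discussion above

// limitLines truncates output to maxLines; non-positive values fall back
// to a sane default instead of returning unbounded command output.
func limitLines(lines []string, maxLines int) []string {
	if maxLines <= 0 {
		maxLines = defaultMaxLines
	}
	if len(lines) > maxLines {
		return lines[:maxLines]
	}
	return lines
}

func main() {
	out := make([]string, 250)
	fmt.Println(len(limitLines(out, 0)))   // 100 (default cap)
	fmt.Println(len(limitLines(out, 10)))  // 10
	fmt.Println(len(limitLines(out, 500))) // 250 (never pads)
}
```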

Comment on lines +313 to +315
if len(in.AdditionalParams) > 0 {
    cmd = append(cmd, in.AdditionalParams...)
}
Contributor

Here the AdditionalParams are not validated which could result in command injection.

Contributor Author

sure, done.
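One way to close this gap, sketched here with a hypothetical validateParams helper in the spirit of the flow-spec allowlist (the character class is an assumption for illustration, not the PR's actual check):

```go
package main

import (
	"fmt"
	"regexp"
)

// validateParams rejects any extra CLI argument containing characters
// outside a conservative allowlist, before it is appended to the exec
// command. This blocks shell metacharacters like ; | ` and $.
func validateParams(params []string) error {
	safe := regexp.MustCompile(`^[a-zA-Z0-9_,=:./()\[\]-]+$`)
	for _, p := range params {
		if !safe.MatchString(p) {
			return fmt.Errorf("invalid additional parameter %q", p)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateParams([]string{"--names", "table=0"})) // <nil>
	fmt.Println(validateParams([]string{"$(reboot)"}))          // error: '$' is not allowed
}
```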

@arkadeepsen
Contributor

@arghosh93 will also review this PR


func (s *MCPServer) runCommand(ctx context.Context, req *mcp.CallToolRequest, namespacedName k8stypes.NamespacedNameParams,
    commands []string) ([]string, error) {
    _, result, err := s.k8sMcpServer.ExecPod(ctx, req, k8stypes.ExecPodParams{NamespacedNameParams: namespacedName, Command: commands})
Contributor

The container name is not taken as a parameter for the different ovs commands. Will it always be able to get the result from the 1st container in the pod?

Contributor Author

yes, the container name is not passed from here; the mcp pod client ensures it always runs on the 1st container: https://github.com/ovn-kubernetes/ovn-kubernetes-mcp/blob/main/pkg/kubernetes/client/pods.go#L51.

Contributor

That's correct, the k8s layer always chooses the 1st container, but the question is: will it always work? What I mean is, will any of the containers be able to provide the response, or should the container name be an optional parameter for the LLM to provide?

Contributor Author

ah! ok, we can run ovs commands on any container of the ovnkube-node pod, so i don't see a need to take the container name as an optional parameter for now.

This commit introduces new MCP tools for inspecting and debugging
Open vSwitch (OVS) in ovn-kubernetes clusters:

- ovs-list-br: List all OVS bridges on a pod
- ovs-list-ports: List all ports attached to an OVS bridge
- ovs-list-ifaces: List all interfaces attached to an OVS bridge
- ovs-vsctl-show: Display comprehensive OVS configuration overview
- ovs-ofctl-dump-flows: Dump and analyze OpenFlow rules from OVS bridges
  with optional filtering by pattern
- ovs-appctl-dump-conntrack: Inspect connection tracking (conntrack)
  entries from the OVS datapath to debug stateful firewall rules and NAT
- ovs-appctl-ofproto-trace: Simulate packet processing through the
  OpenFlow pipeline to troubleshoot flow rules and forwarding decisions

Assisted-By: Claude <noreply@anthropic.com>
Signed-off-by: Periyasamy Palanisamy <pepalani@redhat.com>

mcp.AddTool(server,
    &mcp.Tool{
        Name: "ovs-list-ports",
Contributor

I think we are adding too many tools. We may end up with too many if we create separate tools for every kind of list operation.
In my opinion we can have 3 sets of tools, one each for ovs-vsctl, ovs-ofctl, and ovs-appctl. Then we can pass separate commands to list or dump the various OVS entities. That way, adding more capabilities would be easier in the future.

Contributor Author

these tools are implemented as per the design: https://github.com/ovn-kubernetes/ovn-kubernetes/blob/master/docs/okeps/okep-5494-ovn-kubernetes-mcp-server.md#example-workflow-for-an-end-user. I think it doesn't harm anything, but if there is a need to minimize these commands, we can revisit later.

Contributor

okay
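For reference, the consolidation proposed above could look like a single ovs-vsctl tool with an allowlisted subcommand parameter; everything below is an illustrative sketch, not the merged implementation:

```go
package main

import "fmt"

// allowedVsctlSubcommands restricts a hypothetical generic ovs-vsctl tool
// to read-only subcommands, replacing one MCP tool per list operation.
var allowedVsctlSubcommands = map[string]bool{
	"list-br":     true,
	"list-ports":  true,
	"list-ifaces": true,
	"show":        true,
}

// buildVsctlCommand assembles the exec command, rejecting any subcommand
// not on the read-only allowlist (e.g. del-br, add-port).
func buildVsctlCommand(subcommand string, args ...string) ([]string, error) {
	if !allowedVsctlSubcommands[subcommand] {
		return nil, fmt.Errorf("subcommand %q is not allowed", subcommand)
	}
	return append([]string{"ovs-vsctl", subcommand}, args...), nil
}

func main() {
	cmd, _ := buildVsctlCommand("list-ports", "br-int")
	fmt.Println(cmd) // [ovs-vsctl list-ports br-int]
	_, err := buildVsctlCommand("del-br", "br-int")
	fmt.Println(err != nil) // true: mutating subcommands are rejected
}
```

The trade-off discussed in the thread is tool-count versus schema clarity: per-operation tools give the LLM richer per-tool descriptions, while a generic tool keeps the surface small as capabilities grow.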

Runs 'ovs-vsctl list-br' command and returns the names of all configured bridges.

Parameters:
- namespace: Kubernetes namespace of the OVS pod
Contributor

I think instead of taking the namespace and name of the pod, we can take the name of the node as a parameter. Then we can exec into the OVN pod running on that node and extract the OVS data.
Relying on name and namespace might require another tool in the future to get the name of the OVS pod for a certain node.

Contributor Author

This knowledge can simply be built into the LLM itself, which can then run the ovs command on the desired pod, so I don't see a necessity for taking the node name as a parameter.

Contributor

okay, makes sense.

if err != nil {
    return nil, err
}
if result.Stderr != "" {
Contributor

Any kind of warning may also be considered an error returned by the command. I think filtering warnings is a good idea.

Contributor Author

do you mean warning messages from result.Stderr? can you explain a bit more?

Contributor

I hit an issue while working on a PR for the kernel layer where a CLI tool was returning a harmless warning along with stdout. But since the warning was received on stderr, the whole function execution was failing and the LLM was not returning any data.

Contributor Author
@pperiyasamy pperiyasamy Dec 11, 2025

ok @arghosh93, i saw that handling in the kernel layer, but i don't see it happening with the ovs commands, which run on the ovnkube-node pod.

Contributor

okay

- name: Name of the pod running OVS
- bridge: Name of the OVS bridge (e.g., "br-int")
- flow: Flow specification describing the packet to trace (e.g., "in_port=1,ip,nw_src=10.244.0.5,nw_dst=10.96.0.1")
- filter (optional): Regex pattern to filter trace output lines
Contributor

can we expect the user to provide a fully working regex? or should we instead take in_port, src/dst IP, and port as parameters and build the filter regex ourselves going forward?

Contributor Author

currently it takes a regex string as input and throws an error if it is invalid; i hope the LLM itself can help build the regex string.
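The fail-fast behaviour described here follows naturally from regexp.Compile, which reports an invalid pattern as an error the tool can surface back to the LLM; filterLines below is a hypothetical name for illustration, not the PR's code:

```go
package main

import (
	"fmt"
	"regexp"
)

// filterLines keeps only lines matching the user-supplied pattern; an
// invalid regex is returned as an error instead of being silently ignored,
// so the LLM gets immediate feedback and can retry with a corrected pattern.
func filterLines(lines []string, pattern string) ([]string, error) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, fmt.Errorf("invalid filter regex %q: %w", pattern, err)
	}
	var out []string
	for _, l := range lines {
		if re.MatchString(l) {
			out = append(out, l)
		}
	}
	return out, nil
}

func main() {
	trace := []string{`bridge("br-int")`, "output:2", "drop"}
	kept, err := filterLines(trace, "output|drop")
	fmt.Println(kept, err)
	_, err = filterLines(trace, "(unclosed")
	fmt.Println(err != nil) // true: malformed pattern is rejected up front
}
```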

)

// BridgeResult contains the list of OVS bridges found on a node.
type BridgeResult struct {
Contributor

Most of the results are sets of string data. Can't we use a single type for all similar kinds of results?

Contributor Author

though each result is a list of strings, having them in separate typed fields provides API clarity, semantic JSON field names, etc.

Contributor

@arghosh93 arghosh93 left a comment


LGTM

Contributor

@arkadeepsen arkadeepsen left a comment


LGTM

Contributor

@tssurya tssurya left a comment


Has two LGTMs; merging even though I didn't review it myself.

@tssurya tssurya merged commit 4a7f9b9 into ovn-kubernetes:main Dec 12, 2025
2 checks passed