add openflow sync refer to ovn-kubernetes#6115
changluyi wants to merge 2 commits into release-1.14
Conversation
Summary of Changes

Hello @changluyi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a robust OpenFlow synchronization mechanism, inspired by ovn-kubernetes.
Code Review
This pull request refactors the OpenFlow rule management by introducing a synchronization loop, similar to the approach in ovn-kubernetes. Instead of applying OpenFlow rules directly, changes are now cached and then atomically applied to OVS using ovs-ofctl replace-flows. This improves robustness by avoiding inconsistent states. The changes are well-structured, introducing new files for flow rule management and the synchronization logic. My review focuses on improving the maintainability and robustness of the new implementation.
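The atomic apply described above comes down to writing each bridge's cached flows to a file and handing that file to `ovs-ofctl replace-flows <bridge> <file>`, which swaps the flow table in one operation. A minimal sketch of the rendering step is shown below; the function name and cache shape are illustrative, not taken from this PR.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderReplaceFlows turns a per-bridge flow cache into the text that would
// be fed to `ovs-ofctl replace-flows <bridge> <file>`: one flow per line,
// sorted so the on-disk representation is deterministic.
func renderReplaceFlows(cache map[string][]string) map[string]string {
	out := make(map[string]string, len(cache))
	for bridge, flows := range cache {
		sorted := append([]string(nil), flows...)
		sort.Strings(sorted)
		out[bridge] = strings.Join(sorted, "\n")
	}
	return out
}

func main() {
	cache := map[string][]string{
		"br-int": {
			"cookie=0x1000, table=0, priority=100, ip actions=drop",
			"cookie=0x0fff, table=0, priority=50, arp actions=normal",
		},
	}
	fmt.Println(renderReplaceFlows(cache)["br-int"])
}
```

Because `replace-flows` diffs the supplied file against the current flow table, applying the rendered file repeatedly is idempotent, which is what makes a periodic sync loop safe.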
```go
func extractFlowCookie(flow string) (uint64, bool) {
	idx := strings.Index(flow, "cookie=")
	if idx == -1 {
		return 0, false
	}
	cookieField := flow[idx+len("cookie="):]
	if comma := strings.Index(cookieField, ","); comma != -1 {
		cookieField = cookieField[:comma]
	}
	if slash := strings.Index(cookieField, "/"); slash != -1 {
		cookieField = cookieField[:slash]
	}
	cookieField = strings.TrimSpace(cookieField)
	if cookieField == "" {
		return 0, false
	}
	cookie, err := parseHexUint64(cookieField)
	if err != nil {
		return 0, false
	}
	return cookie, true
}
```
The current implementation of extractFlowCookie relies on strings.Index to find the cookie= field. The order of fields in ovs-ofctl dump-flows output is not guaranteed, so cookie= might not be at the beginning. This could lead to incorrect parsing if another field contains cookie=. A more robust approach is to parse the flow string as a set of key-value pairs by splitting by comma.
```go
func extractFlowCookie(flow string) (uint64, bool) {
	for _, field := range strings.Split(flow, ",") {
		field = strings.TrimSpace(field)
		if !strings.HasPrefix(field, "cookie=") {
			continue
		}
		cookieField := strings.TrimPrefix(field, "cookie=")
		if slash := strings.Index(cookieField, "/"); slash != -1 {
			cookieField = cookieField[:slash]
		}
		cookie, err := parseHexUint64(cookieField)
		if err != nil {
			return 0, false
		}
		return cookie, true
	}
	return 0, false
}
```

```go
func appendFlowCache(dst map[string][]string, src map[string]map[string][]string) {
	for bridgeName, entries := range src {
		if len(entries) == 0 {
			if _, ok := dst[bridgeName]; !ok {
				dst[bridgeName] = nil
			}
			continue
		}
		for _, flows := range entries {
			if len(flows) == 0 {
				if _, ok := dst[bridgeName]; !ok {
					dst[bridgeName] = nil
				}
				continue
			}
			dst[bridgeName] = append(dst[bridgeName], flows...)
		}
	}
}
```
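To make the flattening behavior of `appendFlowCache` concrete, here is a self-contained driver; the bridge and key names are sample data, not taken from the PR. Note that a bridge present in `src` always ends up as a key in `dst`, even when it contributes no flows.

```go
package main

import "fmt"

// appendFlowCache flattens a per-bridge, per-key flow cache (src) into a
// per-bridge flow list (dst), creating a nil entry for every bridge that
// appears in src, even when it has no flows.
func appendFlowCache(dst map[string][]string, src map[string]map[string][]string) {
	for bridgeName, entries := range src {
		if len(entries) == 0 {
			if _, ok := dst[bridgeName]; !ok {
				dst[bridgeName] = nil
			}
			continue
		}
		for _, flows := range entries {
			if len(flows) == 0 {
				if _, ok := dst[bridgeName]; !ok {
					dst[bridgeName] = nil
				}
				continue
			}
			dst[bridgeName] = append(dst[bridgeName], flows...)
		}
	}
}

func main() {
	dst := map[string][]string{}
	src := map[string]map[string][]string{
		"br-int": {"pod-a": {"flow1", "flow2"}, "pod-b": nil},
		"br-ext": {},
	}
	appendFlowCache(dst, src)
	// br-int collects pod-a's flows; br-ext gets a nil entry so the sync
	// loop still knows the bridge exists and can clear stale flows on it.
	fmt.Println(len(dst["br-int"]), dst["br-ext"] == nil)
}
```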
The logic in appendFlowCache can be simplified for better readability and maintainability. The current implementation has some redundant checks. A more concise version can achieve the same result by ensuring the bridge key exists in the destination map and then appending flows.
```go
func appendFlowCache(dst map[string][]string, src map[string]map[string][]string) {
	for bridgeName, entries := range src {
		// Ensure key exists for each bridge in src, initializing to nil if not present.
		if _, ok := dst[bridgeName]; !ok {
			dst[bridgeName] = nil
		}
		for _, flows := range entries {
			dst[bridgeName] = append(dst[bridgeName], flows...)
		}
	}
}
```
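The `extractFlowCookie` rewrite suggested earlier depends on a `parseHexUint64` helper that the diff does not show. A self-contained sketch follows, assuming the cookie is printed the way `ovs-ofctl dump-flows` emits it (hex with a `0x` prefix); the helper's exact behavior in the PR may differ.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseHexUint64 is a stand-in for the helper referenced in the diff:
// it accepts a hex string with or without a leading 0x prefix.
func parseHexUint64(s string) (uint64, error) {
	s = strings.TrimPrefix(strings.ToLower(s), "0x")
	return strconv.ParseUint(s, 16, 64)
}

// extractFlowCookie treats the flow string as comma-separated key=value
// fields and returns the cookie value, stripping any /mask suffix.
func extractFlowCookie(flow string) (uint64, bool) {
	for _, field := range strings.Split(flow, ",") {
		field = strings.TrimSpace(field)
		if !strings.HasPrefix(field, "cookie=") {
			continue
		}
		cookieField := strings.TrimPrefix(field, "cookie=")
		if slash := strings.Index(cookieField, "/"); slash != -1 {
			cookieField = cookieField[:slash]
		}
		cookie, err := parseHexUint64(cookieField)
		if err != nil {
			return 0, false
		}
		return cookie, true
	}
	return 0, false
}

func main() {
	flow := "table=0, cookie=0x1234abcd, priority=100, ip actions=drop"
	cookie, ok := extractFlowCookie(flow)
	// The cookie is found even though it is not the first field.
	fmt.Printf("%#x %v\n", cookie, ok)
}
```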
Pull Request Test Coverage Report for Build 20589963075 (Coveralls)
Signed-off-by: clyi <clyi@alauda.io>
Force-pushed 294915c to 3881989
Pull Request
What type of PR is this?
for clearer layering and naming.
Behavior