override improvements #224

Conversation
```go
	return fmt.Errorf("sandbox %s has conflicting middleware argument %q", sbName, argName)
}

func getOverrideMWs(mws []*models.SandboxesMiddleware, oName string) []*models.SandboxesMiddleware {
```
Why does this return `[]*models.SandboxesMiddleware` instead of `*models.SandboxesMiddleware`?
I wanted to make the code resilient to change, and consistent between observed and desired. I also don't believe the criteria give any guarantee that there is only one.
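To illustrate the rationale, here is a minimal sketch of a slice-returning filter in the style of `getOverrideMWs`. The `SandboxesMiddleware` struct and its `OverrideName` field are stand-ins (the real `models.SandboxesMiddleware` fields differ); the point is that returning a slice stays correct even if more than one entry ever matches the same override name.

```go
package main

import "fmt"

// Hypothetical stand-in for models.SandboxesMiddleware; the real
// struct's fields differ. Assumed here: a field naming the override
// that the middleware entry belongs to.
type SandboxesMiddleware struct {
	OverrideName string
}

// getOverrideMWs-style filter: collects every middleware entry that
// matches oName, rather than assuming at most one exists.
func getOverrideMWs(mws []*SandboxesMiddleware, oName string) []*SandboxesMiddleware {
	var res []*SandboxesMiddleware
	for _, mw := range mws {
		if mw != nil && mw.OverrideName == oName {
			res = append(res, mw)
		}
	}
	return res
}

func main() {
	mws := []*SandboxesMiddleware{
		{OverrideName: "a"},
		{OverrideName: "b"},
		{OverrideName: "a"},
	}
	fmt.Println(len(getOverrideMWs(mws, "a"))) // 2: both matches returned
}
```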
@scott-cotton, check this case:
```shell
$ signadot sandbox apply -f - <<EOF
name: test1
spec:
  cluster: test
  virtual:
  - name: location
    workload:
      kind: Deployment
      namespace: hotrod-devmesh
      name: location
EOF
$ signadot local override --sandbox=test1 --workload=location --port=8081 --to=localhost:18081
Waiting (up to --wait-timeout=3m0s) for sandbox to be ready...
✓ Sandbox status: Ready: All desired workloads are available.
# Re-apply the original sandbox
$ signadot sandbox apply -f - <<EOF
name: test1
spec:
  cluster: test
  virtual:
  - name: location
    workload:
      kind: Deployment
      namespace: hotrod-devmesh
      name: location
EOF
# the local override panics
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e6eae8]
goroutine 1 [running]:
github.com/signadot/cli/internal/builder.(*SandboxBuilder).DeleteOverrideMiddleware(0xc0005bf710, {0xc000387d00, 0xa})
	/home/ddv/git/signadot/cli/internal/builder/sandbox.go:217 +0x68
github.com/signadot/cli/internal/command/local/override.deleteMiddlewareFromSandbox(0xc00046e930, 0xc0000a8048?, {0xc000387d00, 0xa})
	/home/ddv/git/signadot/cli/internal/command/local/override/instrument.go:59 +0x265
github.com/signadot/cli/internal/command/local/override.createSandboxWithMiddleware.mkUnedit.func4({0x2a9eda0, 0xc0000a8048})
	/home/ddv/git/signadot/cli/internal/command/local/override/instrument.go:87 +0xc5
github.com/signadot/cli/internal/command/local/override.runOverride({0x2a9eda0, 0xc0000a8040}, {0x2a9eda0, 0xc0000a8048}, 0xc00046e930)
	/home/ddv/git/signadot/cli/internal/command/local/override/command.go:169 +0x958
github.com/signadot/cli/internal/command/local/override.New.func1(0xc000025808, {0x2719e74?, 0x4?, 0x2719e78?})
	/home/ddv/git/signadot/cli/internal/command/local/override/command.go:66 +0x53
github.com/spf13/cobra.(*Command).execute(0xc000025808, {0xc00061b940, 0x4, 0x4})
	/home/ddv/go/pkg/mod/github.com/spf13/[email protected]/command.go:985 +0xad4
github.com/spf13/cobra.(*Command).ExecuteC(0xc00062e308)
	/home/ddv/go/pkg/mod/github.com/spf13/[email protected]/command.go:1117 +0x44f
github.com/spf13/cobra.(*Command).Execute(0xc000002380?)
	/home/ddv/go/pkg/mod/github.com/spf13/[email protected]/command.go:1041 +0x13
main.main()
	/home/ddv/git/signadot/cli/cmd/signadot/main.go:11 +0x18
```
see e7c8c8d
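The trace points at `DeleteOverrideMiddleware` dereferencing middleware state that no longer exists once the original sandbox spec (with no middleware) is re-applied. The sketch below is not the actual fix in e7c8c8d; all types here (`Middleware`, `Sandbox`, `SandboxBuilder`) are hypothetical stand-ins, and it only illustrates the general shape of a nil-safe delete that returns early when there is nothing to remove.

```go
package main

import "fmt"

// Hypothetical stand-ins; the real builder types differ.
type Middleware struct {
	ArgNames []string // names this entry references (illustrative)
}

type Sandbox struct {
	// After re-applying the original spec, Middlewares can be nil again.
	Middlewares []*Middleware
}

type SandboxBuilder struct {
	sandbox *Sandbox
}

// Nil-safe delete sketch: if the sandbox was re-applied without any
// middleware, there is nothing to delete, so return early instead of
// dereferencing nil. Matching logic is illustrative only.
func (b *SandboxBuilder) DeleteOverrideMiddleware(name string) {
	if b == nil || b.sandbox == nil || b.sandbox.Middlewares == nil {
		return // nothing to delete
	}
	kept := b.sandbox.Middlewares[:0]
	for _, mw := range b.sandbox.Middlewares {
		if mw == nil {
			continue // drop stray nil entries defensively
		}
		match := false
		for _, a := range mw.ArgNames {
			if a == name {
				match = true
				break
			}
		}
		if !match {
			kept = append(kept, mw)
		}
	}
	b.sandbox.Middlewares = kept
}

func main() {
	b := &SandboxBuilder{sandbox: &Sandbox{}} // no middleware present
	b.DeleteOverrideMiddleware("location")    // guard returns early, no panic
	fmt.Println("ok")
}
```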
```go
func readyLoop(ctx context.Context, cancel func(), readiness poll.Readiness, w io.Writer) {
	ticker := time.NewTicker(time.Second)
```
I'm concerned about having these long-running loops polling every second (both this one and the traffic watch). This could put significant load on our control plane and the agent tunnel. I wonder if we should consider increasing the polling interval to something like 10 or 15 seconds.
Hmm.
Some things to consider:
- sandbox apply polls at a frequency of 100ms.
- For simplicity, the poll.Readiness is asynchronous from the checks themselves, whose frequency is defined in NewPoll().Readiness around line 150 of command.go. There it is 5 seconds -- that's the frequency at which it calls the API for these longer sessions -- 50x sandbox apply's interval.
- The choice of 1 second (as opposed to 5) when calling the poll.Readiness interface was to reduce the additive delay of the two processes, since poll.Readiness is asynchronous precisely so the caller need not bother synchronising with the underlying frequency. I don't think it has any effect on API load.
I guess one could break down the frequency of the fetches and
Ah, I see, I got this wrong earlier (I thought we were calling the API every 1 second). A 5 second interval looks much more reasonable to me. The difference compared to sandbox apply is that in that case there's a limit, whereas here there isn’t.
LGTM
Will be tracking override improvements here.