improve watch mechanisms #28
Conversation
@hebestreit ready to test/review
Specify metav1.ListOptions{ResourceVersion: softwarecomposition.ResourceVersionFullSpec} for all summary List() method calls.
api/api.go (outdated)
return vulnsummary, nil

// VulnerabilitySummaries is a virtual resource, it has to be enabled in the storage
return sc.clientset.SpdxV1beta1().VulnerabilitySummaries("").List(context.Background(), metav1.ListOptions{})
Currently all severity counts are 0, and I guess it's necessary to fetch the fullSpec here as well, right?
metav1.ListOptions{ResourceVersion: softwarecomposition.ResourceVersionFullSpec}
Requires: kubescape/storage#194
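For illustration, a sketch of what the suggested change might look like here (assuming the softwarecomposition package referenced above is imported; not verified against the repository):

// sketch: request the full spec so the severity counts are populated
return sc.clientset.SpdxV1beta1().VulnerabilitySummaries("").List(
	context.Background(),
	metav1.ListOptions{ResourceVersion: softwarecomposition.ResourceVersionFullSpec},
)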
api/api.go (outdated)
return configscan, nil

// ConfigScanSummaries is a virtual resource, it has to be enabled in the storage
return sc.clientset.SpdxV1beta1().ConfigurationScanSummaries("").List(context.Background(), metav1.ListOptions{})
Currently all severity counts are 0, and I guess it's necessary to fetch the fullSpec here as well, right?
metav1.ListOptions{ResourceVersion: softwarecomposition.ResourceVersionFullSpec}
Requires: kubescape/storage#194
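The same pattern would apply here; again a sketch under the same assumption about the softwarecomposition import:

return sc.clientset.SpdxV1beta1().ConfigurationScanSummaries("").List(
	context.Background(),
	metav1.ListOptions{ResourceVersion: softwarecomposition.ResourceVersionFullSpec},
)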
Signed-off-by: Matthias Bertschy <[email protected]>
}

if event.Type == watch.Deleted {
	metrics.DeleteVulnWorkloadMetric(item)
I've tried to simulate the deletion of a VulnerabilityManifestSummary by restarting the storage component, which results in an error.
Create a new pod and wait until the VulnerabilityManifestSummary resource has been created:
kubectl run nginx --image=nginx
Next, delete the nginx pod again and then restart the storage pod. The VulnerabilityManifestSummary resource will disappear, but the deletion never reaches this line of code. Instead, the following errors are thrown in the logs:
[warning] error watching workload configuration scan summaries. error: watch error: &Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,RemainingItemCount:nil,},Status:Failure,Message:an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR; received from peer") has prevented the request from succeeding,Reason:InternalError,Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR; received from peer,Field:,},StatusCause{Type:ClientWatchDecoding,Message:unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR; received from peer,Field:,},},RetryAfterSeconds:0,UID:,},Code:500,}; retry-after: 885.510553ms
[warning] error watching workload vulnerability scan summaries. error: watch error: &Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,RemainingItemCount:nil,},Status:Failure,Message:an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR; received from peer") has prevented the request from succeeding,Reason:InternalError,Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR; received from peer,Field:,},StatusCause{Type:ClientWatchDecoding,Message:unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR; received from peer,Field:,},},RetryAfterSeconds:0,UID:,},Code:500,}; retry-after: 853.90771ms
Is this behavior expected, or an exception? It would mean that the exported metrics deviate from the state of the Kubernetes cluster.
When deleting the VulnerabilityManifestSummary manually via k9s it works as expected, so I assume a deletion under normal circumstances during the daily cleanup will also work.
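For context, a minimal sketch (not code from this PR) of a watch loop and where such stream errors surface; newWatch is a hypothetical helper standing in for the real clientset Watch() call, and the imports assumed are context, time, and k8s.io/apimachinery/pkg/watch:

func watchLoop(ctx context.Context, newWatch func(context.Context) (watch.Interface, error)) {
	for {
		w, err := newWatch(ctx) // stands in for the real Watch() call on the summaries client
		if err != nil {
			time.Sleep(time.Second) // back off before retrying
			continue
		}
		for event := range w.ResultChan() {
			switch event.Type {
			case watch.Error:
				// a broken stream (e.g. the storage restart above) surfaces here;
				// Deleted events emitted while the stream was down are lost unless
				// the state is re-listed after reconnecting
			case watch.Deleted:
				// only events delivered on a live stream reach this branch,
				// which is where the metric deletion above would run
			}
		}
		if ctx.Err() != nil {
			return // context cancelled, stop reconnecting
		}
		// ResultChan closed (e.g. the storage component restarted): reconnect
	}
}

If that reading is right, a deletion that happens while the stream is down would not produce a Deleted event for this consumer, which would explain the metrics drifting from the cluster state.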
Hmm, I see, this is an issue. I will:
- make sure the operator cleans up the VulnerabilityManifestSummary
- try to send watch events when the storage cleans up
I expect to resume working on this in the next few days.
cc @hebestreit
This only works from https://github.com/kubescape/storage/releases/tag/v0.0.155 onwards.