feat/gql-proxy: new approach for GraphQL local scale out #3074
base: main
Conversation
A previous attempt at this was made in #2588. How is this different?
Note: Pagination is still pending.
	log log.Logger,
	gqlSchema *ast.Schema,
) func(handler http.Handler) http.Handler {
	proxyFunctionalityMap := map[string]GqlScaleOutHandlerFunc{
@rchincha this map statically stores which GQL operation needs which kind of handler.
There will be a couple of generic handlers such as fanout and some specific handlers if any of the operations need custom behavior.
What do you think about this approach?
It's better than last time since we don't need to maintain a separate handler for each operation type, but I'm open to more ideas on making this better.
	}
}
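For illustration, here is a minimal sketch of what such a static operation-to-handler map could look like. The handler signature, the fanOutHandler stub, and the package name are assumptions; only the operation names are taken from the query list later in this PR.

package gqlproxy

import "net/http"

// GqlScaleOutHandlerFunc is assumed here to be a plain HTTP handler function;
// the real signature in this PR may carry extra arguments (config, logger,
// parsed GQL schema, and so on).
type GqlScaleOutHandlerFunc func(w http.ResponseWriter, r *http.Request)

// fanOutHandler stands in for the generic handler that proxies the query to
// every cluster member and merges the JSON responses.
func fanOutHandler(w http.ResponseWriter, r *http.Request) {
	// proxy to each member, deep-merge the responses, write the merged result
}

// proxyFunctionalityMap statically records which handler each GQL operation
// needs; most operations share the generic fan-out handler, and only the ones
// with custom behavior would get a dedicated handler.
var proxyFunctionalityMap = map[string]GqlScaleOutHandlerFunc{
	"GlobalSearch":    fanOutHandler,
	"ImageList":       fanOutHandler,
	"CVEListForImage": fanOutHandler,
	// ...dedicated handlers for operations that need custom behavior
}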
func deepMergeMaps(a, b map[string]any) map[string]any { |
@rchincha this is the new approach for aggregating the data. Since the response is JSON, we can aggregate the data as a map type with individual logic for the embedded types: nested maps, numeric types, and arrays.
This should be common across all handlers and may change when pagination comes into the picture.
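As a minimal sketch of the merge logic described above (not the exact code in this PR; summing numbers and concatenating arrays is an assumption about the aggregation semantics):

// deepMergeMaps merges two decoded JSON objects: nested maps are merged
// recursively, numbers (decoded as float64) are summed, slices are
// concatenated, and any other conflict falls back to the second operand.
func deepMergeMaps(a, b map[string]any) map[string]any {
	out := make(map[string]any, len(a))
	for k, v := range a {
		out[k] = v
	}

	for k, bv := range b {
		av, exists := out[k]
		if !exists {
			out[k] = bv

			continue
		}

		switch avTyped := av.(type) {
		case map[string]any:
			if bvTyped, ok := bv.(map[string]any); ok {
				out[k] = deepMergeMaps(avTyped, bvTyped)

				continue
			}
		case float64:
			if bvTyped, ok := bv.(float64); ok {
				out[k] = avTyped + bvTyped

				continue
			}
		case []any:
			if bvTyped, ok := bv.([]any); ok {
				out[k] = append(append([]any{}, avTyped...), bvTyped...)

				continue
			}
		}

		out[k] = bv
	}

	return out
}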
This sounds good. The only concern is whether we can validate the schema so that we never emit garbage out.
Force-pushed from a5dbb43 to ac85801
{
    "distSpecVersion": "1.1.0",
    "storage": {
        "rootDirectory": "./workspace/zot/data/mem1",
We have to figure out a scheme to append a member path.
Would like to have a single zot configuration that folks don't have to tweak.
I agree. The reason for this is mostly because I was starting 2 binaries on the same host for development (so I had to change the path and port). For an actual deployment, the config files would be identical.
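One hypothetical scheme (not part of this PR) would be to derive the member-specific root directory at startup, for example by appending the local hostname to a shared rootDirectory, so a single config file works for every member:

package gqlproxy

import (
	"os"
	"path/filepath"
)

// memberRootDirectory is a hypothetical helper: given the rootDirectory from
// a shared config, it appends the local hostname so every member resolves a
// unique storage path from the same configuration file.
func memberRootDirectory(sharedRoot string) (string, error) {
	host, err := os.Hostname()
	if err != nil {
		return "", err
	}

	return filepath.Join(sharedRoot, host), nil
}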
Signed-off-by: Vishwas Rajashekar <[email protected]>
Latest analysis of GQL queries:
CVEListForImage
CVEDiffListForImages
ImageListForCVE
ImageListWithCVEFixed
ImageListForDigest
RepoListWithNewestImage
ImageList
ExpandedRepoInfo
GlobalSearch
DerivedImageList
BaseImageList
Image
Referrers
StarredRepos
BookmarkedRepos
Force-pushed from ac85801 to 13f6084
Signed-off-by: Vishwas Rajashekar <[email protected]>
Signed-off-by: Vishwas Rajashekar <[email protected]>
…poInfo Signed-off-by: Vishwas Rajashekar <[email protected]>
Force-pushed from 13f6084 to c991fe4
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #3074      +/-   ##
==========================================
- Coverage   90.79%   90.64%   -0.15%
==========================================
  Files         172      177       +5
  Lines       32385    32584     +199
==========================================
+ Hits        29404    29536    +132
- Misses       2242     2298     +56
- Partials      739      750     +11

View full report in Codecov by Sentry.
for _, targetMember := range config.Cluster.Members {
	proxyResponse, err := proxy.ProxyHTTPRequest(request.Context(), request, targetMember, config)
	if err != nil {
Do we want to return a failure even if just one member fails?
I think if at least one member responds, we should return that and swallow the errors, maybe with some indicator somewhere, such as logs. HTTP status 206 Partial Content could also work since this is our own API.
That sounds like a good idea. One thought I had is that instead of swallowing the error, we could append an error to the Errors list key in the GQL response and send it to the client, so there is awareness of the error in the system.
The client can choose to ignore the error and use the valid data in the response, or ideally show the valid data while also indicating that there were errors during processing. With this approach, status 206 could be the return status, as you've suggested.
What do you think?
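A minimal sketch of that idea, assuming the proxy already holds the merged data plus a per-member error map (all names here are illustrative, not the PR's actual code):

package gqlproxy

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// gqlResponse mirrors the standard GraphQL response envelope.
type gqlResponse struct {
	Data   map[string]any   `json:"data"`
	Errors []map[string]any `json:"errors,omitempty"`
}

// writeAggregated returns whatever data was merged from the members that
// responded, appends one entry per failed member to the GQL errors list, and
// signals partial results with 206 Partial Content.
func writeAggregated(w http.ResponseWriter, merged map[string]any, memberErrs map[string]error) {
	resp := gqlResponse{Data: merged}

	for member, err := range memberErrs {
		resp.Errors = append(resp.Errors, map[string]any{
			"message": fmt.Sprintf("cluster member %s failed: %v", member, err),
		})
	}

	w.Header().Set("Content-Type", "application/json")

	if len(memberErrs) > 0 {
		w.WriteHeader(http.StatusPartialContent) // 206
	}

	_ = json.NewEncoder(w).Encode(resp)
}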
What type of PR is this?
feature
Which issue does this PR fix:
Towards #2434
What does this PR do / Why do we need it:
Previously, only dist-spec APIs were supported for scale-out because, in a shared storage environment, the metadata is shared and any instance can correctly respond to GQL queries since all the data is available.
In a local scale-out cluster deployment, the metadata store, in addition to the file storage, is isolated to each member of the cluster. Because of this, GQL queries also need to be proxied for UI and client requests to work as expected.
This change introduces a new GQL proxy + a generic fan-out handler for GQL requests.
Testing done on this change:
Just manual testing with a local setup for now; real testing is a TODO.
Will this break upgrades or downgrades?
No
Does this PR introduce any user-facing change?:
TODO
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.