syz-cluster: initial code #5620
Conversation
dvyukov left a comment:
The first patch is not questionable. Let's review/merge it separately.
FTR:
Please give more context. Do you need it for local development or for testing?
For the dev environment, it already exists - I've just added a `syz-cluster/pkg/db: add support for running under syz-env` commit that runs the Spanner emulator binary for our Go tests. Seems to be quite simple, actually. For some other external dependencies (e.g. Argo workflows or the blob storage), there are already mocks in the code, but for the DB I fear that there will just be too much boilerplate code. If we can just run a DB emulator for the unit tests, why not do it? Doing so lets us test not just the higher-level logic, but also all the SQL commands used in the implementations.
Unit testing allows you to test a component in an isolated environment where you can simulate any kind of external service behavior (including failures). Mocks give you the required flexibility and are a faster way to emulate external dependencies. An emulator may be used to test the interaction. In our case we can just use the real service instead of an emulator, which will always differ from it.
Let's add testing and error processing sections to the design doc?
The basic code of a K8S-based cluster that:

* Aggregates new LKML patch series.
* Determines the kernel trees to apply them to.
* Builds the base and the patched kernel.
* Displays the results on a web dashboard.

This is a very rudimentary version with a lot of TODOs that provides a skeleton for further work. The project makes use of Argo workflows and Spanner DB. Bootstrap is used for the web interface.

Overall structure:

* syz-cluster/dashboard: a web dashboard listing patch series and their test results.
* syz-cluster/series-tracker: polls Lore archives and submits the new patch series to the DB.
* syz-cluster/controller: schedules workflows and provides an API for them.
* syz-cluster/kernel-disk: a cron job that keeps a kernel checkout up to date.
* syz-cluster/workflow/*: workflow steps.

For the DB structure see syz-cluster/pkg/db/migrations/*.
Start a spanner emulator binary (if it's present in the image).
Findings are crashes and build/boot/test errors that happened during the patch series processing.
It's not necessary - submit the results from the individual steps instead. Report patched kernel build failures as findings.
Adjust its options to allow the uploading of big files (that is necessary for Argo workflow artifacts).
Run a smoke test on the base kernel build and report back the results.
Report the findings only for the boot test of the patched kernel.
Configure the number of patch series processed in parallel via an env variable.
TODO (for this PR):

* Figure out why go.mod is bumped to 1.23.1. That is:
  * `go: github.com/argoproj/argo-workflows/v3@v3.6.2 requires go >= 1.23.1 (running go 1.22.7)`, although v3.5.13 still only requires Go 1.21.
  * `go: module k8s.io/api@v0.32.0 requires go >= 1.23.0; switching to go1.23.4`, although v0.31.4 requires Go 1.22.
* Run the syz-cluster tests on GitHub CI (we'll need at least some local Spanner emulator binary).
* Pass STORAGE_EMULATOR_HOST to the argo server.

Questions/thoughts out loud:
1. What to do with the vendor folder
Our `vendor/` folder is quite big, and this PR adds even more modules on top of that. Do we still want to keep that code in our repository? Is it possible to only keep the modules needed by other components but not `syz-cluster` (since I do `go mod download` in the Docker containers anyway)? Filed #5645.
2. Some pkg/ packages are too eager to depend on prog/ and sys/
It has made some Dockerfiles more complicated than they could have been.
Should we strive to remove these dependencies?