ci: Update rpm testsuite #91
Conversation
|
GH action running inside of my fork: The problem is that the container-based CI likely runs with the rpm-sequoia from the repositories instead of the one that was just built. Will need to figure out some way to pass it in.
|
Thanks for working on this. I'm probably not telling you anything new, but there's got to be some cmake magic involved :D
The cmake magic is handled. The container magic is not :)
|
Yeah. In the meantime, we'll have to somehow inject the rpm-sequoia artifacts into the test image. There are various ways, and I'm happy to help with that.
|
One thing you may try is to just run the […]. Edit: Alternatively, you could call the […]. The script is used by […]. Edit 2: I used the word "just" at least 3 times here. Everything is so easy, you can "just" do it. /s 😅
Even deeper magic...
|
Easy as pie :) Now I see that with this we can skip cmake altogether, including most of the dependencies. But the container build will still take some more time now, unless we are able to cache it somehow. The working run is here: https://github.com/Jakuje/rpm-sequoia/actions/runs/15902674765/job/44849388679 Still playing around and trying to optimize it. But the good thing is that I can run the tests locally now and verify whether my changes to rpm-sequoia do something to the rpm testsuite :) It's also a question whether you might be interested in keeping the tests running against the old RPM version as well, instead of replacing it with the new one.
|
Can either of you approve the CI run in this PR? Any thoughts on whether this is something useful to have now, before @dmnks is able to put together something more robust?
I'm part of this GH org but I don't work on this repo, and am not sure how to approve this. @nwalfield?
LGTM! That said, we'll have to see if it actually works, once CI is approved.
Oh, sorry, completely missed this comment 😄 Good to know! |
I already approved it. |
|
Seems like the culprit then isn't a missing approval, but rather this: […]
|
It still probably needs some more love. I am trying to see if I can address some of the issues from #87, but this approach does not seem to be picking up the .so object correctly.
Did it in a cleaner way by building a container on top of the rpm one and adding the .so objects. Now it looks like it does the right thing, and I am able to verify that the test "file digests sha3", which was previously skipped, now works. Again, my local run: https://github.com/Jakuje/rpm-sequoia/actions/runs/15907032593/job/44864676853 Yes, this PR is a mess now. What would be your preference for getting things merged? Should I split the code change and the CI into separate PRs? Or any other thoughts?
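For illustration, the layering might look roughly like this; the image names, tag, and library path here are placeholders for the sketch, not the actual CI code:

```sh
# Layer the freshly built rpm-sequoia on top of the rpm test image.
# "localhost/rpm-tests" and the soname/path are placeholders.
cat > Containerfile.ci <<'EOF'
FROM localhost/rpm-tests:latest
COPY librpm_sequoia.so.1 /usr/lib64/
RUN ldconfig
EOF
podman build -f Containerfile.ci -t localhost/rpm-tests-sequoia .
```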
nwalfield left a comment
This looks good, thanks for working on it! I just have a few minor comments.
|
```
echo "::group::make check"
cd rpm
patch -p1 < ../tests/rpm.patch
```
|
Can you please explain why this is necessary? Is this intended as a short-term workaround for an issue in RPM?
|
This patches the rpm testsuite to run the sha3 tests, which are currently skipped with sequoia. This is here mostly to confirm the whole thing works. Not intended for production :)
|
Does that mean that this commit should be dropped before merging, or that we need to carry this for a while?
|
Whatever works for you. Either it can be carried until this is enabled in the rpm testsuite (after the rpm-sequoia release), or it can be removed now and we will just believe it works :)
```diff
 with:
   repository: rpm-software-management/rpm.git
-  ref: rpm-4.18.x
+  ref: master
```
|
I'd prefer to use the latest released version.
|
The problem is that the latest release is 4.20.x now, which, to my understanding, does not have a lot of the stuff we need; that will be in 6.x (which is now in alpha).
So for the long term, I think we should switch to the 6.x branch, but for now I would prefer to keep following master.
|
Yup, 4.20 does not and will not have most of the signature/key related work going into 6.x. We'll branch 6.x in a couple of months, but would rather not do so just yet because that'd just cause unnecessary busywork in this situation.
|
> The problem is that the latest release is 4.20.x now, which, to my understanding, does not have a lot of the stuff we need; that will be in 6.x (which is now in alpha).
> So for the long term, I think we should switch to the 6.x branch, but for now I would prefer to keep following master.

Thanks for explaining. After we merge this PR, please open an issue so we don't lose track of this.
```
  fetch-depth: 1
  path: rpm-pristine
```
|
```
- name: Setup Build Cache rpm
```
|
Does the build cache not help anymore?
|
Previously, with autotools, the build took a few stages -- cloning the repo, copying, calling configure, and building with make in a different directory, resulting in three directories.
Now, the build happens completely in containers (the only dependency installed is podman now). So we could cache the containers somehow (would have to figure out how), but otherwise I do not think there are any build artifacts left in the working directory that could be cached.
|
TIL that GitHub Actions allows for caching stuff. That does sound useful (for our purposes in rpm.git, too).
But yes, you could cache the base image (`podman build --target base`) as that's the thing that doesn't change across CI jobs, at least not normally.
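A minimal sketch of one way to do that without a registry, assuming some directory that the CI's cache mechanism saves and restores between runs (the paths and image name are made up):

```sh
# Restore the base image from a cached tarball if present,
# otherwise build it and save it for the next run.
CACHE=~/.cache/ci
if [ -f "$CACHE/base.tar" ]; then
    podman load -i "$CACHE/base.tar"
else
    podman build --target base -t rpm-ci-base .
    mkdir -p "$CACHE"
    podman save -o "$CACHE/base.tar" rpm-ci-base
fi
```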
|
Going even further, you could indeed cache rpm's cmake directory, but that'd require even more (ugly) workarounds because of all this container inception fun.
|
> Previously, with autotools, the build took a few stages -- cloning the repo, copying, calling configure, and building with make in a different directory, resulting in three directories. Now, the build happens completely in containers (the only dependency installed is podman now). So we could cache the containers somehow (would have to figure out how), but otherwise I do not think there are any build artifacts left in the working directory that could be cached.
Thanks for clarifying. Let's first see how long it takes without caching and only explore caching if the performance is unacceptable.
|
But I do not run cmake at all; I build the container directly from the source tree. Right now it takes 3 minutes for the job to finish, which I still consider doable.
Generally, the containers should be "cached" by pushing them into some registry rather than directly, but there will likely be some GH action for that.
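For illustration, the registry variant could look roughly like this (the image name and tag are placeholders, and GHCR permissions would need to be set up separately):

```sh
# Publish the intermediate image to GHCR so later runs can pull it
# instead of rebuilding; <owner> is a placeholder.
echo "$GITHUB_TOKEN" | podman login ghcr.io -u "$GITHUB_ACTOR" --password-stdin
podman build --target base -t ghcr.io/<owner>/rpm-ci-base:latest .
podman push ghcr.io/<owner>/rpm-ci-base:latest
```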
|
> But I do not run cmake at all; I build the container directly from the source tree.

Yep, I know. I'm just noting that a cmake build could be cached if really necessary; however, as Neal pointed out, let's see how bad it really is in practice if we don't cache anything. For the record, in our CI we don't cache anything right now: it rebuilds the whole thing every time.

> Generally, the containers should be "cached" by pushing them into some registry rather than directly, but there will likely be some GH action for that.
That's what I like to believe as well. It would be nice to have these (intermediate) images cached, and we'd like to do that at some point, too.
(force-pushed from 4867bc3 to 9553921)
Ooh, nice @Jakuje. With that, the commented-out bits in rpm's "openpgp v6 keys and signatures" test should pass.
Unfortunately, it does not. I am getting the right fingerprint, but there is still some issue with signature verification (I hope this is the relevant snippet of the debug log): […] I did not get to look into that in detail yet, as I got pulled into another meeting.
|
What I'm seeing with this applied is no longer "digest prefix mismatch" (I got that before the keyid lookup was fixed); now, using my own local test key (outside the test-suite), I get: […]
But it's progress!
|
Let's see what jumps out at us in the CI:
|
It fails, but it shows that I had the wrong logic for printing the test logs, so it does not show anything useful. I will fix it in the next iterations.
|
It would be great if we could figure out what the difference is between your test keys and the testsuite keys/process. Do you sign with the main key or a subkey? On a side note, is there a way to pass RPM_TRACE so that it is effective in the test container? The […] If I see it right, in my case (testsuite) the […] Moreover, the digest prefix does not match in my case, which means either the signature is invalid…? But Panu mentioned in #87 that the binary signatures created by sequoia verified fine on their own. Is there some simple way to verify this without doing too much surgery on the binary rpms?
Set it in the bwrap call in snapshot() of tests/atlocal.in |
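For reference, bwrap's `--setenv` option is the relevant knob here; a tiny standalone demonstration follows (the real invocation in snapshot() has many more arguments than this):

```sh
# Show that an environment variable set via --setenv is visible
# inside the bwrap sandbox.
bwrap --ro-bind / / --setenv RPM_TRACE 1 /usr/bin/env | grep RPM_TRACE
```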
https://github.com/rpm-software-management/rpm/tree/master/tests#writing-tests (see […])
|
Looking closer, my "local" test case is exactly the one from the test-suite: the key is the one in data/keys, and it is signed with sq using 036824F0AC60AED6F1A3256F88190469F6D7255E3D8E41C577233AA03E0BB9D3 as %_openpgp_sign_id. As for the failure message, I was comparing apples and oranges, though. In the test-suite, the first test is about testing behavior without the key, so NOKEY is the expectation, and there I do get that "digest prefix mismatch" with these patches too. With the key imported, I get the "message has been manipulated" error. So rpm does find the key (which it didn't previously), but something is still off.
|
Here are some rough "tools" for extracting the header and a signature out of an rpm; the Python snippet is called "immut.py" below: […] The package here is hello-2.0-1.x86_64.rpm from the rpm testsuite, signed with the v6 key in the test-suite: […]
Extract the signature from the package (this only works for a single signature in a package, but that's enough in this case): […]
Extract the header of the package with the Python script above: […]
Verify with sq: […]
|
Also updates the GH Actions versions to the latest ones. Signed-off-by: Jakub Jelen <jjelen@redhat.com>
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
Formed the same way as the v4 ones. Signed-off-by: Jakub Jelen <jjelen@redhat.com>
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
to match the expected one Signed-off-by: Jakub Jelen <jjelen@redhat.com>
|
The current version worked for me locally (after a slight adjustment of the outputs) -- @pmatilai, please take a look whether it makes sense. The CI run in my fork is available here: https://github.com/Jakuje/rpm-sequoia/actions/runs/15983457845/job/45082932652 (green). Long story short, the v6 signatures are salted, which means the input to the digest needs to be prefixed with a salt that is unique for each signature: https://datatracker.ietf.org/doc/html/rfc9580#signature-salt-rationale This does not fit into the current rpm gpg subsystem design, which reuses and duplicates a single hash context for an rpm and passes it to us with the RPM header already hashed in. From here, there are two possible ways: roughly, either teach rpm itself to hash the per-signature salt first, or work around it entirely inside rpm-sequoia.
The current code is using the second approach, which I agree is not clean, but it is probably the only way to achieve successful verification of v6 signatures without extensive changes to rpm. Reviews, thoughts, and comments welcome.
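To make the salt ordering concrete, here is a small standalone illustration. It uses OpenSSL's EVP API purely for demonstration (rpm-sequoia does not use OpenSSL), and the real v6 computation also hashes an OpenPGP signature trailer after the data, which is omitted here:

```c
#include <stddef.h>
#include <openssl/evp.h>

/* A v6 OpenPGP signature digest covers salt || data (|| trailer, omitted):
 * the per-signature salt must go into the hash before any signed data. */
int v6_style_digest(const unsigned char *salt, size_t salt_len,
                    const unsigned char *data, size_t data_len,
                    unsigned char out[EVP_MAX_MD_SIZE], unsigned int *out_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    if (ctx == NULL)
        return -1;
    int ok = EVP_DigestInit_ex(ctx, EVP_sha256(), NULL)
          && EVP_DigestUpdate(ctx, salt, salt_len)   /* salt first */
          && EVP_DigestUpdate(ctx, data, data_len)   /* then the signed data */
          && EVP_DigestFinal_ex(ctx, out, out_len);
    EVP_MD_CTX_free(ctx);
    return ok ? 0 : -1;
}
```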
|
Confirming that the latest version works for me locally too, great job @Jakuje! 🥳 That v6 salt is a bit of a curveball. Good thing you found a way to handle it entirely within rpm-sequoia; that's plenty good enough for starters. But rpm does support calculating multiple digests identifiable by an arbitrary id, so this shouldn't be too much of an obstacle to support natively: if rpm-sequoia adds a way for us to get the salt data, we should be able to add it to the per-signature digest in rpmvsInitRange() and then ... actually it should just work, because the id is already unique. The adjusted test expectation seems correct to me; the commented version in there currently is clearly wrong in places, now that I look at it.
|
Hrm, okay, we'll need to add a rpmDigestBundleUpdateID() call on the rpm side; the current expectation is that there's only one stream of data that is hashed from various points, but that should be just ~10 LoC. Then we need something like pgpDigParamsSalt() that takes a pgpDigParams and returns the salt data. And then in rpmvsRange() we check whether there is an associated salt in sinfo->sig and, if so, call rpmDigestBundleUpdateID() with it after initializing the digest. And then it should just work. ...until we find the next flaw in the plan 😆
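A rough sketch of how those pieces could fit together. Everything below is hypothetical: rpmDigestBundleUpdateID() does not exist yet, pgpDigParamsSalt() is only proposed above, and the helper merely stands in for the real rpmvs code:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct pgpDigParams_s *pgpDigParams;       /* opaque, as in rpm */
typedef struct rpmDigestBundle_s *rpmDigestBundle; /* opaque, as in rpm */

/* Proposed: feed extra bytes into one specific digest of a bundle. */
int rpmDigestBundleUpdateID(rpmDigestBundle bundle, int id,
                            const void *data, size_t len);

/* Proposed: return a v6 signature's salt; v3/v4 yield NULL and length 0. */
int pgpDigParamsSalt(pgpDigParams sig, const uint8_t **salt, size_t *len);

/* Stand-in for the rpmvsRange()/rpmvsInitRange() step described above:
 * right after initializing a per-signature digest, hash its salt, if any. */
static void initSigDigest(rpmDigestBundle bundle, int id, pgpDigParams sig)
{
    const uint8_t *salt = NULL;
    size_t saltlen = 0;

    if (pgpDigParamsSalt(sig, &salt, &saltlen) == 0 && saltlen > 0)
        rpmDigestBundleUpdateID(bundle, id, salt, saltlen);
}
```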
This sounds fine to me (although I'm unable to comment too much on the rpm side). Adding […]
|
For this particular purpose, differentiating an error vs. an expected empty result doesn't matter much, but technically I guess one might want to return an error if a pgpDigParams of a key was passed, whereas calling it on a v3/v4 signature is not an error; there's just no data to return. But I think we'd want the data as a uint8_t pointer with a separate length item, instead of a string. The caller would check for NULL data / 0 length before passing it on anyhow.
|
So I guess it'd be something like this, returning 0 on success (data can be const or not, whatever is easier from the Rust POV):
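The prototype itself did not survive the copy here; going by the constraints just stated (a pgpDigParams in, a uint8_t pointer plus a separate length out, 0 on success), it was presumably something along these lines, with the parameter names being guesses:

```c
/* Return a signature's salt; for v3/v4 signatures *data is set to NULL
 * and *len to 0. Returns 0 on success. */
int pgpDigParamsSalt(pgpDigParams sig, const uint8_t **data, size_t *len);
```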
|
- Version 6 signatures include a salt that must be hashed before any
data is hashed.
- Add a function, `pgpDigParamsSalt`, to return a signature's salt.
- See #91.
- See rpm-software-management/rpm#3846
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
|
I'm closing this in favor of #93.