Conversation

@zachariahcox zachariahcox (Collaborator) commented Nov 18, 2024

fixes: #1178

The threats.md file is intended to be a one-stop summary of all the threats mitigated by SLSA.
From this page, we should plan to redirect to other parts of the documentation.

Edit: for posterity, the goal of the PR was changed.
The new goal was to fill the /draft folder with content for v1.1.
We'll make a cut there, then move to the next version, which formally includes the source track.

@netlify netlify bot commented Nov 18, 2024

Deploy Preview for slsa ready!

🔨 Latest commit: 7db70c8
🔍 Latest deploy log: https://app.netlify.com/sites/slsa/deploys/67d0a0f35d8e6c0008454f8e
😎 Deploy Preview: https://deploy-preview-1236--slsa.netlify.app

Copilot AI (Contributor) left a comment

Copilot reviewed 1 out of 1 changed files in this pull request and generated no suggestions.

Comments skipped due to low confidence (2)

docs/spec/draft/threats.md:86

  • [nitpick] The word 'SHOULD' should not be in all caps unless it is a specific requirement in a specification. Consider changing it to 'should'.
Trustworthiness scales with transparency, and consumers SHOULD push on their vendors to follow transparency best-practices.

docs/spec/draft/threats.md:86

  • [nitpick] The phrase 'Trustworthiness scales with transparency' is unclear. Consider rephrasing it to 'Trustworthiness increases with transparency'.
Trustworthiness scales with transparency, and consumers SHOULD push on their vendors to follow transparency best-practices.


@TomHennen TomHennen (Contributor) left a comment

Just the one comment, otherwise this looks great!

@adityasaky adityasaky (Member) left a comment

this isn't a complete review because I wasn't sure if more changes are coming, so I'm submitting this for visibility for now.

<details><summary>Software producer intentionally submits bad code</summary>
*Mitigation:*
This kind of attack cannot be directly mitigated through SLSA controls.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
A Member left a comment

Suggested change:
- Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
+ Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.

Also, can we point to some specific ways to use scorecard here? I think we want to be careful with what we say can help with something like the xz attack...

@TomHennen TomHennen (Contributor) commented Dec 4, 2024

If we merge this before 1.1 goes out, will that make it harder to remove this from the 1.1 release? Have we decided we want to include this in the 1.1 release (fine with me at this point)?

@lehors thoughts?

@zachariahcox zachariahcox changed the title content: source track: address org threats content: source track: address provider threats Dec 4, 2024
Comment on lines 71 to 78
Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
For example, a consumer may choose to only consume source artifacts from repositories that have a high score on the OSSF Scorecard.
zachariahcox (Collaborator, Author) left a comment

Suggested change:
  Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
  When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.
  Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
  For example, a consumer may choose to only consume source artifacts from repositories that have a high score on the OSSF Scorecard.
+ Ultimately, consumers must decide which producers are trusted to produce artifacts and which issuers are trusted to produce VSAs.

We could redirect from here over to "verifying platforms".
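The kind of consumer policy described above can be sketched as follows. This is a hypothetical illustration: the threshold, scores, and function name are made up, and real scores would come from the Scorecard tool itself (e.g. its CLI's JSON output).

```python
# Hypothetical sketch of a consumer-side policy: only accept source
# artifacts from repositories whose OSSF Scorecard aggregate score
# meets a threshold. The `scores` dict stands in for output of the
# real `scorecard` CLI; nothing here is an official Scorecard API.

MIN_SCORE = 7.0  # illustrative threshold, not a SLSA requirement

def accept_source(repo: str, scores: dict) -> bool:
    """Return True if the repo's aggregate score meets the policy threshold.

    Unknown repositories are rejected: absence of a score is treated
    as absence of evidence, not as a passing grade.
    """
    return scores.get(repo, 0.0) >= MIN_SCORE

scores = {
    "github.com/ossf/scorecard": 9.2,    # made-up score for illustration
    "github.com/example/unvetted": 3.1,  # made-up score for illustration
}
assert accept_source("github.com/ossf/scorecard", scores)
assert not accept_source("github.com/example/unvetted", scores)
assert not accept_source("github.com/example/unknown", scores)
```

Note that a score-threshold policy quantifies risk; as the thread discusses, it cannot fully remove it.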

A Contributor left a comment

Sorry for the late reply. Why do consumers even need to know which producers are trusted to produce artifacts (source code, I imagine, in this case)? Couldn't they just get away with the Source Track VSA from the issuer, and call it a day so long as its subjects/outputs (e.g., source code) match the inputs to the next step (e.g., build)?

@zachariahcox zachariahcox changed the title content: source track: address provider threats content: source track: update source threats for 1.1 Dec 4, 2024
@zachariahcox zachariahcox force-pushed the users/zacox/orgthreat branch from 421ff2b to 97094b5 Compare December 4, 2024 22:27
Copilot AI (Contributor) left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@TomHennen TomHennen (Contributor) left a comment

Looking good!

zachariahcox and others added 5 commits March 3, 2025 13:23
Co-authored-by: Tom Hennen <[email protected]>
Signed-off-by: Zachariah Cox <[email protected]>
Co-authored-by: Tom Hennen <[email protected]>
Signed-off-by: Zachariah Cox <[email protected]>
Co-authored-by: Tom Hennen <[email protected]>
Signed-off-by: Zachariah Cox <[email protected]>
Co-authored-by: Tom Hennen <[email protected]>
Signed-off-by: Zachariah Cox <[email protected]>
@zachariahcox zachariahcox (Collaborator, Author) commented

I'm going to take another crack at the source threats overview. I mostly left it alone today from where it was in december.

@TomHennen TomHennen (Contributor) left a comment

I'm assuming the idea is that unresolved conversations have been addressed one way or the other. Given that any open threads on my end were optional I think this seems fine.

I did find an additional tweak that we might want to make w.r.t. unique mappings, not sure how important it is. I'll approve and you can address as you see fit.

> repository contributors and owners map to unique persons.
> *Example:* Adversary creates a pull request using a secondary account and approves it using their primary account.
>
> Solution: The producer must require all accounts with 'write' and 'approval' permissions to be strongly authenticated and ensure they map to unique persons.
A Contributor left a comment

It occurred to me that we might have the mapping backwards?

It's not that we want to make sure each account maps to a single person, but that we want to make sure each person maps to a single account.

@adityasaky adityasaky (Member) commented Mar 7, 2025

If two accounts under consideration must map to unique persons, don't we also have the case that each person is mapped to at most one account? I guess it could be clearer and set a 1:1 expectation?

That said, I think it is better to put the emphasis on the persons involved being distinct, as you suggest, rather than the accounts, it would partially address the concern I had in the other thread I think.

A Member left a comment

I don't know what specific suggestions might be, but I feel like the current wording requiring unique person associations is clear enough.

zachariahcox (Collaborator, Author) left a comment

I'm not sure how to tighten this one up.
I feel like the threat, mitigation and example are all good, but the solution needs work.

@zachariahcox zachariahcox (Collaborator, Author) commented Mar 10, 2025

I could just adjust it -- "the producer must prevent this kind of thing from happening somehow"


> *Threat:* Software producer intentionally submits "bad" code, following all proper processes.
> - The code is open source and has a sufficiently large user-base that malicious changes are likely to be detected.
A Member left a comment

Are you trying to say that they would be detected during review before the revision is accepted or during some routine perusal of the repository code?

zachariahcox (Collaborator, Author) left a comment

I guess both to some extent.
Expert code review may be "required" to approve something into the main branch, but PRs are often reviewed by non-authorized (but maybe informed) reviewers too.
Those extra eyes help.

Maybe this should be a time-to-mitigation claim.
With more sets of eyes, the length of time that a vulnerability exists in the repo is reduced.

@zachariahcox zachariahcox (Collaborator, Author) commented Mar 10, 2025

or maybe

The repo is open source with an active user-base. High numbers of engaged users increase the likelihood that bad code is detected during code review and reduce the time-to-detection when bad code makes it in.

A Member left a comment

I think commenting about a reduced time to detection feels right. We can also make an additional comment about an "expert" review as a potential additional mitigation.

How about

Suggested change:
- The code is open source and has a sufficiently large user-base that malicious changes are likely to be detected.
+ - The code is open source and has a large, engaged user-base, which reduces the time to detect and remediate malicious changes to the code base.
+ - The organization requires that all code is vetted by expert reviewers before revisions are accepted.

> repository contributors and owners map to unique persons.
> *Example:* Adversary creates a pull request using a secondary account and approves it using their primary account.
>
> Solution: The producer must require all accounts with 'write' and 'approval' permissions to be strongly authenticated and ensure they map to unique persons.
A Member left a comment

An account that creates a revision request doesn't necessarily need to have write permissions. If an account without write permissions opens a PR, that account could be owned by a separate maintainer account with write permissions and not violate this.

Instead, would it be more accurate to state that all accounts involved in creating and reviewing a proposed revision are mapped to unique persons?
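One way to picture the check being proposed is a small sketch. The function name and the `person_of` mapping are hypothetical; a real producer would source the account-to-person mapping from its identity provider.

```python
# Hypothetical sketch of the "unique persons" check discussed above:
# every account involved in creating or approving a revision must map
# to a distinct, known person, so one person cannot both author and
# approve a change through two accounts.

def involves_unique_persons(accounts, person_of):
    """True if the given accounts all map to distinct, known persons."""
    persons = [person_of.get(a) for a in accounts]
    # Unknown accounts (None) are rejected, as are repeated persons.
    return None not in persons and len(persons) == len(set(persons))

# Illustrative mapping: "alice" controls two accounts.
person_of = {"alice-dev": "alice", "alice-alt": "alice", "bob": "bob"}

# Author approves their own change via a secondary account: rejected.
assert not involves_unique_persons(["alice-dev", "alice-alt"], person_of)
# Distinct persons author and approve: accepted.
assert involves_unique_persons(["alice-dev", "bob"], person_of)
# An unmapped (anonymous) account fails the check.
assert not involves_unique_persons(["alice-dev", "ghost"], person_of)
```

This framing puts the emphasis on the persons involved being distinct rather than on account permissions, which matches the direction the thread lands on.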

A Contributor left a comment

I think the trick is that it should still be possible for anonymous folks to send PRs. They don't need to be unique. But you'd want unique and trusted people to be involved.

I also wonder how much of this nuance is actually needed. These are just example solutions, they're not really meant to be completely bulletproof (I don't think?).

Another alternative is we just punt and say "SLSA doesn't address this yet". But that seems annoying. :p

A Member left a comment

If an anonymous person sends a PR then this can be protected against by requiring 2-party reviews. I don't know how to word that though.

You are probably right that this much nuance isn't needed here. This might even be addressing a separate threat?

@zachariahcox zachariahcox (Collaborator, Author) commented Mar 10, 2025

> An account that creates a revision request doesn't necessarily need to have write permissions.

Not to get tooooooo in the weeds, but in most implementations an open PR requires write permissions to create the proposed merge commits for CI.
Because of this, opening a PR does require write permissions on the git repo.

Comment on lines 219 to 221
*Mitigation:* The producer declares that only the final delta is considered approved.
In this configuration, intermediate revisions are not considered to be approved and are not added to the protected context (e.g. the `main` branch).
With git version control systems this is called a "squash" merge strategy.
A Member left a comment

While it is likely harder to detect/enforce, another reasonable mitigation is just to review all commits. There could even be automation which flags content from earlier commits that have been removed/overwritten in order to try to detect this type of attack.

@zachariahcox zachariahcox (Collaborator, Author) commented Mar 10, 2025

that's technically possible, but I would say that getting that right requires multiple squash merge PRs -- you cannot really do it in one shot.

edit: adding some clarity.

If I have 10 commits on my topic branch, we don't run CI to ensure that all 10 are independently deployable -- we only test "the most recent" one.
This is for both practical and economic reasons.

At a minimum, we can say that it would not be possible to write independent source provenance for each commit in the topic branch without at least some (probably a lot of) extra work.
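The distinction the thread keeps circling can be shown with a toy model. This is purely illustrative: revisions are plain strings rather than real git commits, and the function name is made up.

```python
# Toy model of the two merge strategies discussed above. With a
# "merge" strategy every topic-branch revision becomes reachable from
# the protected branch; with "squash" only one new revision (the final
# delta) does, so intermediate revisions are never treated as approved.

def merge_topic(protected, topic, strategy):
    """Return the revisions reachable from the protected context after merging."""
    if strategy == "merge":
        return protected + topic  # all topic revisions become reachable
    if strategy == "squash":
        # A single new revision representing the final delta is added;
        # intermediate topic revisions stay unreachable (unapproved).
        return protected + ["squash(" + "+".join(topic) + ")"]
    raise ValueError(f"unknown strategy: {strategy}")

main = ["r1"]
topic = ["bad-intermediate", "looks-good-final"]

assert "bad-intermediate" in merge_topic(main, topic, "merge")
assert "bad-intermediate" not in merge_topic(main, topic, "squash")
```

The model also reflects the CI point above: only the final state of the topic branch is tested and attested, so only it should enter the protected context.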


</details>
<details><summary>Hide bad change behind good one</summary>
<details><summary>Commit graph attacks</summary>
zachariahcox (Collaborator, Author) left a comment

@arewm @TomHennen

I rewrote this section based on y'all's feedback.

My goal was to remove the wonky git stuff about squashes and focus on reachable revisions following process.

zachariahcox (Collaborator, Author) left a comment

cc: @mlieberman85 we were discussing the topic of squash merges at the standup today. This problem is one of the reasons we should not accept rebase merges at higher slsa levels.

A Member left a comment

Oh, one of the things I've done in a previous life: the attestation is actually an attestation of a particular ordering of the commits. So you are attesting to the change, not to each individual commit; if you roll back a commit, the attestation would no longer be valid since you're just attesting to HEAD, with the assumption that you're attesting to old HEAD -> PR/new HEAD.
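That scheme can be sketched as follows. This is a hypothetical illustration, not an in-toto or SLSA-defined API: the function names are made up, and a plain hash stands in for a real cryptographic signature.

```python
# Hypothetical sketch of attesting to a commit-ordering transition
# (old HEAD -> new HEAD) rather than to individual commits. Rolling
# the branch back to an intermediate commit leaves a HEAD that no
# attestation covers, so verification fails.
import hashlib

def attest(old_head, new_head):
    """Stand-in for signing a statement about the old->new transition."""
    return hashlib.sha256(f"{old_head}->{new_head}".encode()).hexdigest()

def verify(head, old_head, attestation):
    """Check that the current HEAD matches an attested transition."""
    return attest(old_head, head) == attestation

att = attest("r1", "r3")            # attestation issued when the PR merged
assert verify("r3", "r1", att)      # current HEAD matches the attested transition
assert not verify("r2", "r1", att)  # a rolled-back HEAD is not covered
```

The point of the design is exactly what the comment describes: validity attaches to the transition as a whole, so reverting to any intermediate state invalidates it.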

A Member left a comment

I know some folks who hold this to be a hotly debated and contentious topic. It feels right for it to be required at a higher SLSA level because of the stronger protections it guarantees. Some communities may still prefer not to use this approach, limiting the SLSA source level they can achieve.

@arewm arewm (Member) left a comment

Thanks! I left one nit comment outside of the review.

Co-authored-by: Andrew McNamara <[email protected]>
Signed-off-by: Zachariah Cox <[email protected]>
@zachariahcox zachariahcox (Collaborator, Author) commented

Ok! Thanks, everyone, for your input and patience on this one!
I'm going to merge it, and we'll focus on any followups in the next PR 👍

@zachariahcox zachariahcox merged commit 027fb2a into slsa-framework:main Mar 11, 2025
6 checks passed
@github-project-automation github-project-automation bot moved this from 🆕 New to ✅ Done in Issue triage Mar 11, 2025
@zachariahcox zachariahcox deleted the users/zacox/orgthreat branch April 12, 2025 20:19

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

TODO: Need mitigation description for "Software producer intentionally submits bad code" threat

7 participants