Determine source level in go #15
Conversation
1. Ensure the ruleset is actively enforced. 2. Ensure it's been active for a user specified number of days. Signed-off-by: Tom Hennen <[email protected]>
…were active Signed-off-by: Tom Hennen <[email protected]>
Merging this so we can actually try to use it. :)
// Checks to see if the rule meets our requirements.
func checkRule(ctx context.Context, gh_client *github.Client, owner string, repo string, rule *github.RepositoryRule, minTime time.Time) (bool, error) {
	ruleset, _, err := gh_client.Repositories.GetRuleset(ctx, owner, repo, rule.RulesetID, false)
as mentioned elsewhere, this only tells you what's configured now, if a new revision were to be created right now -- it doesn't say anything about the revision under consideration.
This kind of logic made sense in previous iterations of the source track where "the entire repo" was marked compliant or not.
For many good reasons we've tried to move to a revision-based model, where only certain revisions are compliant or not.
Unless the repo is part of an organization, the rule_suites APIs to learn about a specific revision don't really exist yet; you'd have to do some digging to make it work today.
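For reference, a rough sketch (not part of this PR) of what a per-revision lookup against the rule-suites endpoint could look like, using go-github's generic NewRequest/Do helpers. The import version, struct fields, and helper name are illustrative, and as noted above the endpoint may not be usable outside organizations today.

```go
// Illustrative only: query the rule evaluations GitHub recorded for a specific ref
// via GET /repos/{owner}/{repo}/rulesets/rule-suites, rather than reading the
// current ruleset configuration. The struct models just a few response fields.
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/google/go-github/v60/github" // version is a placeholder
)

type ruleSuite struct {
	ID       int64     `json:"id"`
	Ref      string    `json:"ref"`
	AfterSHA string    `json:"after_sha"`
	Result   string    `json:"result"` // "pass", "fail", or "bypass"
	PushedAt time.Time `json:"pushed_at"`
}

// revisionRuleSuites lists the rule evaluations recorded for a given ref.
func revisionRuleSuites(ctx context.Context, gh *github.Client, owner, repo, ref string) ([]*ruleSuite, error) {
	u := fmt.Sprintf("repos/%s/%s/rulesets/rule-suites?ref=%s", owner, repo, ref)
	req, err := gh.NewRequest("GET", u, nil)
	if err != nil {
		return nil, err
	}
	var suites []*ruleSuite
	if _, err := gh.Do(ctx, req, &suites); err != nil {
		return nil, err
	}
	return suites, nil
}
```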
Yup, so I improved how this is done here: https://github.com/slsa-framework/slsa-source-poc/blob/main/sourcetool/pkg/checklevel/checklevel.go#L94
There we check
- The time the rule started being active
- The time the repo 'opted in to SLSA'
- The time the commit occurred
- We check that the commit occurred after the repo opted in to SLSA
- That the rules were enabled at least as long as the repo was opted in to SLSA
I think the net result is that we cover this case?
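Condensed into a small sketch (names here are mine, not the ones used in checklevel.go; only the standard library's time package is needed), the check boils down to two time comparisons:

```go
// ruleCoversCommit is an illustrative condensation of the linked checks: the commit
// must have happened after the repo opted in to SLSA, and the rule must have been
// active for at least that entire window.
func ruleCoversCommit(ruleActiveSince, optedInSince, commitTime time.Time) bool {
	// The commit must have occurred after the repo opted in to SLSA.
	if !commitTime.After(optedInSince) {
		return false
	}
	// The rule must have been enabled at least as long as the repo was opted in,
	// i.e. it became active at or before the opt-in time.
	return !ruleActiveSince.After(optedInSince)
}
```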
func commitPushTime(ctx context.Context, gh_client *github.Client, commit string, owner string, repo string, branch string) (time.Time, error) {
	// Unfortunately the gh_client doesn't have native support for this...
@sbauer is this something we should add?
we should probably enhance this comment.
we need to find a trustworthy timestamp (and we don't trust the commit metadata).
On GitHub, if the repo does not require a merge queue and does require a squash-merge strategy, then:
* each PR will add exactly one commit to the target branch
* that commit will usually be new (meaning it was produced by the PR application and committed-by GitHub)
* that commit will be the after_sha of the ref update to the target branch
In that situation, you can use the /activity endpoint to get the timestamp when GitHub server received the ref-update.
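A hedged sketch of that approach (not what this PR does): find the push whose after SHA is the squash-merge commit and take GitHub's recorded timestamp. It uses go-github's generic NewRequest/Do; pagination is omitted and the struct fields and matching logic are illustrative.

```go
// Illustrative only: read GET /repos/{owner}/{repo}/activity and use the timestamp
// of the push (ref update) that introduced the given commit on the target branch.
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/google/go-github/v60/github" // version is a placeholder
)

type repoActivity struct {
	After        string    `json:"after"`
	Ref          string    `json:"ref"`
	ActivityType string    `json:"activity_type"`
	Timestamp    time.Time `json:"timestamp"`
}

func pushTimeFromActivity(ctx context.Context, gh *github.Client, owner, repo, branch, commit string) (time.Time, error) {
	u := fmt.Sprintf("repos/%s/%s/activity?ref=refs/heads/%s&activity_type=push", owner, repo, branch)
	req, err := gh.NewRequest("GET", u, nil)
	if err != nil {
		return time.Time{}, err
	}
	var activities []*repoActivity
	if _, err := gh.Do(ctx, req, &activities); err != nil {
		return time.Time{}, err
	}
	for _, a := range activities {
		if a.After == commit {
			// When the GitHub server received the ref update that introduced this commit.
			return a.Timestamp, nil
		}
	}
	return time.Time{}, fmt.Errorf("no push activity found for %s on %s", commit, branch)
}
```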
Yeah definitely.
Separately, maybe this tool could require squash merge if that makes things easier?
I think it would be fine for the tool to be more opinionated than the SLSA spec itself.
if err != nil {
	return "", err
}
nonFFGood, err := checkRule(ctx, gh_client, owner, repo, nonFastFowardRule, minTime)
Because this looks at ruleset configuration and not the rules that actually applied to a revision, it can accidentally fail when rulesets are reorganized:
Example scenario: an admin wants to move rule "X" from a repo-level ruleset to an org-level ruleset.
Basic rollout steps might be:
Day 1. create a new org-level ruleset with rule X in "evaluate" mode. During this period, both the org- and repo-level rulesets require rule X, but only one is active.
Day 2. enable the org-level ruleset (changing its updated date). Now both are active; however, each revision is only subject to one evaluation of rule X that satisfies both the org- and repo-level rulesets.
Day 3. disable the repo-level ruleset, leaving only the org-level ruleset to require rule X.
Day 4. delete the repo-level ruleset.
Notes:
- Revisions contributed during that period were always compliant with a requirement to run rule X
- it's important that provenance verification eventually looks at what happened to produce a revision. rulesets themselves can be safely rolled out and may overlap in venn-diagrammy ways such that their updated-at dates may not be sufficient to know what happened.
- Rulesets can be bypassed for specific revisions without disabling or updating the rulesets. This might or might not be important, but seems like something the verification process would want to look at.
Yup, this is definitely a fragile setup at the moment. That seems 'fine' for a PoC but is something we'd absolutely need to fix 'for real'.
One thought I'd had is that this is where Source Level 3 will help us. We can produce the provenance when the commit is created, noting the rules that were enabled at that time. VSA production can then keep all of that in mind, not relying on the GH API but instead relying on what's encoded in the attestation itself.
I think once I get storage worked out I'll be able to prototype the level 3 stuff.
Does that seem like it would work to you?
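Purely as an illustration of that idea (none of this is in the PR, and the field names are made up), the commit-time provenance could record something like:

```go
// Hypothetical payload for a commit-time source provenance attestation. VSA
// generation could later read these fields from the signed attestation instead of
// re-querying the GitHub API. Uses only the standard library's time package.
type sourceProvenance struct {
	Commit    string    `json:"commit"`
	Branch    string    `json:"branch"`
	CreatedAt time.Time `json:"created_at"`
	// Branch rules that were enforced when the commit was accepted,
	// e.g. "deletion", "non_fast_forward", "pull_request".
	EnforcedRules []string `json:"enforced_rules"`
}
```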
This introduces a golang CLI that determines the level instead of using shell scripts.
It also implements the ideas in #13 to ensure the rulesets are active and have been active for a sufficient length of time.
The go program introduced here:
It also starts to introduce support for producing and signing the VSA with Sigstore, but currently stops short because it got too complicated.
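For context on the VSA piece (a sketch only, not taken from this PR's code): the statement being signed would roughly follow the in-toto/SLSA verification_summary shape, something like:

```go
// Rough, illustrative shape of the in-toto statement a VSA would be wrapped in
// before signing. The predicate fields loosely follow the SLSA verification_summary
// format; the exact field set and values here are assumptions, not this PR's code.
type vsaStatement struct {
	Type          string       `json:"_type"`         // "https://in-toto.io/Statement/v1"
	Subject       []subject    `json:"subject"`
	PredicateType string       `json:"predicateType"` // "https://slsa.dev/verification_summary/v1"
	Predicate     vsaPredicate `json:"predicate"`
}

type subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"` // e.g. {"gitCommit": "<sha>"}
}

type vsaPredicate struct {
	Verifier           map[string]string `json:"verifier"` // e.g. {"id": "<verifier URI>"}
	TimeVerified       time.Time         `json:"timeVerified"`
	ResourceURI        string            `json:"resourceUri"`
	VerificationResult string            `json:"verificationResult"` // "PASSED" or "FAILED"
	VerifiedLevels     []string          `json:"verifiedLevels"`     // e.g. ["SLSA_SOURCE_LEVEL_2"]
}
```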