Description
Currently, Digger supports only two types of policies:
- Plan policy (checks the plan output)
- Access policy (checks who can do what, e.g. whether or not an apply can run)
But what if a malicious actor tries to run a plan using a provider that contains a data source executing arbitrary code? Checking the `required_providers` block is not enough; providers can also be referenced by external modules.
More details in the Doordash Engineering blog.
Workaround via Inline Policies
A workaround similar to what the Doordash team did with Atlantis is to use custom commands and inline policies in the Digger workflow:
```yaml
workflows:
  with-preplan-conftest:
    plan:
      steps:
        - init
        - run: |
            conftest test \
              --update s3::https://s3.amazonaws.com/bucket/opa-rules \
              --namespace terraform.providers .terraform.lock.hcl
        - plan
```
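For reference, the OPA rules pulled from the S3 bucket could look roughly like the sketch below. The allow-list contents are made up, and `input.provider` assumes conftest's HCL parser keys the lock file's provider blocks by their full source address:

```rego
package terraform.providers

# Hypothetical allow-list of provider source addresses.
allowed_providers := {
    "registry.terraform.io/hashicorp/aws",
    "registry.terraform.io/hashicorp/random",
}

# Deny any provider pinned in .terraform.lock.hcl that is not on the allow-list.
deny[msg] {
    some provider
    input.provider[provider]
    not allowed_providers[provider]
    msg := sprintf("provider %q is not in the allow-list", [provider])
}
```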
However, this approach has the same drawbacks as any other use case for inline policies:
- manual scripting of the custom step
- policies need to be updated in the S3 bucket separately
- no way to nicely show the errors to the user

From Digger's perspective, an inline policy is just a script; Digger doesn't know anything about what the script does. So this fixes the security issue, but not in the most helpful way.
Proposed solution A: Lockfile Policy
Digger should support a third kind of policy, a Lockfile Policy, alongside the Plan and Access policies in the management repo.
- Run after `terraform init` but before both `terraform plan` and `terraform apply`
- OPA will be passed the contents of `.terraform.lock.hcl` as `lockfile` input
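As a sketch, such a policy could be a Rego rule along these lines. The package name and the assumption that Digger exposes the lock file's provider addresses as a list under `input.lockfile.providers` are illustrative only; the exact input shape is an open design question:

```rego
package digger

# Hypothetical allow-list, same idea as in the conftest rule above.
allowed_providers := {
    "registry.terraform.io/hashicorp/aws",
}

# Fail the run if the lock file pins a provider that is not allow-listed.
deny[msg] {
    provider := input.lockfile.providers[_]
    not allowed_providers[provider]
    msg := sprintf("provider %q is not allowed by the lockfile policy", [provider])
}
```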
Proposed solution B: `allowed_providers` option in digger.yml
Or maybe a policy is overkill for this; we could just check the `.terraform.lock.hcl` file directly, without using OPA, before every plan and apply. It would be faster and easier to set up, and provider checking looks like the only realistic use case for this anyway. This would be similar to #1252 (Atlantis-style `apply_requirements`), which we are yet to implement.
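As a rough sketch of what that could look like in digger.yml (the option name and its placement at the project level are assumptions, not a final design):

```yaml
projects:
  - name: production
    dir: prod
    # Hypothetical option: fail plan/apply if .terraform.lock.hcl pins
    # a provider outside this list.
    allowed_providers:
      - registry.terraform.io/hashicorp/aws
      - registry.terraform.io/hashicorp/random
```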