
Commit 44530f9 · committed Mar 14, 2025
added documentation about terraform.lock and backendless functionality
1 parent b663d15

File tree: 3 files changed, +56 −2 lines
docs/ce/howto/backendless-mode.mdx (+22 −1)
@@ -10,7 +10,9 @@ Digger works best as a 2-piece solution:

You can, however, still use Digger's most basic features as a standalone action without a backend. To do that, set the following option in your workflow configuration:

```
# add this to .github/workflows/digger_backendless_workflow.yml as an argument to the digger step
no-backend: true
```

You'd also need to add `pull_request` and `issue_comment` workflow triggers:
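As a sketch, a minimal backendless workflow could look like the following. The file name, action version tag, and permissions block here are illustrative assumptions, not values taken from the Digger docs; adjust them to your setup:

```yaml
# .github/workflows/digger_backendless_workflow.yml (hypothetical file name)
name: Digger (backendless)

on:
  pull_request:
    types: [opened, synchronize]
  issue_comment:
    types: [created]

jobs:
  digger:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # so Digger can post plan output as PR comments
      id-token: write        # only if you authenticate to your cloud via OIDC
    steps:
      - uses: actions/checkout@v4
      # pin to the Digger release you actually use; the tag below is a placeholder
      - uses: diggerhq/digger@vX.Y.Z
        with:
          no-backend: true
```

The `pull_request` trigger covers plans on pushed changes, while `issue_comment` lets Digger react to commands such as comments on the PR.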
@@ -27,10 +29,29 @@ on:

# Limitations

Historically this was the original way of running Digger. The initial version, called "tfrun", didn't have any backend; it was just a GitHub Action.
But it quickly became apparent that without some sort of orchestration there's only so much that can be done:

- No concurrency: all plans / applies need to run sequentially, which is slow
- The action starts on every push or comment, often just to detect that there are no changes. That's expensive, especially in large repos.
- Clashing applies from other jobs will fail, as they cannot be queued
- Buckets / tables for PR-level locks need to be configured manually in your cloud account
- Comments and status checks will be updated with a delay

For many small teams this is more than enough, and it is quite easy to set up. If it works for you, please don't hesitate to use Digger in this manner.
42+
43+
# How it works
44+
45+
In order to function without a backend digger still needs store information about the PR locks so that it does not run "terraform plan"
46+
in 2 different PRs for the same digger project (since that would cause them stepping on top of eachother). In order to achieve that, digger will
47+
create a small resource in your cloud account to store which PR locked which project. The type of resource varies depending on the cloud account, here is what gets created:
48+
49+
| Cloud Provider | Resource Type |
50+
|----------------|-----------------|
51+
| AWS | DynamoDB |
52+
| GCP | GCP Bucket |
53+
| Azure | Storage Tables |
54+
55+
In case of AWS, during the first run digger will create this resource for you. However in case of GCP and azure you need to create it yourself and supply it as an argument.
56+
57+
After the resource is created digger will continue to use it for subsequent runs in order to store information about the locks and function correctly.
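For GCP and Azure, the lock resource therefore has to exist before the first run. A hedged sketch of creating it with the standard cloud CLIs — the bucket, table, and storage-account names below are placeholders you would then pass to Digger, not defaults documented anywhere:

```
# GCP: create a bucket to hold Digger's PR locks (name and region are placeholders)
gcloud storage buckets create gs://my-digger-locks --location=us-central1

# Azure: create a Storage Table for the locks (names are placeholders)
az storage table create --name diggerlocks --account-name mystorageaccount
```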

docs/ce/reference/terraform.lock.mdx (+32)
@@ -0,0 +1,32 @@

---
title: "Handling the terraform.lock file"
---

The `.terraform.lock.hcl` file is an important part of Terraform's dependency management system.
Digger does not currently take opinions on how you manage `.terraform.lock.hcl` files. However, please be aware that since Digger runs Terraform
within ephemeral jobs, if the lock file gets updated within a job, it is not going to be committed back to the PR.
Having said that, Digger does not currently pass any `-upgrade` flag during init. It is therefore recommended that you use semantic versions
for your Terraform providers and commit the `.terraform.lock.hcl` file manually. Then, when updating provider versions, you can run `terraform init -upgrade` to update the file,
and commit it to the PR where you are upgrading provider versions.
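The provider-upgrade flow described above can be sketched as follows (the branch and commit message are illustrative):

```
# deliberately upgrade providers within their version constraints
terraform init -upgrade

# review what changed in the lock file, then commit it to the upgrade PR
git diff .terraform.lock.hcl
git add .terraform.lock.hcl
git commit -m "chore: upgrade terraform providers"
```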
12+
Here are some general best practices for working with this file:
13+
14+
## Core Best Practices
15+
16+
1. **Always commit to version control** - The lock file should be committed to your version control system (Git, etc.) to ensure all team members and CI/CD pipelines use exactly the same provider versions.
17+
18+
2. **Don't edit manually** - The lock file is generated and maintained by Terraform. Manual edits can break the hashing verification system.
19+
20+
3. **Use in CI/CD pipelines** - Ensure your automation uses the lock file by not running `terraform init -upgrade` in pipelines.
21+
22+
4. **Update deliberately** - Use `terraform init -upgrade` when you intentionally want to update providers, not as part of regular workflows.
23+
24+
## Additional Recommendations
25+
26+
5. **Review changes** - When the lock file changes after an upgrade, review the differences to understand what provider versions have changed.
27+
28+
6. **Test after updates** - After updating provider versions, thoroughly test your infrastructure code to catch any breaking changes.
29+
30+
7. **Use provider constraints** - In your Terraform configuration, specify version constraints for providers to control which versions can be selected during upgrades.
31+
32+
8. **Understand cross-platform hashes** - The lock file contains hashes for different platforms. If your team uses multiple platforms, you may need to run `terraform providers lock` with the `-platform` flag to add hashes for other platforms.
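For example, to record hashes for Linux CI runners alongside macOS and Windows developer machines in one lock file, you can run:

```
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_arm64 \
  -platform=windows_amd64
```

This updates `.terraform.lock.hcl` in place; commit the result so every platform verifies providers against the same hashes.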

docs/mint.json (+2 −1)

@@ -117,7 +117,8 @@
       "pages": [
         "ce/reference/digger.yml",
         "ce/reference/action-inputs",
-        "ce/reference/api"
+        "ce/reference/api",
+        "ce/reference/terraform.lock"
       ]
     },
     {
