# Backendless mode
Digger works best as a 2-piece solution: a CI action plus an orchestrator backend.
You can, however, still use the most basic features of Digger as a standalone action without a backend. To do that, set the following option in your workflow configuration:
```
# add this to .github/actions/digger_backendless_workflow.yml as an argument to the digger step
no-backend: true
```
You'd also need to add `pull_request` and `issue_comment` workflow triggers:
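For example, a minimal sketch of those triggers (the exact trigger block isn't shown on this page, and the event types listed here are typical choices rather than requirements):

```
on:
  pull_request:
    types: [ opened, synchronize, reopened, closed ]
  issue_comment:
    types: [ created ]
```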
# Limitations
Historically, this was the original way of running Digger. The initial version, called "tfrun", didn't have any backend; it was just a GitHub Action.

But it quickly became apparent that without some sort of orchestration, there's only so much that can be done:
- No concurrency; all plans / applies need to run sequentially, which is slow
- Action starts on every push or comment, often just to detect that there are no changes. That's expensive, especially in large repos.
- Clashing applies from other jobs will fail as they cannot be queued
- Buckets / tables for PR-level locks need to be configured manually in your cloud account
- Comments and status checks will be updated with a delay
For many small teams this is more than enough, and it is quite easy to set up. If it works for you, don't hesitate to use Digger in this manner.
# How it works
In order to function without a backend, Digger still needs to store information about PR locks so that it does not run `terraform plan` in two different PRs for the same Digger project (since that would cause them to step on each other). To achieve that, Digger will create a small resource in your cloud account that records which PR locked which project. The type of resource varies depending on the cloud provider; here is what gets created:
48
+
49
+
| Cloud Provider | Resource Type  |
|----------------|----------------|
| AWS            | DynamoDB       |
| GCP            | GCP Bucket     |
| Azure          | Storage Tables |
In the case of AWS, Digger will create this resource for you during the first run. For GCP and Azure, however, you need to create the resource yourself and supply it as an argument.
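As a rough sketch only, supplying a pre-created GCP bucket to the Digger step might look like the following. The input name `google-lock-bucket`, the bucket name, and the version tag are illustrative assumptions, not taken from this page; check the Digger action's documented inputs for the actual parameter names:

```
- uses: diggerhq/digger@vX.Y.Z   # replace with a real release tag
  with:
    no-backend: true
    # hypothetical input name; consult the action's inputs for the real one
    google-lock-bucket: my-digger-locks-bucket
```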
After the resource is created, Digger will continue to use it on subsequent runs to store lock information and function correctly.
# Terraform lock files

The `.terraform.lock.hcl` file is an important part of Terraform's dependency management system.
Digger does not currently take opinions on how you manage `.terraform.lock.hcl` files. However, be aware that since Digger runs Terraform within ephemeral jobs, if the lock file gets updated within a job it is not going to be committed back to the PR.

Having said that, Digger does not currently pass an `-upgrade` flag during `terraform init`. It is therefore recommended that you use semantic version constraints for your Terraform providers and commit the `.terraform.lock.hcl` file manually. Then, when updating provider versions, you can run `terraform init -upgrade` to regenerate the file, and commit it to the PR where you are upgrading the provider versions.
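Sketched out as commands, that recommended flow might look like this (run locally, not in CI):

```
# bump the provider version constraints in your *.tf files first, then:
terraform init -upgrade        # re-resolves providers and rewrites .terraform.lock.hcl
git add .terraform.lock.hcl
git commit -m "upgrade provider versions"
```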
Here are some general best practices for working with this file:
## Core Best Practices
1. **Always commit to version control** - The lock file should be committed to your version control system (Git, etc.) to ensure all team members and CI/CD pipelines use exactly the same provider versions.
2. **Don't edit manually** - The lock file is generated and maintained by Terraform. Manual edits can break the hash verification system.
3. **Use in CI/CD pipelines** - Ensure your automation uses the lock file by not running `terraform init -upgrade` in pipelines.
4. **Update deliberately** - Use `terraform init -upgrade` when you intentionally want to update providers, not as part of regular workflows.
## Additional Recommendations
5. **Review changes** - When the lock file changes after an upgrade, review the differences to understand what provider versions have changed.
6. **Test after updates** - After updating provider versions, thoroughly test your infrastructure code to catch any breaking changes.
7. **Use provider constraints** - In your Terraform configuration, specify version constraints for providers to control which versions can be selected during upgrades (see the sketch after this list).
8. **Understand cross-platform hashes** - The lock file contains hashes for different platforms. If your team uses multiple platforms, you may need to run `terraform providers lock` with the `-platform` flag to add hashes for other platforms.
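To illustrate point 7, here is a minimal sketch of a provider version constraint; the provider and constraint shown are arbitrary examples, not taken from this page:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # pessimistic constraint: allows any 5.x release
    }
  }
}
```

For point 8, running `terraform providers lock -platform=linux_amd64 -platform=darwin_arm64` records hashes for additional platforms in the lock file.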