
Commit fbfa5ae

Merge pull request #308 from dbt-labs/release-0.3.19

2 parents f345654 + 27a93a5

File tree

19 files changed (+657 -34 lines)


CHANGELOG.md (+13 -1)

@@ -2,7 +2,19 @@
 
 All notable changes to this project will be documented in this file.
 
-## [Unreleased](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.18...HEAD)
+## [Unreleased](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.19...HEAD)
+
+# [0.3.19](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.18...v0.3.19)
+
+### Fixes
+
+- Allow defining `dbtcloud_databricks_credential` when using global connections, which don't generate an `adapter_id` (see the docs for the resource for more details)
+
+### Changes
+
+- Add the ability to compare changes in a `dbtcloud_job` resource
+- Add deprecation notice for `target_name` in `dbtcloud_databricks_credential` as it can't be set in the UI
+- Make `versionless` the default version for environments, but it can still be changed
 
 # [0.3.18](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.17...v0.3.18)
 

docs/data-sources/job.md (+1)

@@ -29,6 +29,7 @@ description: |-
 - `id` (String) The ID of this resource.
 - `job_completion_trigger_condition` (Set of Object) Which other job should trigger this job when it finishes, and on which conditions. (see [below for nested schema](#nestedatt--job_completion_trigger_condition))
 - `name` (String) Given name for the job
+- `run_compare_changes` (Boolean) Whether the CI job should compare data changes introduced by the code change in the PR.
 - `self_deferring` (Boolean) Whether this job defers on a previous run of itself (overrides value in deferring_job_id)
 - `timeout_seconds` (Number) Number of seconds before the job times out
 - `triggers` (Map of Boolean) Flags for which types of triggers to use, keys of github_webhook, git_provider_webhook, schedule, on_merge
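
For reference, a minimal sketch of reading the new attribute from the data source; the `job_id` value and resource names are placeholders:

```terraform
data "dbtcloud_job" "ci_job" {
  job_id     = 123456
  project_id = dbtcloud_project.dbt_project.id
}

# surfaces whether CI comparison is enabled on this job
output "ci_job_compares_changes" {
  value = data.dbtcloud_job.ci_job.run_compare_changes
}
```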

docs/data-sources/jobs.md (+1)

@@ -61,6 +61,7 @@ Read-Only:
 - `job_type` (String) The type of job (e.g. CI, scheduled)
 - `name` (String) The name of the job
 - `project_id` (Number) The ID of the project
+- `run_compare_changes` (Boolean) Whether the job should compare data changes introduced by the code change in the PR
 - `run_generate_sources` (Boolean) Whether the job tests source freshness
 - `schedule` (Attributes) (see [below for nested schema](#nestedatt--jobs--schedule))
 - `settings` (Attributes) (see [below for nested schema](#nestedatt--jobs--settings))
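
Similarly, a hedged sketch of listing the jobs that have comparison enabled, assuming the plural data source accepts a `project_id` filter:

```terraform
data "dbtcloud_jobs" "project_jobs" {
  project_id = dbtcloud_project.dbt_project.id
}

# names of jobs that compare data changes in CI
output "jobs_with_compare_changes" {
  value = [
    for job in data.dbtcloud_jobs.project_jobs.jobs :
    job.name if job.run_compare_changes
  ]
}
```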

docs/resources/databricks_credential.md (+13 -5)

@@ -13,11 +13,20 @@ description: |-
 ## Example Usage
 
 ```terraform
-# when using the Databricks adapter
+# when using the Databricks adapter with a new `dbtcloud_global_connection`
+# we don't provide an `adapter_id`
+resource "dbtcloud_databricks_credential" "my_databricks_cred" {
+  project_id   = dbtcloud_project.dbt_project.id
+  token        = "abcdefgh"
+  schema       = "my_schema"
+  adapter_type = "databricks"
+}
+
+# when using the Databricks adapter with a legacy `dbtcloud_connection`
+# we provide an `adapter_id`
 resource "dbtcloud_databricks_credential" "my_databricks_cred" {
   project_id   = dbtcloud_project.dbt_project.id
   adapter_id   = dbtcloud_connection.my_databricks_connection.adapter_id
-  target_name  = "prod"
   token        = "abcdefgh"
   schema       = "my_schema"
   adapter_type = "databricks"
@@ -27,7 +36,6 @@ resource "dbtcloud_databricks_credential" "my_databricks_cred" {
 resource "dbtcloud_databricks_credential" "my_spark_cred" {
   project_id   = dbtcloud_project.dbt_project.id
   adapter_id   = dbtcloud_connection.my_databricks_connection.adapter_id
-  target_name  = "prod"
   token        = "abcdefgh"
   schema       = "my_schema"
   adapter_type = "spark"
@@ -39,16 +47,16 @@ resource "dbtcloud_databricks_credential" "my_spark_cred" {
 
 ### Required
 
-- `adapter_id` (Number) Databricks adapter ID for the credential
 - `adapter_type` (String) The type of the adapter (databricks or spark)
 - `project_id` (Number) Project ID to create the Databricks credential in
 - `schema` (String) The schema where to create models
 - `token` (String, Sensitive) Token for Databricks user
 
 ### Optional
 
+- `adapter_id` (Number) Databricks adapter ID for the credential (do not fill in when using global connections, only to be used for connections created with the legacy connection resource `dbtcloud_connection`)
 - `catalog` (String) The catalog where to create models (only for the databricks adapter)
-- `target_name` (String) Target name
+- `target_name` (String, Deprecated) Target name
 
 ### Read-Only
 

docs/resources/environment.md (+3 -3)

@@ -22,7 +22,7 @@ This version of the provider has the `connection_id` as an optional field but it
 
 ```terraform
 resource "dbtcloud_environment" "ci_environment" {
-  // the dbt_version is major.minor.0-latest , major.minor.0-pre or versionless (Beta on 15 Feb 2024, to always be on the latest dbt version)
+  // the dbt_version is major.minor.0-latest, major.minor.0-pre or versionless (the default if not configured)
   dbt_version = "versionless"
   name        = "CI"
   project_id  = dbtcloud_project.dbt_project.id
@@ -48,7 +48,7 @@ resource "dbtcloud_environment" "dev_environment" {
   name          = "Dev"
   project_id    = dbtcloud_project.dbt_project.id
   type          = "development"
-  connection_id = dbtcloud_global_connection.my_other_global_connection
+  connection_id = dbtcloud_global_connection.my_other_global_connection.id
 }
 ```
 
@@ -57,7 +57,6 @@ resource "dbtcloud_environment" "dev_environment" {
 
 ### Required
 
-- `dbt_version` (String) Version number of dbt to use in this environment. It needs to be in the format `major.minor.0-latest` (e.g. `1.5.0-latest`), `major.minor.0-pre` or `versionless`. In a future version of the provider `versionless` will be the default if no version is provided
 - `name` (String) Environment name
 - `project_id` (Number) Project ID to create the environment in
 - `type` (String) The type of environment (must be either development or deployment)
@@ -71,6 +70,7 @@ resource "dbtcloud_environment" "dev_environment" {
 - To avoid Terraform state issues, when using this field, the `dbtcloud_project_connection` resource should be removed from the project or you need to make sure that the `connection_id` is the same in `dbtcloud_project_connection` and in the `connection_id` of the Development environment of the project
 - `credential_id` (Number) Credential ID to create the environment with. A credential is not required for development environments but is required for deployment environments
 - `custom_branch` (String) Which custom branch to use in this environment
+- `dbt_version` (String) Version number of dbt to use in this environment. It needs to be in the format `major.minor.0-latest` (e.g. `1.5.0-latest`), `major.minor.0-pre` or `versionless`. Defaults to `versionless` if no version is provided
 - `deployment_type` (String) The type of environment. Only valid for environments of type 'deployment' and for now can only be 'production', 'staging' or left empty for generic environments
 - `extended_attributes_id` (Number) ID of the extended attributes for the environment
 - `is_active` (Boolean) Whether the environment is active
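
Since `dbt_version` now defaults to `versionless`, it can simply be omitted; a minimal sketch (resource names are illustrative):

```terraform
# dbt_version is omitted, so the environment uses versionless by default
resource "dbtcloud_environment" "staging_environment" {
  name            = "Staging"
  project_id      = dbtcloud_project.dbt_project.id
  type            = "deployment"
  deployment_type = "staging"
  # a credential is required for deployment environments
  credential_id   = dbtcloud_databricks_credential.my_databricks_cred.credential_id
}
```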

docs/resources/job.md (+1)

@@ -122,6 +122,7 @@ resource "dbtcloud_job" "downstream_job" {
 - `is_active` (Boolean) Should always be set to true as setting it to false is the same as creating a job in a deleted state. To create/keep a job in a 'deactivated' state, check the `triggers` config.
 - `job_completion_trigger_condition` (Block Set, Max: 1) Which other job should trigger this job when it finishes, and on which conditions (sometimes referred to as 'job chaining'). (see [below for nested schema](#nestedblock--job_completion_trigger_condition))
 - `num_threads` (Number) Number of threads to use in the job
+- `run_compare_changes` (Boolean) Whether the CI job should compare data changes introduced by the code changes. Requires `deferring_environment_id` to be set. (Advanced CI needs to be activated in the dbt Cloud Account Settings first as well)
 - `run_generate_sources` (Boolean) Flag for whether the job should add a `dbt source freshness` step to the job. The difference between manually adding a step with `dbt source freshness` in the job steps or using this flag is that with this flag, a failed freshness will still allow the following steps to run.
 - `schedule_cron` (String) Custom cron expression for schedule
 - `schedule_days` (List of Number) List of days of week as numbers (0 = Sunday, 7 = Saturday) to execute the job at if running on a schedule
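
A hedged sketch of a CI job using the new flag, assuming Advanced CI is enabled on the account and there is a production environment to defer to (resource names and steps are illustrative):

```terraform
resource "dbtcloud_job" "ci_job" {
  name           = "CI job"
  project_id     = dbtcloud_project.dbt_project.id
  environment_id = dbtcloud_environment.ci_environment.environment_id
  execute_steps  = ["dbt build -s state:modified+"]

  # run_compare_changes requires deferring_environment_id to be set
  deferring_environment_id = dbtcloud_environment.prod_environment.environment_id
  run_compare_changes      = true

  triggers = {
    "github_webhook" : true,
    "git_provider_webhook" : true,
    "schedule" : false,
    "on_merge" : false
  }
}
```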

examples/resources/dbtcloud_databricks_credential/resource.tf (+11 -3)

@@ -1,8 +1,17 @@
-# when using the Databricks adapter
+# when using the Databricks adapter with a new `dbtcloud_global_connection`
+# we don't provide an `adapter_id`
+resource "dbtcloud_databricks_credential" "my_databricks_cred" {
+  project_id   = dbtcloud_project.dbt_project.id
+  token        = "abcdefgh"
+  schema       = "my_schema"
+  adapter_type = "databricks"
+}
+
+# when using the Databricks adapter with a legacy `dbtcloud_connection`
+# we provide an `adapter_id`
 resource "dbtcloud_databricks_credential" "my_databricks_cred" {
   project_id   = dbtcloud_project.dbt_project.id
   adapter_id   = dbtcloud_connection.my_databricks_connection.adapter_id
-  target_name  = "prod"
   token        = "abcdefgh"
   schema       = "my_schema"
   adapter_type = "databricks"
@@ -12,7 +21,6 @@ resource "dbtcloud_databricks_credential" "my_databricks_cred" {
 resource "dbtcloud_databricks_credential" "my_spark_cred" {
   project_id   = dbtcloud_project.dbt_project.id
   adapter_id   = dbtcloud_connection.my_databricks_connection.adapter_id
-  target_name  = "prod"
   token        = "abcdefgh"
   schema       = "my_schema"
   adapter_type = "spark"

examples/resources/dbtcloud_environment/resource.tf (+2 -2)

@@ -1,5 +1,5 @@
 resource "dbtcloud_environment" "ci_environment" {
-  // the dbt_version is major.minor.0-latest , major.minor.0-pre or versionless (Beta on 15 Feb 2024, to always be on the latest dbt version)
+  // the dbt_version is major.minor.0-latest, major.minor.0-pre or versionless (the default if not configured)
   dbt_version = "versionless"
   name        = "CI"
   project_id  = dbtcloud_project.dbt_project.id
@@ -25,5 +25,5 @@ resource "dbtcloud_environment" "dev_environment" {
   name          = "Dev"
   project_id    = dbtcloud_project.dbt_project.id
   type          = "development"
-  connection_id = dbtcloud_global_connection.my_other_global_connection
+  connection_id = dbtcloud_global_connection.my_other_global_connection.id
 }
