- Allow defining some `dbtcloud_databricks_credential` when using global connections, which don't generate an `adapter_id` (see the docs for the resource for more details)
### Changes
- Add the ability to compare changes in a `dbtcloud_job` resource
- Add deprecation notice for `target_name` in `dbtcloud_databricks_credential` as those can't be set in the UI
- Make `versionless` the default version for environments; it can still be changed
### `docs/data-sources/job.md`
- `id` (String) The ID of this resource.
- `job_completion_trigger_condition` (Set of Object) Which other job should trigger this job when it finishes, and on which conditions. (see [below for nested schema](#nestedatt--job_completion_trigger_condition))
- `name` (String) Given name for the job
- `run_compare_changes` (Boolean) Whether the CI job should compare data changes introduced by the code change in the PR.
- `self_deferring` (Boolean) Whether this job defers on a previous run of itself (overrides value in deferring_job_id)
- `timeout_seconds` (Number) Number of seconds before the job times out
- `triggers` (Map of Boolean) Flags for which types of triggers to use, keys of github_webhook, git_provider_webhook, schedule, on_merge
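A minimal usage sketch for the data source, assuming it accepts `job_id` and `project_id` arguments; the IDs and names below are placeholders, not values from this changelog.

```terraform
# Hypothetical sketch: look up an existing job and expose whether
# Advanced CI compare is enabled on it. IDs are placeholders.
data "dbtcloud_job" "ci_job" {
  job_id     = 123456
  project_id = 654321
}

output "ci_job_compares_changes" {
  value = data.dbtcloud_job.ci_job.run_compare_changes
}
```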
- `adapter_id` (Number) Databricks adapter ID for the credential
- `adapter_type` (String) The type of the adapter (databricks or spark)
- `project_id` (Number) Project ID to create the Databricks credential in
- `schema` (String) The schema where to create models
- `token` (String, Sensitive) Token for Databricks user

### Optional
- `adapter_id` (Number) Databricks adapter ID for the credential (do not fill in when using global connections, only to be used for connections created with the legacy connection resource `dbtcloud_connection`)
- `catalog` (String) The catalog where to create models (only for the databricks adapter)
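A sketch of a credential for a project on a global connection, per the change above: `adapter_id` is intentionally omitted. The resource names, variable, and schema/catalog values are illustrative assumptions.

```terraform
# Hypothetical sketch: Databricks credential for a project that uses a
# global connection, so adapter_id is left unset.
resource "dbtcloud_databricks_credential" "prod_credential" {
  project_id   = dbtcloud_project.my_project.id
  adapter_type = "databricks"
  token        = var.databricks_token
  schema       = "analytics"
  catalog      = "main"
}
```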
- `name` (String) Environment name
- `project_id` (Number) Project ID to create the environment in
- `type` (String) The type of environment (must be either development or deployment)
- To avoid Terraform state issues, when using this field, the `dbtcloud_project_connection` resource should be removed from the project, or you need to make sure that the `connection_id` is the same in `dbtcloud_project_connection` and in the `connection_id` of the Development environment of the project
- `credential_id` (Number) Credential ID to create the environment with. A credential is not required for development environments but is required for deployment environments
- `custom_branch` (String) Which custom branch to use in this environment
- `dbt_version` (String) Version number of dbt to use in this environment. It needs to be in the format `major.minor.0-latest` (e.g. `1.5.0-latest`), `major.minor.0-pre` or `versionless`. Defaults to `versionless` if no version is provided
- `deployment_type` (String) The type of environment. Only valid for environments of type 'deployment' and for now can only be 'production', 'staging' or left empty for generic environments
- `extended_attributes_id` (Number) ID of the extended attributes for the environment
- `is_active` (Boolean) Whether the environment is active
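A sketch of a deployment environment under the new default: `dbt_version` is omitted so the environment falls back to `versionless`. The project and credential references are hypothetical.

```terraform
# Hypothetical sketch: production deployment environment.
# dbt_version is not set, so it defaults to "versionless".
resource "dbtcloud_environment" "prod" {
  project_id      = dbtcloud_project.my_project.id
  name            = "Production"
  type            = "deployment"
  deployment_type = "production"
  credential_id   = dbtcloud_databricks_credential.prod_credential.credential_id
}
```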
- `is_active` (Boolean) Should always be set to true, as setting it to false is the same as creating a job in a deleted state. To create or keep a job in a 'deactivated' state, check the `triggers` config.
- `job_completion_trigger_condition` (Block Set, Max: 1) Which other job should trigger this job when it finishes, and on which conditions (sometimes referred to as 'job chaining'). (see [below for nested schema](#nestedblock--job_completion_trigger_condition))
- `num_threads` (Number) Number of threads to use in the job
- `run_compare_changes` (Boolean) Whether the CI job should compare data changes introduced by the code changes. Requires `deferring_environment_id` to be set. (Advanced CI needs to be activated in the dbt Cloud Account Settings first as well)
- `run_generate_sources` (Boolean) Flag for whether the job should add a `dbt source freshness` step to the job. The difference between manually adding a step with `dbt source freshness` in the job steps and using this flag is that, with this flag, a failed freshness check will still allow the following steps to run.
- `schedule_cron` (String) Custom cron expression for schedule
- `schedule_days` (List of Number) List of days of week as numbers (0 = Sunday, 7 = Saturday) to execute the job at if running on a schedule
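Putting the `run_compare_changes` attributes together, a hedged sketch of a CI job with Advanced CI compare enabled; the project, environment references, and job steps are illustrative assumptions, not values from this document.

```terraform
# Hypothetical sketch: CI job with data-change comparison enabled.
# run_compare_changes requires deferring_environment_id, and Advanced CI
# must first be activated in the dbt Cloud Account Settings.
resource "dbtcloud_job" "ci_job" {
  project_id               = dbtcloud_project.my_project.id
  environment_id           = dbtcloud_environment.ci.environment_id
  name                     = "CI job"
  execute_steps            = ["dbt build -s state:modified+"]
  deferring_environment_id = dbtcloud_environment.prod.environment_id
  run_compare_changes      = true
  triggers = {
    github_webhook       = true
    git_provider_webhook = true
    schedule             = false
    on_merge             = false
  }
}
```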