Hello,
I get the following error when trying to import a cluster policy:
2022-05-25 12:49:15 [INFO] ╷
2022-05-25 12:49:15 [INFO] │ Error: Invalid reference
2022-05-25 12:49:15 [INFO] │
2022-05-25 12:49:15 [INFO] │ on databricks_cluster_policy_C9628DA3D2000019.tf.json line 5, in resource.databricks_cluster_policy.databricks_cluster_policy_C9628DA3D2000019:
2022-05-25 12:49:15 [INFO] │ 5: "definition": ""
2022-05-25 12:49:15 [INFO] │
2022-05-25 12:49:15 [INFO] │ A reference to a resource type must be followed by at least one attribute
2022-05-25 12:49:15 [INFO] │ access, specifying the resource name.
2022-05-25 12:49:15 [INFO] ╵
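A databricks_cluster_policy resource needs a non-empty definition: the policy document serialized as an escaped JSON string. For comparison, here is a minimal sketch of what the generated .tf.json presumably should contain; the policy name and body are hypothetical, and the "//" property is just a Terraform-JSON comment:

{
  "resource": {
    "databricks_cluster_policy": {
      "databricks_cluster_policy_C9628DA3D2000019": {
        "//": "hypothetical example; definition must be a non-empty JSON string",
        "name": "example-policy",
        "definition": "{\"spark_version\": {\"type\": \"fixed\", \"value\": \"10.4.x-scala2.12\"}}"
      }
    }
  }
}

Instead, the exporter is writing "definition": "", which terraform validate rejects.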
I also get this error when importing a cluster. Note that I have no global init scripts created:
command: terraform validate
2022-05-25 12:18:20 [INFO] ╷
2022-05-25 12:18:20 [INFO] │ Error: Reference to undeclared resource
2022-05-25 12:18:20 [INFO] │
2022-05-25 12:18:20 [INFO] │ on databricks_cluster_0525_145639_efwf0ccj.tf.json line 11, in resource.databricks_cluster.databricks_cluster_0525_145639_efwf0ccj.depends_on:
2022-05-25 12:18:20 [INFO] │ 11: "databricks_global_init_script.databricks_global_init_scripts"
2022-05-25 12:18:20 [INFO] │
2022-05-25 12:18:20 [INFO] │ A managed resource "databricks_global_init_script"
2022-05-25 12:18:20 [INFO] │ "databricks_global_init_scripts" has not been declared in the root module.
2022-05-25 12:18:20 [INFO] ╵
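Trimmed to the failing part, the generated cluster file presumably looks something like this (all other cluster attributes elided):

{
  "resource": {
    "databricks_cluster": {
      "databricks_cluster_0525_145639_efwf0ccj": {
        "//": "other cluster attributes elided",
        "depends_on": [
          "databricks_global_init_script.databricks_global_init_scripts"
        ]
      }
    }
  }
}

In Terraform, every depends_on entry must name a declared resource, so hand-deleting this depends_on list should clear the error; presumably the exporter ought to omit it whenever no databricks_global_init_script resources are exported.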
Here is the export config I used:
# Name the configuration set; this can be used to track multiple configuration runs and changes
name: gbx_snapshots
# Add this value if you want all the groups, users and service principals to be parameterized so you can map
# them to another value using tf_vars
parameterize_permissions: true
objects:
notebook:
# Notebook path can be a string, a list or a YAML items collection (multiple subgroups starting with - )
notebook_path: "/Users"
# In workspaces you may have deleted users who leave behind a trail of created notebooks. Setting this
# to true prevents their notebooks from being exported. It is optional and defaults to false. Set it to
# true if you want the sync tool to skip them.
exclude_deleted_users: true
# Use custom map vars to set up a new location
# custom_map_vars:
# path: "/Users/%{DATA:variable}/%{GREEDYDATA}"
# Certain paths can be excluded from being exported via the exclude_path field. Make sure to use
# glob syntax to specify all paths.
# exclude_path:
# - "/Users/**" # Ignore all paths within the users folder
# - "/tmp/**" # Ignore all files in the tmp directory
global_init_script:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
# (A sketch of the databricks_global_init_script resource this section should generate appears after the config.)
patterns:
- "*"
cluster_policy:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
patterns:
- "*"
# dbfs_file:
# DBFS path can be a string or a set of YAML items (multiple subgroups starting with - )
# dbfs_path:
# - "dbfs:/tests"
# - "dbfs:/databricks/init_scripts"
# Certain paths can be excluded from being exported via the exclude_path field. Make sure to use
# glob syntax to specify all paths. Make sure all paths start with / and not dbfs:/.
# exclude_path:
# - "**.whl" # Ignore all wheel files
# - "**.jar" # Ignore all jar files
# - "/tmp/**" # Ignore all files in the tmp directory
instance_pool:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
patterns:
- "*"
# secret:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
# patterns:
# - "*"
cluster:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
patterns:
- "*"
# Use this to pin the first twenty clusters (twenty is a limit set by the Databricks platform).
# Pinning can help prevent your clusters from disappearing after 30 days in a terminated state.
# pin_first_20: false
# Filter by applying regular expressions to cluster_spec fields to select a set of clusters
# by:
# cluster_name:
# - ".*fun.*"
job:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
patterns:
- "*"
## The following options allow you to set static variables, which must then be provided at runtime, for
## clusters, instance pools, and policies
#convert_existing_cluster_to_var: true
#convert_new_cluster_instance_pool_to_var: true
#convert_new_cluster_cluster_policy_to_var: true
# Filter by applying regular expressions to job settings to select a set of jobs
# by:
# settings.existing_cluster_id:
# - ".*fun.*"
# identity:
# Pattern matching will be implemented in the future; for now, make sure you have "*" here.
# patterns:
# - "*"
# Set this to true or false to set a default for the users' active field. Omitting it will just use the source value
# set_all_users_active: false
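As noted in the global_init_script section above, the depends_on in the second error expects the exporter to have generated a resource along these lines. This is only a sketch assuming a global init script existed to export; the name and source are hypothetical:

{
  "resource": {
    "databricks_global_init_script": {
      "databricks_global_init_scripts": {
        "//": "hypothetical sketch of the resource the depends_on expects",
        "name": "example-init-script",
        "source": "${path.module}/files/example-init-script.sh",
        "enabled": true
      }
    }
  }
}

Since my workspace has no global init scripts, nothing like this is generated, yet the cluster file still references it.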