subcategory: Compute
-> **Note** If you have a fully automated setup with workspaces created by databricks_mws_workspaces or azurerm_databricks_workspace, please make sure to add the `depends_on` attribute in order to prevent `default auth: cannot configure default credentials` errors.
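For example, assuming the workspace is created by a databricks_mws_workspaces resource named `this` (a hypothetical name for illustration), the data source can be made to wait for the workspace:

```hcl
data "databricks_clusters" "all" {
  # ensure the workspace exists before listing its clusters
  depends_on = [databricks_mws_workspaces.this]
}
```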
Retrieves a list of databricks_cluster ids that were created by Terraform or manually, with or without a databricks_cluster_policy.
Retrieve cluster IDs for all clusters:
```hcl
data "databricks_clusters" "all" {
}
```
Retrieve cluster IDs for all clusters having "Shared" in the cluster name:
```hcl
data "databricks_clusters" "all_shared" {
  cluster_name_contains = "shared"
}
```
- `cluster_name_contains` - (Optional) Only return databricks_cluster ids that match the given name string.
- `filter_by` - (Optional) Filters to apply to the listed clusters. See filter_by Configuration Block below for details.
The `filter_by` block controls the filtering of the listed clusters. It supports the following arguments:
- `cluster_sources` - (Optional) List of cluster sources to filter by. Possible values are `API`, `JOB`, `MODELS`, `PIPELINE`, `PIPELINE_MAINTENANCE`, `SQL`, and `UI`.
- `cluster_states` - (Optional) List of cluster states to filter by. Possible values are `RUNNING`, `PENDING`, `RESIZING`, `RESTARTING`, `TERMINATING`, `TERMINATED`, `ERROR`, and `UNKNOWN`.
- `is_pinned` - (Optional) Whether to filter by pinned clusters.
- `policy_id` - (Optional) Filter by databricks_cluster_policy id.
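As an illustration, the arguments above can be combined to list only running clusters that were created through the UI (the data source name here is arbitrary):

```hcl
data "databricks_clusters" "ui_running" {
  filter_by {
    cluster_sources = ["UI"]
    cluster_states  = ["RUNNING"]
  }
}
```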
This data source exports the following attributes:
- `ids` - list of databricks_cluster ids
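The exported `ids` can then be referenced elsewhere in the configuration, for example to expose them as an output (the output name is arbitrary):

```hcl
output "all_cluster_ids" {
  value = data.databricks_clusters.all.ids
}
```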
The following resources are used in the same context:
- End to end workspace management guide.
- databricks_cluster to create Databricks Clusters.
- databricks_cluster_policy to create a databricks_cluster policy, which limits the ability to create clusters based on a set of rules.
- databricks_instance_pool to manage instance pools to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
- databricks_job to manage Databricks Jobs to run non-interactive code in a databricks_cluster.
- databricks_library to install a library on a databricks_cluster.
- databricks_pipeline to deploy Delta Live Tables.