databricks.ClusterPolicy
This resource creates a cluster policy, which limits the ability to create clusters based on a set of rules. The policy rules limit the attributes or attribute values available for cluster creation. Cluster policies have ACLs that limit their use to specific users and groups. Only admin users can create, edit, and delete policies; admin users also have access to all policies.
This resource can only be used with a workspace-level provider!
Cluster policies let you:
- Limit users to creating clusters with prescribed settings.
- Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).
- Control cost by limiting per-cluster maximum cost (by setting limits on attributes whose values contribute to hourly price).
Cluster policy permissions limit which policies a user can select in the Policy drop-down when the user creates a cluster:
- If no policies have been created in the workspace, the Policy drop-down does not display.
- A user who has the cluster create permission can select the Free form policy and create fully-configurable clusters.
- A user who has both the cluster create permission and access to cluster policies can select the Free form policy and the policies they have access to.
- A user who has access only to cluster policies can select the policies they have access to.
Example Usage
Let us take a look at an example of how you can manage two teams: Marketing and Data Engineering. In the following scenario we want the marketing team to have a really good query experience, so we enable the Delta cache for them. On the other hand, we want the data engineering team to be able to use bigger clusters, so we increase the DBUs per hour that they can spend. This strategy lets both marketing users and data engineering users work with Databricks in a self-service manner, while giving each team a different experience with regard to security and performance. And if you later need to add more global settings, you can propagate them through the “base cluster policy”.
The shared base cluster policy module (modules/base-cluster-policy/main.tf) could look like:
import * as pulumi from "@pulumi/pulumi";
import * as databricks from "@pulumi/databricks";
import * as std from "@pulumi/std";
const config = new pulumi.Config();
// Team that performs the work
const team = config.requireObject<any>("team");
// Cluster policy overrides
const policyOverrides = config.requireObject<any>("policyOverrides");
const defaultPolicy = {
dbus_per_hour: {
type: "range",
maxValue: 10,
},
autotermination_minutes: {
type: "fixed",
value: 20,
hidden: true,
},
"custom_tags.Team": {
type: "fixed",
value: team,
},
};
const fairUse = new databricks.ClusterPolicy("fair_use", {
name: `${team} cluster policy`,
definition: std.merge({
input: [
defaultPolicy,
policyOverrides,
],
}).then(invoke => JSON.stringify(invoke.result)),
libraries: [
{
pypi: {
"package": "databricks-sdk==0.12.0",
},
},
{
maven: {
coordinates: "com.oracle.database.jdbc:ojdbc8:XXXX",
},
},
],
});
const canUseClusterPolicyinstanceProfile = new databricks.Permissions("can_use_cluster_policyinstance_profile", {
clusterPolicyId: fairUse.id,
accessControls: [{
groupName: team,
permissionLevel: "CAN_USE",
}],
});
import pulumi
import json
import pulumi_databricks as databricks
import pulumi_std as std
config = pulumi.Config()
# Team that performs the work
team = config.require_object("team")
# Cluster policy overrides
policy_overrides = config.require_object("policyOverrides")
default_policy = {
"dbus_per_hour": {
"type": "range",
"maxValue": 10,
},
"autotermination_minutes": {
"type": "fixed",
"value": 20,
"hidden": True,
},
"custom_tags.Team": {
"type": "fixed",
"value": team,
},
}
fair_use = databricks.ClusterPolicy("fair_use",
name=f"{team} cluster policy",
definition=json.dumps(std.merge(input=[
default_policy,
policy_overrides,
]).result),
libraries=[
{
"pypi": {
"package": "databricks-sdk==0.12.0",
},
},
{
"maven": {
"coordinates": "com.oracle.database.jdbc:ojdbc8:XXXX",
},
},
])
can_use_cluster_policyinstance_profile = databricks.Permissions("can_use_cluster_policyinstance_profile",
cluster_policy_id=fair_use.id,
access_controls=[{
"group_name": team,
"permission_level": "CAN_USE",
}])
package main
import (
"encoding/json"
"fmt"
"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
"github.com/pulumi/pulumi-std/sdk/go/std"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
cfg := config.New(ctx, "")
// Team that performs the work
team := cfg.RequireObject("team")
// Cluster policy overrides
policyOverrides := cfg.RequireObject("policyOverrides")
defaultPolicy := map[string]interface{}{
"dbus_per_hour": map[string]interface{}{
"type": "range",
"maxValue": 10,
},
"autotermination_minutes": map[string]interface{}{
"type": "fixed",
"value": 20,
"hidden": true,
},
"custom_tags.Team": map[string]interface{}{
"type": "fixed",
"value": team,
},
}
tmpJSON0, err := json.Marshal(std.Merge(ctx, map[string]interface{}{
"input": []interface{}{
defaultPolicy,
policyOverrides,
},
}, nil).Result)
if err != nil {
return err
}
json0 := string(tmpJSON0)
fairUse, err := databricks.NewClusterPolicy(ctx, "fair_use", &databricks.ClusterPolicyArgs{
Name: pulumi.Sprintf("%v cluster policy", team),
Definition: pulumi.String(json0),
Libraries: databricks.ClusterPolicyLibraryArray{
&databricks.ClusterPolicyLibraryArgs{
Pypi: &databricks.ClusterPolicyLibraryPypiArgs{
Package: pulumi.String("databricks-sdk==0.12.0"),
},
},
&databricks.ClusterPolicyLibraryArgs{
Maven: &databricks.ClusterPolicyLibraryMavenArgs{
Coordinates: pulumi.String("com.oracle.database.jdbc:ojdbc8:XXXX"),
},
},
},
})
if err != nil {
return err
}
_, err = databricks.NewPermissions(ctx, "can_use_cluster_policyinstance_profile", &databricks.PermissionsArgs{
ClusterPolicyId: fairUse.ID(),
AccessControls: databricks.PermissionsAccessControlArray{
&databricks.PermissionsAccessControlArgs{
GroupName: pulumi.Any(team),
PermissionLevel: pulumi.String("CAN_USE"),
},
},
})
if err != nil {
return err
}
return nil
})
}
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
using Pulumi;
using Databricks = Pulumi.Databricks;
using Std = Pulumi.Std;
return await Deployment.RunAsync(() =>
{
var config = new Config();
// Team that performs the work
var team = config.RequireObject<dynamic>("team");
// Cluster policy overrides
var policyOverrides = config.RequireObject<dynamic>("policyOverrides");
var defaultPolicy = new Dictionary<string, object>
{
{ "dbus_per_hour", new Dictionary<string, object>
{
{ "type", "range" },
{ "maxValue", 10 },
} },
{ "autotermination_minutes", new Dictionary<string, object>
{
{ "type", "fixed" },
{ "value", 20 },
{ "hidden", true },
} },
{ "custom_tags.Team", new Dictionary<string, object>
{
{ "type", "fixed" },
{ "value", team },
} },
};
var fairUse = new Databricks.ClusterPolicy("fair_use", new()
{
Name = $"{team} cluster policy",
Definition = Std.Merge.Invoke(new()
{
Input = new[]
{
defaultPolicy,
policyOverrides,
},
}).Apply(invoke => JsonSerializer.Serialize(invoke.Result)),
Libraries = new[]
{
new Databricks.Inputs.ClusterPolicyLibraryArgs
{
Pypi = new Databricks.Inputs.ClusterPolicyLibraryPypiArgs
{
Package = "databricks-sdk==0.12.0",
},
},
new Databricks.Inputs.ClusterPolicyLibraryArgs
{
Maven = new Databricks.Inputs.ClusterPolicyLibraryMavenArgs
{
Coordinates = "com.oracle.database.jdbc:ojdbc8:XXXX",
},
},
},
});
var canUseClusterPolicyinstanceProfile = new Databricks.Permissions("can_use_cluster_policyinstance_profile", new()
{
ClusterPolicyId = fairUse.Id,
AccessControls = new[]
{
new Databricks.Inputs.PermissionsAccessControlArgs
{
GroupName = team,
PermissionLevel = "CAN_USE",
},
},
});
});
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.databricks.ClusterPolicy;
import com.pulumi.databricks.ClusterPolicyArgs;
import com.pulumi.databricks.inputs.ClusterPolicyLibraryArgs;
import com.pulumi.databricks.inputs.ClusterPolicyLibraryPypiArgs;
import com.pulumi.databricks.inputs.ClusterPolicyLibraryMavenArgs;
import com.pulumi.std.StdFunctions;
import com.pulumi.std.inputs.MergeArgs;
import com.pulumi.databricks.Permissions;
import com.pulumi.databricks.PermissionsArgs;
import com.pulumi.databricks.inputs.PermissionsAccessControlArgs;
import static com.pulumi.codegen.internal.Serialization.*;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
public static void main(String[] args) {
Pulumi.run(App::stack);
}
public static void stack(Context ctx) {
final var config = ctx.config();
final var team = config.get("team");
final var policyOverrides = config.get("policyOverrides");
final var defaultPolicy = Map.ofEntries(
Map.entry("dbus_per_hour", Map.ofEntries(
Map.entry("type", "range"),
Map.entry("maxValue", 10)
)),
Map.entry("autotermination_minutes", Map.ofEntries(
Map.entry("type", "fixed"),
Map.entry("value", 20),
Map.entry("hidden", true)
)),
Map.entry("custom_tags.Team", Map.ofEntries(
Map.entry("type", "fixed"),
Map.entry("value", team)
))
);
var fairUse = new ClusterPolicy("fairUse", ClusterPolicyArgs.builder()
.name(String.format("%s cluster policy", team))
.definition(serializeJson(
StdFunctions.merge(MergeArgs.builder()
.input(
defaultPolicy,
policyOverrides)
.build()).result()))
.libraries(
ClusterPolicyLibraryArgs.builder()
.pypi(ClusterPolicyLibraryPypiArgs.builder()
.package_("databricks-sdk==0.12.0")
.build())
.build(),
ClusterPolicyLibraryArgs.builder()
.maven(ClusterPolicyLibraryMavenArgs.builder()
.coordinates("com.oracle.database.jdbc:ojdbc8:XXXX")
.build())
.build())
.build());
var canUseClusterPolicyinstanceProfile = new Permissions("canUseClusterPolicyinstanceProfile", PermissionsArgs.builder()
.clusterPolicyId(fairUse.id())
.accessControls(PermissionsAccessControlArgs.builder()
.groupName(team)
.permissionLevel("CAN_USE")
.build())
.build());
}
}
configuration:
team:
type: dynamic
policyOverrides:
type: dynamic
resources:
fairUse:
type: databricks:ClusterPolicy
name: fair_use
properties:
name: ${team} cluster policy
definition:
fn::toJSON:
fn::invoke:
function: std:merge
arguments:
input:
- ${defaultPolicy}
- ${policyOverrides}
return: result
libraries:
- pypi:
package: databricks-sdk==0.12.0
- maven:
coordinates: com.oracle.database.jdbc:ojdbc8:XXXX
canUseClusterPolicyinstanceProfile:
type: databricks:Permissions
name: can_use_cluster_policyinstance_profile
properties:
clusterPolicyId: ${fairUse.id}
accessControls:
- groupName: ${team}
permissionLevel: CAN_USE
variables:
defaultPolicy:
dbus_per_hour:
type: range
maxValue: 10
autotermination_minutes:
type: fixed
value: 20
hidden: true
custom_tags.Team:
type: fixed
value: ${team}
Custom instances of that base policy module for the marketing and data engineering teams then only need to supply their own team name and policy overrides, as in the sketch below.
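The per-team instances are not part of the generated examples above, but the following is a minimal TypeScript sketch of the idea. The helper function name, group names, and override values (Delta cache for marketing, a higher DBUs-per-hour ceiling for data engineering) are illustrative assumptions, not provider-defined names.
import * as databricks from "@pulumi/databricks";

// Hypothetical helper mirroring the base example: merge team-specific overrides
// into shared defaults, create the policy, and grant the team CAN_USE on it.
function createTeamPolicy(team: string, overrides: Record<string, any>): databricks.ClusterPolicy {
    const defaultPolicy: Record<string, any> = {
        autotermination_minutes: { type: "fixed", value: 20, hidden: true },
        "custom_tags.Team": { type: "fixed", value: team },
    };
    const policy = new databricks.ClusterPolicy(`${team}-fair-use`, {
        name: `${team} cluster policy`,
        // Shallow merge: later keys win, like std.merge in the base example.
        definition: JSON.stringify({ ...defaultPolicy, ...overrides }),
    });
    new databricks.Permissions(`${team}-can-use`, {
        clusterPolicyId: policy.id,
        accessControls: [{ groupName: team, permissionLevel: "CAN_USE" }],
    });
    return policy;
}

// Marketing: better query experience by pinning the Delta cache on.
createTeamPolicy("marketing", {
    "spark_conf.spark.databricks.io.cache.enabled": { type: "fixed", value: "true" },
});

// Data engineering: allow bigger clusters by raising the DBUs-per-hour ceiling.
createTeamPolicy("data-engineering", {
    dbus_per_hour: { type: "range", maxValue: 50 },
});
Equivalently, you can keep the base program exactly as shown and create one stack per team, with each stack's configuration supplying its own team and policyOverrides values.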
Overriding the built-in cluster policies
You can override built-in cluster policies by creating a databricks.ClusterPolicy resource with following attributes:
- name - the name of the built-in cluster policy.
- policy_family_id - the ID of the cluster policy family used for the built-in cluster policy.
- policy_family_definition_overrides - settings to override in the built-in cluster policy.
You can obtain the list of defined cluster policy families using the databricks policy-families list command of the new Databricks CLI, or via the list policy families REST API.
import * as pulumi from "@pulumi/pulumi";
import * as databricks from "@pulumi/databricks";
const config = new pulumi.Config();
// Team that owns the clusters created from this policy
const team = config.require("team");
const personalVmOverride = {
autotermination_minutes: {
type: "fixed",
value: 220,
hidden: true,
},
"custom_tags.Team": {
type: "fixed",
value: team,
},
};
const personalVm = new databricks.ClusterPolicy("personal_vm", {
policyFamilyId: "personal-vm",
policyFamilyDefinitionOverrides: JSON.stringify(personalVmOverride),
name: "Personal Compute",
});
import pulumi
import json
import pulumi_databricks as databricks
config = pulumi.Config()
# Team that owns the clusters created from this policy
team = config.require("team")
personal_vm_override = {
"autotermination_minutes": {
"type": "fixed",
"value": 220,
"hidden": True,
},
"custom_tags.Team": {
"type": "fixed",
"value": team,
},
}
personal_vm = databricks.ClusterPolicy("personal_vm",
policy_family_id="personal-vm",
policy_family_definition_overrides=json.dumps(personal_vm_override),
name="Personal Compute")
package main
import (
"encoding/json"
"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
cfg := config.New(ctx, "")
// Team that owns the clusters created from this policy
team := cfg.Require("team")
personalVmOverride := map[string]interface{}{
"autotermination_minutes": map[string]interface{}{
"type": "fixed",
"value": 220,
"hidden": true,
},
"custom_tags.Team": map[string]interface{}{
"type": "fixed",
"value": team,
},
}
tmpJSON0, err := json.Marshal(personalVmOverride)
if err != nil {
return err
}
json0 := string(tmpJSON0)
_, err = databricks.NewClusterPolicy(ctx, "personal_vm", &databricks.ClusterPolicyArgs{
PolicyFamilyId: pulumi.String("personal-vm"),
PolicyFamilyDefinitionOverrides: pulumi.String(json0),
Name: pulumi.String("Personal Compute"),
})
if err != nil {
return err
}
return nil
})
}
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
using Pulumi;
using Databricks = Pulumi.Databricks;
return await Deployment.RunAsync(() =>
{
var config = new Config();
// Team that owns the clusters created from this policy
var team = config.Require("team");
var personalVmOverride = new Dictionary<string, object>
{
{ "autotermination_minutes", new Dictionary<string, object>
{
{ "type", "fixed" },
{ "value", 220 },
{ "hidden", true },
} },
{ "custom_tags.Team", new Dictionary<string, object>
{
{ "type", "fixed" },
{ "value", team },
} },
};
var personalVm = new Databricks.ClusterPolicy("personal_vm", new()
{
PolicyFamilyId = "personal-vm",
PolicyFamilyDefinitionOverrides = JsonSerializer.Serialize(personalVmOverride),
Name = "Personal Compute",
});
});
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.databricks.ClusterPolicy;
import com.pulumi.databricks.ClusterPolicyArgs;
import static com.pulumi.codegen.internal.Serialization.*;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
public static void main(String[] args) {
Pulumi.run(App::stack);
}
public static void stack(Context ctx) {
final var config = ctx.config();
// Team that owns the clusters created from this policy
final var team = config.require("team");
final var personalVmOverride = Map.ofEntries(
Map.entry("autotermination_minutes", Map.ofEntries(
Map.entry("type", "fixed"),
Map.entry("value", 220),
Map.entry("hidden", true)
)),
Map.entry("custom_tags.Team", Map.ofEntries(
Map.entry("type", "fixed"),
Map.entry("value", team)
))
);
var personalVm = new ClusterPolicy("personalVm", ClusterPolicyArgs.builder()
.policyFamilyId("personal-vm")
.policyFamilyDefinitionOverrides(serializeJson(
personalVmOverride))
.name("Personal Compute")
.build());
}
}
configuration:
team:
type: string
resources:
personalVm:
type: databricks:ClusterPolicy
name: personal_vm
properties:
policyFamilyId: personal-vm
policyFamilyDefinitionOverrides:
fn::toJSON: ${personalVmOverride}
name: Personal Compute
variables:
personalVmOverride:
autotermination_minutes:
type: fixed
value: 220
hidden: true
custom_tags.Team:
type: fixed
value: ${team}
Related Resources
The following resources are often used in the same context:
- Dynamic Passthrough Clusters for a Group guide.
- End to end workspace management guide.
- databricks.getClusters data to retrieve a list of databricks.Cluster ids.
- databricks.Cluster to create Databricks Clusters.
- databricks.getCurrentUser data to retrieve information about the databricks.User or databricks_service_principal that is calling the Databricks REST API.
- databricks.GlobalInitScript to manage global init scripts, which are run on all databricks.Cluster and databricks_job.
- databricks.InstancePool to manage instance pools to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
- databricks.InstanceProfile to manage AWS EC2 instance profiles that users can launch databricks.Cluster with and access data, like databricks_mount.
- databricks.IpAccessList to allow access from predefined IP ranges.
- databricks.Library to install a library on databricks_cluster.
- databricks.getNodeType data to get the smallest node type for databricks.Cluster that fits search criteria, like amount of RAM or number of cores.
- databricks.Permissions to manage access control in the Databricks workspace.
- databricks.getSparkVersion data to get the Databricks Runtime (DBR) version that could be used for the spark_version parameter in databricks.Cluster and other resources.
- databricks.UserInstanceProfile to attach a databricks.InstanceProfile (AWS) to databricks_user.
- databricks.WorkspaceConf to manage workspace configuration for expert usage.
Create ClusterPolicy Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new ClusterPolicy(name: string, args?: ClusterPolicyArgs, opts?: CustomResourceOptions);
@overload
def ClusterPolicy(resource_name: str,
args: Optional[ClusterPolicyArgs] = None,
opts: Optional[ResourceOptions] = None)
@overload
def ClusterPolicy(resource_name: str,
opts: Optional[ResourceOptions] = None,
definition: Optional[str] = None,
description: Optional[str] = None,
libraries: Optional[Sequence[ClusterPolicyLibraryArgs]] = None,
max_clusters_per_user: Optional[int] = None,
name: Optional[str] = None,
policy_family_definition_overrides: Optional[str] = None,
policy_family_id: Optional[str] = None)
func NewClusterPolicy(ctx *Context, name string, args *ClusterPolicyArgs, opts ...ResourceOption) (*ClusterPolicy, error)
public ClusterPolicy(string name, ClusterPolicyArgs? args = null, CustomResourceOptions? opts = null)
public ClusterPolicy(String name, ClusterPolicyArgs args)
public ClusterPolicy(String name, ClusterPolicyArgs args, CustomResourceOptions options)
type: databricks:ClusterPolicy
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ClusterPolicyArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ClusterPolicyArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ClusterPolicyArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ClusterPolicyArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ClusterPolicyArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var clusterPolicyResource = new Databricks.ClusterPolicy("clusterPolicyResource", new()
{
Definition = "string",
Description = "string",
Libraries = new[]
{
new Databricks.Inputs.ClusterPolicyLibraryArgs
{
Cran = new Databricks.Inputs.ClusterPolicyLibraryCranArgs
{
Package = "string",
Repo = "string",
},
Jar = "string",
Maven = new Databricks.Inputs.ClusterPolicyLibraryMavenArgs
{
Coordinates = "string",
Exclusions = new[]
{
"string",
},
Repo = "string",
},
ProviderConfig = new Databricks.Inputs.ClusterPolicyLibraryProviderConfigArgs
{
WorkspaceId = "string",
},
Pypi = new Databricks.Inputs.ClusterPolicyLibraryPypiArgs
{
Package = "string",
Repo = "string",
},
Requirements = "string",
Whl = "string",
},
},
MaxClustersPerUser = 0,
Name = "string",
PolicyFamilyDefinitionOverrides = "string",
PolicyFamilyId = "string",
});
example, err := databricks.NewClusterPolicy(ctx, "clusterPolicyResource", &databricks.ClusterPolicyArgs{
Definition: pulumi.String("string"),
Description: pulumi.String("string"),
Libraries: databricks.ClusterPolicyLibraryArray{
&databricks.ClusterPolicyLibraryArgs{
Cran: &databricks.ClusterPolicyLibraryCranArgs{
Package: pulumi.String("string"),
Repo: pulumi.String("string"),
},
Jar: pulumi.String("string"),
Maven: &databricks.ClusterPolicyLibraryMavenArgs{
Coordinates: pulumi.String("string"),
Exclusions: pulumi.StringArray{
pulumi.String("string"),
},
Repo: pulumi.String("string"),
},
ProviderConfig: &databricks.ClusterPolicyLibraryProviderConfigArgs{
WorkspaceId: pulumi.String("string"),
},
Pypi: &databricks.ClusterPolicyLibraryPypiArgs{
Package: pulumi.String("string"),
Repo: pulumi.String("string"),
},
Requirements: pulumi.String("string"),
Whl: pulumi.String("string"),
},
},
MaxClustersPerUser: pulumi.Int(0),
Name: pulumi.String("string"),
PolicyFamilyDefinitionOverrides: pulumi.String("string"),
PolicyFamilyId: pulumi.String("string"),
})
var clusterPolicyResource = new ClusterPolicy("clusterPolicyResource", ClusterPolicyArgs.builder()
.definition("string")
.description("string")
.libraries(ClusterPolicyLibraryArgs.builder()
.cran(ClusterPolicyLibraryCranArgs.builder()
.package_("string")
.repo("string")
.build())
.jar("string")
.maven(ClusterPolicyLibraryMavenArgs.builder()
.coordinates("string")
.exclusions("string")
.repo("string")
.build())
.providerConfig(ClusterPolicyLibraryProviderConfigArgs.builder()
.workspaceId("string")
.build())
.pypi(ClusterPolicyLibraryPypiArgs.builder()
.package_("string")
.repo("string")
.build())
.requirements("string")
.whl("string")
.build())
.maxClustersPerUser(0)
.name("string")
.policyFamilyDefinitionOverrides("string")
.policyFamilyId("string")
.build());
cluster_policy_resource = databricks.ClusterPolicy("clusterPolicyResource",
definition="string",
description="string",
libraries=[{
"cran": {
"package": "string",
"repo": "string",
},
"jar": "string",
"maven": {
"coordinates": "string",
"exclusions": ["string"],
"repo": "string",
},
"provider_config": {
"workspace_id": "string",
},
"pypi": {
"package": "string",
"repo": "string",
},
"requirements": "string",
"whl": "string",
}],
max_clusters_per_user=0,
name="string",
policy_family_definition_overrides="string",
policy_family_id="string")
const clusterPolicyResource = new databricks.ClusterPolicy("clusterPolicyResource", {
definition: "string",
description: "string",
libraries: [{
cran: {
"package": "string",
repo: "string",
},
jar: "string",
maven: {
coordinates: "string",
exclusions: ["string"],
repo: "string",
},
providerConfig: {
workspaceId: "string",
},
pypi: {
"package": "string",
repo: "string",
},
requirements: "string",
whl: "string",
}],
maxClustersPerUser: 0,
name: "string",
policyFamilyDefinitionOverrides: "string",
policyFamilyId: "string",
});
type: databricks:ClusterPolicy
properties:
definition: string
description: string
libraries:
- cran:
package: string
repo: string
jar: string
maven:
coordinates: string
exclusions:
- string
repo: string
providerConfig:
workspaceId: string
pypi:
package: string
repo: string
requirements: string
whl: string
maxClustersPerUser: 0
name: string
policyFamilyDefinitionOverrides: string
policyFamilyId: string
ClusterPolicy Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The ClusterPolicy resource accepts the following input properties:
- Definition string
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- Description string
- Additional human-readable description of the cluster policy.
- Libraries List<ClusterPolicyLibrary>
- MaxClustersPerUser int
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- Name string
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- PolicyFamilyDefinitionOverrides string
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- PolicyFamilyId string
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- Definition string
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- Description string
- Additional human-readable description of the cluster policy.
- Libraries []ClusterPolicyLibraryArgs
- MaxClustersPerUser int
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- Name string
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- PolicyFamilyDefinitionOverrides string
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- PolicyFamilyId string
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- definition String
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description String
- Additional human-readable description of the cluster policy.
- libraries List<ClusterPolicyLibrary>
- maxClustersPerUser Integer
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name String
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policyFamilyDefinitionOverrides String
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policyFamilyId String
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- definition string
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description string
- Additional human-readable description of the cluster policy.
- libraries ClusterPolicyLibrary[]
- maxClustersPerUser number
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name string
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policyFamilyDefinitionOverrides string
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policyFamilyId string
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- definition str
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description str
- Additional human-readable description of the cluster policy.
- libraries Sequence[ClusterPolicyLibraryArgs]
- max_clusters_per_user int
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name str
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policy_family_definition_overrides str
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policy_family_id str
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- definition String
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description String
- Additional human-readable description of the cluster policy.
- libraries List<Property Map>
- maxClustersPerUser Number
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name String
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policyFamilyDefinitionOverrides String
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policyFamilyId String
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
Outputs
All input properties are implicitly available as output properties. Additionally, the ClusterPolicy resource produces the following output properties:
- Id string
- The provider-assigned unique ID for this managed resource.
- PolicyId string
- Canonical unique identifier for the cluster policy.
Look up Existing ClusterPolicy Resource
Get an existing ClusterPolicy resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: ClusterPolicyState, opts?: CustomResourceOptions): ClusterPolicy
@staticmethod
def get(resource_name: str,
        id: str,
        opts: Optional[ResourceOptions] = None,
        definition: Optional[str] = None,
        description: Optional[str] = None,
        libraries: Optional[Sequence[ClusterPolicyLibraryArgs]] = None,
        max_clusters_per_user: Optional[int] = None,
        name: Optional[str] = None,
        policy_family_definition_overrides: Optional[str] = None,
        policy_family_id: Optional[str] = None,
        policy_id: Optional[str] = None) -> ClusterPolicy
func GetClusterPolicy(ctx *Context, name string, id IDInput, state *ClusterPolicyState, opts ...ResourceOption) (*ClusterPolicy, error)
public static ClusterPolicy Get(string name, Input<string> id, ClusterPolicyState? state, CustomResourceOptions? opts = null)
public static ClusterPolicy get(String name, Output<String> id, ClusterPolicyState state, CustomResourceOptions options)
resources:
  _:
    type: databricks:ClusterPolicy
    get:
      id: ${id}
Parameters
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- Definition string
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- Description string
- Additional human-readable description of the cluster policy.
- Libraries List<ClusterPolicyLibrary>
- MaxClustersPerUser int
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- Name string
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- PolicyFamilyDefinitionOverrides string
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- PolicyFamilyId string
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- PolicyId string
- Canonical unique identifier for the cluster policy.
- Definition string
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- Description string
- Additional human-readable description of the cluster policy.
- Libraries []ClusterPolicyLibraryArgs
- MaxClustersPerUser int
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- Name string
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- PolicyFamilyDefinitionOverrides string
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- PolicyFamilyId string
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- PolicyId string
- Canonical unique identifier for the cluster policy.
- definition String
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description String
- Additional human-readable description of the cluster policy.
- libraries List<ClusterPolicyLibrary>
- maxClustersPerUser Integer
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name String
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policyFamilyDefinitionOverrides String
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policyFamilyId String
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- policyId String
- Canonical unique identifier for the cluster policy.
- definition string
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description string
- Additional human-readable description of the cluster policy.
- libraries ClusterPolicyLibrary[]
- maxClustersPerUser number
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name string
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policyFamilyDefinitionOverrides string
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policyFamilyId string
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- policyId string
- Canonical unique identifier for the cluster policy.
- definition str
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description str
- Additional human-readable description of the cluster policy.
- libraries Sequence[ClusterPolicyLibraryArgs]
- max_clusters_per_user int
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name str
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policy_family_definition_overrides str
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policy_family_id str
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- policy_id str
- Canonical unique identifier for the cluster policy.
- definition String
- Policy definition: JSON document expressed in Databricks Policy Definition Language. Cannot be used with policy_family_id.
- description String
- Additional human-readable description of the cluster policy.
- libraries List<Property Map>
- maxClustersPerUser Number
- Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.
- name String
- Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
- policyFamilyDefinitionOverrides String
- Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.
- policyFamilyId String
- ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with definition. Use policy_family_definition_overrides instead to customize the policy definition.
- policyId String
- Canonical unique identifier for the cluster policy.
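As a concrete illustration of the lookup form, here is a minimal TypeScript sketch that adopts an existing policy by its ID; the ID value and the resource names used are placeholder assumptions.
import * as databricks from "@pulumi/databricks";

// Hypothetical ID of a cluster policy that already exists in the workspace.
const existingPolicyId = "ABC1234567890DEF";

// Look up the existing policy; Pulumi reads its state but does not manage it.
const shared = databricks.ClusterPolicy.get("shared-policy", existingPolicyId);

// Outputs of the looked-up resource can be used like any other output.
export const sharedPolicyDefinition = shared.definition;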
Supporting Types
ClusterPolicyLibrary, ClusterPolicyLibraryArgs
- cran Property Map
- egg String
- jar String
- maven Property Map
- providerConfig Property Map
- requirements String
- whl String
ClusterPolicyLibraryCran, ClusterPolicyLibraryCranArgs
- Package string
- Repo string
ClusterPolicyLibraryMaven, ClusterPolicyLibraryMavenArgs
- Coordinates string
- Exclusions List<string>
- Repo string
- Coordinates string
- Exclusions []string
- Repo string
- coordinates String
- exclusions List<String>
- repo String
- coordinates string
- exclusions string[]
- repo string
- coordinates str
- exclusions Sequence[str]
- repo str
- coordinates String
- exclusions List<String>
- repo String
ClusterPolicyLibraryProviderConfig, ClusterPolicyLibraryProviderConfigArgs
- WorkspaceId string
- WorkspaceId string
- workspaceId String
- workspaceId string
- workspace_id str
- workspaceId String
ClusterPolicyLibraryPypi, ClusterPolicyLibraryPypiArgs
- Package string
- Repo string
Import
The cluster policy resource can be imported using the policy id with the pulumi import command:
bash
$ pulumi import databricks:index/clusterPolicy:ClusterPolicy this <cluster-policy-id>
To learn more about importing existing cloud resources, see Importing resources.
Package Details
- Repository
- databricks pulumi/pulumi-databricks
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the databricks Terraform Provider.
