Commit 35fb60a

Backport of Documentation for linking Stacks into v1.10 (#36424)
1 parent b3c5baf commit 35fb60a

File tree

5 files changed (+266, -33 lines changed)

website/data/language-nav-data.json (+2, -1)

@@ -33,7 +33,8 @@
 "routes": [
   { "title": "Define configuration", "path": "stacks/deploy/config" },
   { "title": "Set conditions for deployment plans", "path": "stacks/deploy/conditions" },
-  { "title": "Authenticate a Stack", "path": "stacks/deploy/authenticate" }
+  { "title": "Authenticate a Stack", "path": "stacks/deploy/authenticate" },
+  { "title": "Pass data from one Stack to another", "path": "stacks/deploy/pass-data" }
 ]
},
{
website/docs/language/stacks/deploy/pass-data.mdx (new file)

@@ -0,0 +1,113 @@
---
page_title: Pass data from one Stack to another
description: Learn how to pass data from one Stack to another using `publish_output` blocks to output data from one Stack, and `upstream_input` blocks to input that data into another Stack.
---

# Pass data from one Stack to another

If you have multiple Stacks that do not share a provisioning lifecycle, you can export data from one Stack for another Stack to consume. If the output value of a Stack changes after a run, HCP Terraform automatically triggers runs for any Stacks that depend on those outputs.
## Background

You may need to pass data between different Stacks in your project. For example, one Stack in your organization may manage shared services, such as networking infrastructure, and another Stack may manage application components. Using separate Stacks lets you manage the infrastructure independently, but you may still need to share data from your networking Stack to your application Stack.

To output information from a Stack, declare a `publish_output` block in the deployment configuration of the Stack exporting data. We refer to the Stack that declares a `publish_output` block as the upstream Stack.

To consume the data exported by the upstream Stack, declare an `upstream_input` block in the deployment configuration of a different Stack in the same project. We refer to the Stack that declares an `upstream_input` block as the downstream Stack.
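The two halves of this relationship can be sketched together as follows. This is a minimal sketch: the deployment name `network`, the block label `network_stack`, and the organization and project placeholders are all illustrative, and the full syntax is covered in the sections below.

```hcl
# Upstream Stack deployment configuration (for example, a networking Stack):
# export a value for other Stacks in the same project to consume.
publish_output "vpc_id" {
  value = deployment.network.vpc_id
}
```

```hcl
# Downstream Stack deployment configuration (for example, an application Stack):
# declare the upstream Stack, then read its published output.
upstream_input "network_stack" {
  type   = "stack"
  source = "app.terraform.io/{organization_name}/{project_name}/networking-stack"
}

deployment "application" {
  inputs = {
    vpc_id = upstream_input.network_stack.vpc_id
  }
}
```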
## Requirements

The `publish_output` and `upstream_input` blocks require Terraform version `terraform_1.10.0-alpha20241009` or higher. Download the [latest version of Terraform](https://releases.hashicorp.com/terraform/) to use the most up-to-date functionality.

Downstream Stacks must also reside in the same project as their upstream Stacks.
## Declare outputs

You must declare a `publish_output` block in your deployment configuration for each value you want to output from your current Stack.

For example, you can add a `publish_output` block for the `vpc_id` in your upstream Stack’s deployment configuration. You can directly reference a deployment's values with the `deployment.deployment_name` syntax.

<CodeBlockConfig filename="network.tfdeploy.hcl">

```hcl
# Networking Stack deployment configuration

publish_output "vpc_id" {
  description = "The networking Stack's VPC's ID."
  value       = deployment.network.vpc_id
}
```

</CodeBlockConfig>

After applying your configuration, any Stack in the same project can reference your network deployment's `vpc_id` output by declaring an `upstream_input` block.

Once you apply a Stack configuration version that includes your `publish_output` block, HCP Terraform publishes a snapshot of those values, which allows HCP Terraform to resolve them. This means you must apply your Stack’s deployment configuration before any downstream Stacks can reference your Stack's outputs.

Learn more about the [`publish_output` block](/terraform/language/stacks/reference/tfdeploy#publish_output-block-configuration).
## Consume the output from an upstream Stack

Declare an `upstream_input` block in your Stack’s deployment configuration to read values from another Stack's `publish_output` block. Adding an `upstream_input` block creates a dependency on the upstream Stack.

For example, if you want to use the output `vpc_id` from an upstream Stack in the same project, declare an `upstream_input` block in your deployment configuration.

<CodeBlockConfig filename="application.tfdeploy.hcl">

```hcl
# Application Stack deployment configuration

upstream_input "network_stack" {
  type   = "stack"
  source = "app.terraform.io/hashicorp/Default Project/networking-stack"
}

deployment "application" {
  inputs = {
    vpc_id = upstream_input.network_stack.vpc_id
  }
}
```

</CodeBlockConfig>

After pushing your Stack's configuration into HCP Terraform, HCP Terraform searches for the most recently published snapshot of the upstream Stack your configuration references. If no snapshot exists, the downstream Stack's run fails.

If HCP Terraform finds a published snapshot for your referenced upstream Stack, then all of that Stack's outputs are available to this downstream Stack. Add `upstream_input` blocks for every upstream Stack you want to reference. Learn more about the [`upstream_input` block](/terraform/language/stacks/reference/tfdeploy#upstream_input-block-configuration).
## Trigger runs when output values change

If an upstream Stack's published output values change, HCP Terraform automatically triggers runs for any downstream Stacks that rely on those outputs.

In the following example, the `application` deployment depends on the upstream networking Stack.

<CodeBlockConfig filename="application.tfdeploy.hcl">

```hcl
# Application Stack deployment configuration

upstream_input "network_stack" {
  type   = "stack"
  source = "app.terraform.io/hashicorp/Default Project/networking-stack"
}

deployment "application" {
  inputs = {
    vpc_id = upstream_input.network_stack.vpc_id
  }
}
```

</CodeBlockConfig>

The application Stack depends on the networking Stack’s output, so if the `vpc_id` changes then HCP Terraform triggers a new run for the application Stack. This approach lets you decouple Stacks that have separate lifecycles and ensures that updates in an upstream Stack propagate to downstream Stacks.
## Remove upstream Stack dependencies

To stop depending on an upstream Stack’s outputs, do the following in your downstream Stack's deployment configuration:

1. Remove the upstream Stack's `upstream_input` block.
1. Remove any references to the upstream Stack's outputs.
1. Push your configuration changes to HCP Terraform and apply the new configuration.
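As a sketch of these steps, assuming the hypothetical `network_stack` dependency from the earlier examples, the downstream configuration after removal supplies the value directly instead of reading it from the upstream Stack:

```hcl
# Application Stack deployment configuration with the dependency removed:
# the upstream_input "network_stack" block is deleted, and the former
# upstream_input.network_stack.vpc_id reference is replaced with a
# directly supplied (hypothetical) value.
deployment "application" {
  inputs = {
    vpc_id = "vpc-0123456789abcdef0"
  }
}
```

Once HCP Terraform applies this configuration, changes to the upstream Stack's outputs no longer trigger runs for this Stack.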

website/docs/language/stacks/design.mdx (+3, -1)

@@ -27,11 +27,13 @@ Before writing your Stack configuration, we recommend assessing your current inf

 Each Stack should represent a single system or application with a shared lifecycle. Start by analyzing your current infrastructure and identifying the components HCP Terraform should manage together. Components are typically groups of related resources, such as an application’s backend, frontend, or database layer, deployed and scaled together.

+We recommend structuring your Stacks along technical boundaries to keep them modular and manageable. For example, you can create a dedicated Stack for shared services, such as networking infrastructure for VPCs, subnets, or routing tables, and separate Stacks for application components that consume those shared services. This separation allows you to manage shared services independently while passing information between Stacks. For more details, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).
+
 ### Sketch out your configuration

 We recommend sticking to technical boundaries when structuring a Stack configuration. A single Stack should represent a single system with a shared lifecycle.

-We recommend keeping a Stack as self-contained as possible. However, there are valid cases where outputs from one Stack, like a shared VPC networking service Stack, may need to pass inputs into another Stack, like an application Stack. If there’s a well-defined interface between two parts of your infrastructure, it makes sense to model them as separate Stacks.
+While keeping a Stack as self-contained as possible is ideal, there are valid cases where a Stack must consume outputs from another Stack. For example, a shared networking Stack can publish outputs, such as `vpc_id` or subnet IDs, that downstream application Stacks can consume as inputs. This approach ensures modularity and allows you to manage dependencies between Stacks using well-defined interfaces. For more details, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).

 Plan to add a component block to your configuration for every top-level module you want to include in your Stack. After establishing your top-level modules, you can use child modules without adding additional `component` blocks.

website/docs/language/stacks/reference/tfdeploy.mdx (+144, -31)

@@ -41,6 +41,8 @@ Each Stack must have at least one `deployment` block, and the label of the `depl
 | :---- | :---- | :---- | :---- |
 | `inputs` | A mapping of Stack variable names for this deployment. The keys in this map must correspond to the names of variables defined in the Stack. The values must be valid HCL literals meeting the type constraint of those variables. | map | Required |

+### Reference
+
 For example, the following `deployment` block accepts inputs for variables named `aws_region` and `instance_count` and creates a new deployment in HCP Terraform named “production”.
@@ -83,18 +85,6 @@

 HCP Terraform evaluates the `check` blocks within your `orchestrate` block to determine if it should approve a plan. If all of the checks pass, then HCP Terraform approves the plan for you. If one or more `conditions` do not pass, then HCP Terraform shows the `reason` why, and you must manually approve that plan.

-For example, the following `orchestrate` block automatically approves deployments if a component has not changed.
-
-```hcl
-orchestrate "auto_approve" "no_pet_changes" {
-  check {
-    # Check that the pet component has no changes
-    condition = context.plan.component_changes["component.pet"].total == 0
-    reason = "Changes proposed to pet component."
-  }
-}
-```
-
 By default, each Stack has an `auto_approve` rule named `empty_plan`, which automatically approves a plan if it contains no changes.

### Specification
@@ -159,6 +149,21 @@
 | :---- | :---- | :---- |
 | `deployment_name` | The name of the current deployment HCP Terraform is running this plan on. You can use this field to check which deployment is running this plan. For example, you can check if this plan is on your production deployment: `context.plan.deployment == deployment.production`. | string |

+### Reference
+
+For example, the following `orchestrate` block automatically approves deployments if a component has not changed.
+
+```hcl
+orchestrate "auto_approve" "no_pet_changes" {
+  check {
+    condition = context.plan.component_changes["component.pet"].total == 0
+    reason = "Changes proposed to pet component."
+  }
+}
+```
+
+If nothing changes in the `component.pet` component, HCP Terraform automatically approves the plan.
+
## `identity_token` block configuration

The `identity_token` block defines a JSON Web Token (JWT) that Terraform generates for a given deployment if that `deployment` block references an `identity_token` in its `inputs`.
@@ -183,6 +188,8 @@ This section provides details about the fields you can configure in the `identit
 | :---- | :---- | :---- | :---- |
 | `audience` | The audience of your token is the resource(s) that uses this token after Terraform generates it. You specify an audience to ensure that the cloud service authorizing the workload is confident that the token you present is intended for that service. | set of strings | Required |

+### Reference
+
 Once defined, you can reference an identity token's `jwt` attribute in a deployment's inputs. For example, below we generate a token for a particular role ARN in AWS.

@@ -223,8 +230,6 @@ provider "aws" "this" {
   region = var.region
   assume_role_with_web_identity {
     role_arn = var.aws_role
-    # Your configuration anticipates the aws_token we created
-    # with the indentity_token block in your deployments file.
     web_identity_token = var.aws_token
   }
 }
@@ -249,23 +254,6 @@

 A store’s type defines where Terraform should look for that store’s credentials and how to decode the credentials it finds. You cannot share arguments across store types.

-For example, if you have an HCP Terraform [variable set](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets) that contains a value you want to use in your deployment, you can create a `store` block to access that variable set.
-
-```hcl
-store "varset" "tokens" {
-  id       = "<variable_set_id>"
-  category = "terraform"
-}
-
-deployment "main" {
-  inputs = {
-    token = store.varset.tokens.example_token
-  }
-}
-```
-
-You cannot access an entire store and must specifically access individual keys within that store. Meaning, we can access your example’s `example_token` variable by accessing the store’s `varset` type, `store.varset`, then accessing the specific store, `store.varset.tokens.example_token`.
-
 ### `varset` specification and configuration

 Use the `varset` store to enable your Stacks to access [variable sets](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets) in HCP Terraform. Your Stack must have access to the variable set you are targeting, meaning it must be globally available or assigned to the project containing your Stack.
@@ -309,6 +297,131 @@ deployment "main" {

You can access specific environment variables by key from the `store.varset.available_regions` store, and you can access specific Terraform variables by key using the `store.varset.tokens` store.

### Reference

For example, if you have an HCP Terraform [variable set](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets) that contains a value you want to use in your deployment, you can create a `store` block to access that variable set.

```hcl
store "varset" "tokens" {
  id       = "<variable_set_id>"
  category = "terraform"
}

deployment "main" {
  inputs = {
    token = store.varset.tokens.example_token
  }
}
```

You cannot access an entire store; you must access individual keys within it. For example, you access this example’s `example_token` variable through the store’s type and name followed by the key: `store.varset.tokens.example_token`.

## `locals` block configuration

A local value assigns a name to an expression, so you can use the name multiple times within your Stack configuration instead of repeating the expression. The `locals` block works exactly as it does in traditional Terraform configurations. Learn more about [the `locals` block](/terraform/language/values/locals).
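As a brief sketch (the deployment names and tag values here are hypothetical), a local value can factor out an expression shared by several deployments:

```hcl
locals {
  # Tags shared by every deployment in this Stack.
  default_tags = {
    team = "platform"
  }
}

deployment "staging" {
  inputs = {
    tags = local.default_tags
  }
}

deployment "production" {
  inputs = {
    tags = local.default_tags
  }
}
```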
## `publish_output` block configuration

The `publish_output` block requires Terraform version `terraform_1.10.0-alpha20241009` or higher. Download the [latest version of Terraform](https://releases.hashicorp.com/terraform/) to use the most up-to-date functionality.

The `publish_output` block specifies a value to export from your current Stack, which other Stacks in the same project can consume. Declare one `publish_output` block for each value you want to export from your Stack.

### Complete configuration

When every field is defined, a `publish_output` block has the following form:

<CodeBlockConfig hideClipboard>

```hcl
publish_output "output_name" {
  description = "Description of the purpose of this output"
  value       = deployment.deployment_name.some_value
}
```

</CodeBlockConfig>

### Specification

This section provides details about the fields you can configure in the `publish_output` block.

| Field | Description | Type | Required |
| :---- | :---- | :---- | :---- |
| `value` | The value to output. | any | Required |
| `description` | A human-friendly description for the output. | string | Optional |

### Reference

For example, you could output the VPC ID from your networking deployment, making it available for other Stacks to input.

<CodeBlockConfig filename="network.tfdeploy.hcl">

```hcl
# Network Stack's deployment configuration

publish_output "vpc_id" {
  description = "The networking Stack's VPC's ID."
  value       = deployment.network.vpc_id
}
```

</CodeBlockConfig>

To learn more about passing information between Stacks, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).
## `upstream_input` block configuration

The `upstream_input` block requires Terraform version `terraform_1.10.0-alpha20241009` or higher. Download the [latest version of Terraform](https://releases.hashicorp.com/terraform/) to use the most up-to-date functionality.

The `upstream_input` block specifies another Stack in the same project to consume outputs from. Declare an `upstream_input` block for each Stack you want to reference. If an output from an upstream Stack changes, HCP Terraform automatically triggers runs for any Stacks that depend on those outputs.

To learn more about passing information between Stacks, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).

### Complete configuration

When every field is defined, an `upstream_input` block has the following form:

<CodeBlockConfig hideClipboard>

```hcl
upstream_input "upstream_stack_name" {
  type   = "stack"
  source = "app.terraform.io/{organization_name}/{project_name}/{upstream_stack_name}"
}
```

</CodeBlockConfig>

### Specification

This section provides details about the fields you can configure in the `upstream_input` block.

| Field | Description | Type | Required |
| :---- | :---- | :---- | :---- |
| `type` | The only supported type is “stack”. | string | Required |
| `source` | The upstream Stack’s URL, in the format `"app.terraform.io/{organization_name}/{project_name}/{upstream_stack_name}"`. | string | Required |

### Reference

For example, you could input a VPC ID from an upstream Stack that manages your shared networking service. You can use the `upstream_input` block to pass information from your network Stack into your application Stack.

<CodeBlockConfig filename="application.tfdeploy.hcl">

```hcl
# Application Stack's deployment configuration

upstream_input "network_stack" {
  type   = "stack"
  source = "app.terraform.io/hashicorp/Default Project/networking-stack"
}

deployment "application" {
  inputs = {
    vpc_id = upstream_input.network_stack.vpc_id
  }
}
```

</CodeBlockConfig>

Your application Stack can now securely consume and use outputs from your network Stack. To learn more about passing information between Stacks, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).
