Common issues and their solutions when working with Pulumi.
- Authentication Issues
- State Backend Issues
- Deployment Failures
- Configuration Problems
- Resource Issues
- Performance Problems
## Authentication Issues

Error:

```
error: get credentials: failed to refresh cached credentials, no EC2 IMDS role found
error: operation error S3: GetObject, get identity: get credentials
```
Solutions:
- Check environment variables:

  ```bash
  echo $AWS_ACCESS_KEY_ID
  echo $AWS_SECRET_ACCESS_KEY
  echo $AWS_REGION
  ```

- If using direnv + 1Password:

  ```bash
  # Check .envrc exists and is allowed
  direnv allow

  # Verify credentials loaded
  eval "$(direnv export bash)"
  echo $AWS_ACCESS_KEY_ID
  ```

- Manually set credentials:

  ```bash
  export AWS_ACCESS_KEY_ID="your-key"
  export AWS_SECRET_ACCESS_KEY="your-secret"
  export AWS_REGION="us-east-1"
  ```

Error:
```
error: failed to decrypt encrypted configuration value 'aws:secretKey': incorrect passphrase
```
Solutions:
- Set the passphrase:

  ```bash
  export PULUMI_CONFIG_PASSPHRASE="your-passphrase"
  ```

- If using 1Password:

  ```bash
  # Add to .envrc
  from_op PULUMI_CONFIG_PASSPHRASE="op://Employee/Pulumi Passphrase/password"
  direnv allow
  ```

- Reset the passphrase (if lost):

  ```bash
  # Export stack without secrets
  uv run pulumi stack export > stack-backup.json

  # Create new stack with new passphrase
  uv run pulumi stack init new-stack

  # Import (you'll lose encrypted secrets)
  uv run pulumi stack import --file stack-backup.json
  ```

Error:
```
direnv: error /path/to/.envrc is blocked. Run `direnv allow` to approve its content
```
Solutions:
- Allow direnv:

  ```bash
  direnv allow
  ```

- Check the 1Password CLI:

  ```bash
  # Test 1Password access
  op whoami

  # If not signed in
  eval $(op signin)
  ```

- Check the service account token:

  ```bash
  echo $OP_SERVICE_ACCOUNT_TOKEN
  ```

## State Backend Issues

Error:
```
error: read ".pulumi/meta.yaml": operation error S3: GetObject
```
Solutions:
- Check AWS credentials (see above)

- Verify the S3 bucket exists:

  ```bash
  aws s3 ls s3://your-pulumi-state-bucket/
  ```

- Check bucket permissions:

  ```bash
  aws s3api get-bucket-policy --bucket your-pulumi-state-bucket
  ```

- Switch to the local backend temporarily:

  ```bash
  export PULUMI_BACKEND_URL="file://~/.pulumi"
  uv run pulumi login file://~/.pulumi
  ```

Error:
```
error: stack 'production' already exists
```
Solution:
Switch to the existing stack instead of creating a new one:

```bash
uv run pulumi stack select production
```

Error:
```
error: no stack named 'staging' found
```
Solutions:
- List available stacks:

  ```bash
  uv run pulumi stack ls
  ```

- Create the stack if needed:

  ```bash
  uv run pulumi stack init staging
  ```

- Check you're in the correct project:

  ```bash
  cat Pulumi.yaml  # Verify project name
  ```

## Deployment Failures

Error:
```
error: resource 'my-bucket' already exists
```
Solutions:
- Import the existing resource:

  ```bash
  uv run pulumi import aws:s3/bucket:Bucket my-bucket existing-bucket-name
  ```

- Use a different name:

  ```python
  import pulumi
  import pulumi_aws as aws

  # Add a unique per-stack suffix
  bucket = aws.s3.Bucket(f"my-bucket-{pulumi.get_stack()}")
  ```

- Delete the existing resource (if safe):

  ```bash
  aws s3 rb s3://existing-bucket-name --force
  ```

Error:
```
error: AccessDenied: User is not authorized to perform: s3:PutObject
```
Solutions:
- Check IAM permissions:

  ```bash
  aws iam get-user
  aws iam list-attached-user-policies --user-name your-user
  ```

- Add the required permissions to the IAM policy

- Use different credentials with proper permissions
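As a sketch of the "add required permissions" option, a minimal inline policy for a Pulumi state bucket might look like the following. The bucket name, action list, and statement scope are assumptions to adapt to your setup, not a canonical Pulumi requirement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PulumiStateBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-pulumi-state-bucket",
        "arn:aws:s3:::your-pulumi-state-bucket/*"
      ]
    }
  ]
}
```

Saved as `policy.json`, it could be attached with `aws iam put-user-policy --user-name your-user --policy-name pulumi-state --policy-document file://policy.json`.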
Error:
```
error: update failed: resource changes are not allowed
```
Solutions:
- Use replacement instead:

  ```bash
  uv run pulumi up --replace 'urn:pulumi:stack::project::Type::resource'
  ```

- Check for protection:

  ```python
  from pulumi import ResourceOptions

  # Remove the protect flag if set
  resource = Resource("name",
      opts=ResourceOptions(protect=False)
  )
  ```

- Manual intervention required: update the resource manually, then refresh:

  ```bash
  uv run pulumi refresh
  ```

## Configuration Problems

Error:
```
error: configuration key 'aws:region' not found
```
Solution:
Set the required config value:

```bash
uv run pulumi config set aws:region us-east-1
```

Error:
```
error: failed to decrypt encrypted configuration value
```
Solutions:
- Set the correct passphrase:

  ```bash
  export PULUMI_CONFIG_PASSPHRASE="correct-passphrase"
  ```

- Re-encrypt with a new passphrase:

  ```bash
  # Export config with secrets decrypted
  uv run pulumi config --show-secrets > config.txt

  # Change the passphrase
  export PULUMI_CONFIG_PASSPHRASE="new-passphrase"

  # Re-import each config value as a secret
  uv run pulumi config set key value --secret
  ```

Problem: Making changes to the wrong environment
Solution:
Always verify the current stack before deploying:

```bash
# Check current stack
uv run pulumi stack --show-name

# Should output: production
# If wrong, switch
uv run pulumi stack select production
```

Best Practice: Add the stack name to your shell prompt:

```bash
# Add to .bashrc or .zshrc
# (single quotes so the command runs at every prompt, not once at export)
export PS1='[$(pulumi stack --show-name 2>/dev/null || echo "no-stack")] '"$PS1"
```

## Resource Issues

Error:
```
error: cycle: a -> b -> c -> a
```
Solution:
Break the cycle using explicit dependencies:

```python
from pulumi import ResourceOptions

# Instead of circular data dependencies
a = Resource("a", dependency=b.output)
b = Resource("b", dependency=c.output)
c = Resource("c", dependency=a.output)

# Use explicit depends_on and drop one edge
a = Resource("a", opts=ResourceOptions(depends_on=[b]))
b = Resource("b", opts=ResourceOptions(depends_on=[c]))
c = Resource("c")  # No dependency on a
```

Error:
```
error: resource still has dependencies
```
Solutions:
- Delete dependent resources first

- Use targeted destroy:

  ```bash
  uv run pulumi destroy --target dependent-resource
  uv run pulumi destroy --target main-resource
  ```

- Force delete (dangerous):

  ```bash
  uv run pulumi state delete 'urn:pulumi:stack::project::Type::resource' --force
  ```

Error:
```
error: timeout while waiting for resource to reach running state
```
Solutions:
- Increase timeouts:

  ```python
  from pulumi import ResourceOptions, CustomTimeouts

  resource = Resource("name",
      opts=ResourceOptions(custom_timeouts=CustomTimeouts(
          create="30m",
          update="20m",
          delete="10m"
      ))
  )
  ```

- Check the resource manually:

  ```bash
  # For AWS resources
  aws ec2 describe-instances --instance-ids i-xxxxx
  ```

- Cancel and retry:

  ```bash
  uv run pulumi cancel
  uv run pulumi up
  ```

## Performance Problems

Symptoms: Deployments taking excessively long
Solutions:
- Increase parallelism:

  ```bash
  uv run pulumi up --parallel 20
  ```

- Split large stacks into smaller ones

- Use refresh less frequently:

  ```bash
  # Skip refresh on preview
  uv run pulumi preview --skip-refresh
  ```

- Enable performance logging:

  ```bash
  export PULUMI_DEBUG_PROMISE_LEAKS=true
  uv run pulumi up
  ```

Symptoms: Slow operations, large .pulumi/ directory
Solutions:
- Clean up deleted resources:

  ```bash
  uv run pulumi state delete 'urn:pulumi:stack::project::Type::old-resource' --yes
  ```

- Split into multiple stacks

- Export and re-import to compact the state:

  ```bash
  uv run pulumi stack export > stack.json
  uv run pulumi stack rm --force
  uv run pulumi stack init
  uv run pulumi stack import --file stack.json
  ```

Error:
```
JavaScript heap out of memory
```
Solutions:
- Increase Node.js memory:

  ```bash
  export NODE_OPTIONS="--max-old-space-size=4096"
  ```

- Reduce parallelism:

  ```bash
  uv run pulumi up --parallel 5
  ```

Enable verbose logging for debugging:

```bash
# Maximum verbosity
uv run pulumi up --logtostderr -v=9

# Save to file
uv run pulumi up --logtostderr -v=9 2>&1 | tee pulumi-debug.log
```

Check your Pulumi version:

```bash
pulumi version
```

Update if needed:
```bash
brew upgrade pulumi  # macOS
# or
curl -fsSL https://get.pulumi.com | sh
```

Helpful resources:

- Pulumi Slack: https://slack.pulumi.com
- GitHub Issues: https://github.com/pulumi/pulumi/issues
- Documentation: https://www.pulumi.com/docs/
- Examples: https://github.com/pulumi/examples
When reporting issues:
- Pulumi version (`pulumi version`)
- A minimal reproduction case
- Full error output with `-v=9`
- A stack export (without secrets)
- Provider versions

```bash
# Gather debug info
pulumi version > debug-info.txt
uv run pulumi stack export >> debug-info.txt
uv run pulumi up --logtostderr -v=9 2>&1 | tee pulumi-error.log
```
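Before gathering logs, it can help to confirm that the tools and environment variables this guide relies on are actually present. A minimal sketch — the `preflight` helper name, the tool list, and the variable list are assumptions drawn from this guide, not part of the Pulumi CLI:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report which of the tools and environment
# variables referenced in this guide are available.
preflight() {
    local tool var
    for tool in pulumi aws direnv op; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "found: $tool"
        else
            echo "missing: $tool"
        fi
    done
    for var in AWS_ACCESS_KEY_ID AWS_REGION PULUMI_CONFIG_PASSPHRASE; do
        if [ -n "${!var}" ]; then
            echo "set: $var"
        else
            echo "unset: $var"
        fi
    done
}

preflight
```

Attaching this output to `debug-info.txt` alongside the commands above gives maintainers a quick view of your environment.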