This guide provides practical examples for using dtctl to manage your Dynatrace environment. It covers configuration, common workflows, and all resource types with hands-on examples.
Note: This guide assumes dtctl is already installed. If you need to build or install dtctl, see INSTALLATION.md first.
- Configuration
- Workflows
- Dashboards & Notebooks
- DQL Queries
- Service Level Objectives (SLOs)
- Notifications
- Grail Buckets
- Lookup Tables
- OpenPipeline
- Settings API
- App Engine
- EdgeConnect
- Davis AI
- Live Debugger
- Extensions 2.0
- Output Formats
- Azure Monitoring
- GCP Monitoring (Preview)
- Tips & Tricks
- Troubleshooting
Set up your first Dynatrace environment:
SSO login is the easiest way to authenticate: it uses your Dynatrace SSO credentials, so no token management is needed:
dtctl auth login --context my-env --environment "https://abc12345.apps.dynatrace.com"
# Opens your browser for Dynatrace SSO login
# Tokens are stored securely and refreshed automatically
# Verify your configuration
dtctl doctor
To log out:
dtctl auth logout
If you prefer API tokens (e.g. for CI/CD or headless environments):
# Create a context with your environment details
dtctl config set-context my-env \
--environment "https://abc12345.apps.dynatrace.com" \
--token-ref my-token
# Store your platform token securely
dtctl config set-credentials my-token \
--token "dt0s16.XXXXXXXXXXXXXXXXXXXXXXXX"
# Verify your configuration
dtctl config view
Creating a Platform Token:
To create a platform token in Dynatrace:
- Navigate to Identity & Access Management > Access Tokens
- Select Generate new token and choose Platform token
- Give it a descriptive name (e.g., "dtctl-token")
- Add the required scopes based on what you'll manage (see Token Scopes)
- Copy the token immediately - it's only shown once!
For detailed instructions, see Dynatrace Platform Tokens documentation.
Required Token Scopes: See TOKEN_SCOPES.md for a complete list of scopes for each safety level and resource type. You can copy-paste scope lists directly from that document.
Manage multiple Dynatrace environments easily:
# Set up dev environment with unrestricted access
dtctl config set-context dev \
--environment "https://dev.apps.dynatrace.com" \
--token-ref dev-token \
--safety-level dangerously-unrestricted \
--description "Development sandbox"
dtctl config set-credentials dev-token \
--token "dt0s16.DEV_TOKEN_HERE"
# Set up prod environment with read-only safety
dtctl config set-context prod \
--environment "https://prod.apps.dynatrace.com" \
--token-ref prod-token \
--safety-level readonly \
--description "Production - read only"
dtctl config set-credentials prod-token \
--token "dt0s16.PROD_TOKEN_HERE"
# List all contexts (shows safety levels)
dtctl config get-contexts
# Switch between environments
dtctl config use-context dev
dtctl config use-context prod
# Or use the ctx shortcut:
dtctl ctx # List contexts
dtctl ctx dev # Switch to dev
dtctl ctx prod # Switch to prod
# Check current context
dtctl config current-context
# Delete a context you no longer need
dtctl config delete-context old-env
Use a different context without switching:
# Execute a command in prod while dev is active
dtctl get workflows --context prod
dtctl supports per-project configuration files for team collaboration and CI/CD workflows.
Use dtctl config init to generate a .dtctl.yaml template:
# Create .dtctl.yaml in current directory
dtctl config init
# Create with custom context name
dtctl config init --context staging
# Overwrite existing file
dtctl config init --force
This generates a template with environment variable placeholders:
# .dtctl.yaml - per-project configuration
apiVersion: dtctl.io/v1
kind: Config
current-context: production
contexts:
- name: production
context:
environment: ${DT_ENVIRONMENT_URL}
token-ref: my-token
safety-level: readwrite-all
description: Project environment
tokens:
- name: my-token
token: ${DT_API_TOKEN}
preferences:
output: table
Config files support ${VAR_NAME} syntax for environment variables:
contexts:
- name: ci
context:
environment: ${DT_ENVIRONMENT_URL} # Expanded from env var
token-ref: ci-token
tokens:
- name: ci-token
token: ${DT_API_TOKEN} # Expanded from env var
This allows teams to commit .dtctl.yaml files to repositories without secrets, while each developer or CI system provides tokens via environment variables.
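In a CI pipeline you can fail fast before dtctl runs if the variables the config references are missing. A plain-shell sketch (the `require_env` helper is hypothetical, not part of dtctl; the values are placeholders):

```shell
# Hypothetical helper: verify that the env vars referenced by .dtctl.yaml exist
require_env() {
  missing=0
  for var in "$@"; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
      echo "missing required variable: $var" >&2
      missing=1
    fi
  done
  return $missing
}

# Placeholder values; CI would inject the real ones
DT_ENVIRONMENT_URL="https://abc12345.apps.dynatrace.com"
DT_API_TOKEN="dt0s16.example"
require_env DT_ENVIRONMENT_URL DT_API_TOKEN && echo "environment is configured"
```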
Search Order:
1. `--config` flag (explicit path)
2. `.dtctl.yaml` in the current directory or any parent directory (walks up to the root)
3. Global config (`~/.config/dtctl/config`)
# In a project directory with .dtctl.yaml
cd my-project/
export DT_ENVIRONMENT_URL="https://abc12345.apps.dynatrace.com"
export DT_API_TOKEN="dt0s16.xxx"
dtctl get workflows # Uses .dtctl.yaml with expanded env vars
# Override with global config
dtctl --config ~/.config/dtctl/config get workflows
Safety levels provide client-side protection against accidental destructive operations:
| Level | Description |
|---|---|
| `readonly` | No modifications allowed |
| `readwrite-mine` | Modify own resources only |
| `readwrite-all` | Modify all resources (default) |
| `dangerously-unrestricted` | All operations including bucket deletion |
# Set safety level when creating a context
dtctl config set-context prod \
--environment "https://prod.apps.dynatrace.com" \
--token-ref prod-token \
--safety-level readonly
# View context details including safety level
dtctl config describe-context prod
Important: Safety levels are client-side only. For actual security, configure your API tokens with minimum required scopes. See Token Scopes for scope requirements and Context Safety Levels for details.
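Because safety levels are enforced client-side, some teams add their own guard rails in scripts as well. A purely illustrative sketch (the `guarded` function is hypothetical, not part of dtctl, and real protection still comes from token scopes):

```shell
# Hypothetical guard: block destructive verbs when the target context is prod
guarded() {
  ctx="$1"; verb="$2"; shift 2
  case "$verb" in
    delete|restore)
      if [ "$ctx" = "prod" ]; then
        echo "refusing '$verb' against context '$ctx'" >&2
        return 1
      fi ;;
  esac
  echo "would run: dtctl $verb $*"
}

guarded dev delete workflow workflow-123
guarded prod delete workflow workflow-123 || echo "blocked as expected"
```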
View information about the currently authenticated user:
# View current user info
dtctl auth whoami
# Output:
# User ID: 621321d-1231-dsad-652321829b50
# User Name: John Doe
# Email: john.doe@example.com
# Context: prod
# Environment: https://abc12345.apps.dynatrace.com
# Get just the user ID (useful for scripting)
dtctl auth whoami --id-only
# Output as JSON
dtctl auth whoami -o json
Note: The whoami command requires the app-engine:apps:run scope for full user details. If that scope is unavailable, it falls back to extracting the user ID from the JWT token.
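The JWT fallback works because a JWT's claims live in the base64url-encoded second segment of the token. A shell-only sketch of that decoding; the token below is made up (its payload is `{"sub":"user-42"}`), and real Dynatrace tokens and claim names may differ:

```shell
# A made-up token: header {"alg":"HS256"}, payload {"sub":"user-42"}, dummy signature
jwt='eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyLTQyIn0.c2ln'
payload=$(printf '%s' "$jwt" | cut -d. -f2)
# base64url -> base64: swap URL-safe characters, then restore '=' padding
payload=$(printf '%s' "$payload" | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d
echo
```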
dtctl supports custom command aliases to create shortcuts for frequently used commands. Aliases can be simple text replacements, parameterized templates, or shell commands.
Create shortcuts for common commands:
# Create a simple alias
dtctl alias set wf "get workflows"
# Use the alias
dtctl wf
# Expands to: dtctl get workflows
# List all aliases
dtctl alias list
# Delete an alias
dtctl alias delete wf
Use positional parameters $1-$9 for reusable command templates:
# Create an alias that takes a parameter
dtctl alias set logs-errors "query 'fetch logs | filter status=\$1 | limit 100'"
# Use with parameter
dtctl logs-errors ERROR
# Expands to: dtctl query 'fetch logs | filter status=ERROR | limit 100'
# Multiple parameters
dtctl alias set query-host "query 'fetch logs | filter host=\$1 | limit \$2'"
dtctl query-host server-01 50
# Expands to: dtctl query 'fetch logs | filter host=server-01 | limit 50'
Prefix aliases with ! to execute them through the system shell, enabling pipes, redirection, and external tools:
# Create a shell alias with jq for JSON processing
dtctl alias set wf-names "!dtctl get workflows -o json | jq -r '.workflows[].title'"
# Use the shell alias
dtctl wf-names
# Executes through shell: dtctl get workflows -o json | jq -r '.workflows[].title'
# Shell alias with grep
dtctl alias set errors "!dtctl query 'fetch logs' -o json | grep -i error"
dtctl errors
Share aliases with your team by exporting and importing them:
# Export all aliases to a file
dtctl alias export -f team-aliases.yaml
# Import aliases from a file
dtctl alias import -f team-aliases.yaml
# Merge imported aliases (skip conflicts)
dtctl alias import -f team-aliases.yaml --no-overwrite
Example alias file (team-aliases.yaml):
wf: get workflows
wfe: get workflow-executions
logs-error: query 'fetch logs | filter status=ERROR | limit 100'
top-errors: "!dtctl query 'fetch logs | filter status=ERROR' -o json | jq -r '.records[] | .message' | sort | uniq -c | sort -rn | head -10"
Aliases cannot shadow built-in commands, to prevent confusion:
# This will fail - 'get' is a built-in command
dtctl alias set get "query 'fetch logs'"
# Error: alias name "get" conflicts with built-in command
# Use a different name instead
dtctl alias set gl "query 'fetch logs'"
# Quick shortcuts for common operations
dtctl alias set w "get workflows"
dtctl alias set d "get dashboards"
dtctl alias set nb "get notebooks"
# Workflow shortcuts
dtctl alias set wf-run "exec workflow \$1 --wait"
dtctl alias set wf-logs "logs workflow-execution \$1 --follow"
# Query templates
dtctl alias set errors "query 'fetch logs | filter status=ERROR | limit \$1'"
dtctl alias set spans-by-trace "query 'fetch spans | filter trace_id=\$1'"
# Shell aliases for complex operations
dtctl alias set workflow-count "!dtctl get workflows -o json | jq '.workflows | length'"
dtctl alias set top-users "!dtctl query 'fetch logs' -o json | jq -r '.records[].user' | sort | uniq -c | sort -rn | head -10"
# Import team-shared aliases
dtctl alias import -f ~/.dtctl-team-aliases.yaml
Workflows automate tasks and integrate with Dynatrace monitoring.
# List all workflows
dtctl get workflows
# List in table format with more details
dtctl get workflows -o wide
# Get a specific workflow by ID
dtctl get workflow workflow-123
# View detailed information
dtctl describe workflow workflow-123
# Describe by name (with fuzzy matching)
dtctl describe workflow "My Workflow"
# Output as JSON for processing
dtctl get workflow workflow-123 -o json
Edit workflows directly in your preferred editor:
# Edit in YAML format (default)
dtctl edit workflow workflow-123
# Edit by name
dtctl edit workflow "My Workflow"
# Edit in JSON format
dtctl edit workflow workflow-123 --format=json
# Set your preferred editor
export EDITOR=vim
# or
dtctl config set preferences.editor vim
Create new workflows from YAML or JSON files:
# Create from a file
dtctl create workflow -f my-workflow.yaml
# Apply (create or update if exists)
dtctl apply -f my-workflow.yaml
Example workflow file (my-workflow.yaml):
title: Daily Health Check
description: Runs a health check every day at 9 AM
trigger:
schedule:
rule: "0 9 * * *"
timezone: "UTC"
tasks:
check_errors:
action: dynatrace.automations:run-javascript
input:
script: |
export default async function () {
console.log("Running health check...");
return { status: "ok" };
}
Run workflows on-demand:
# Execute a workflow
dtctl exec workflow workflow-123
# Execute with parameters
dtctl exec workflow workflow-123 \
--params environment=production \
--params severity=high
# Execute and wait for completion
dtctl exec workflow workflow-123 --wait
# Execute with custom timeout
dtctl exec workflow workflow-123 --wait --timeout 10m
# Execute, wait, and print each task's return value when done
dtctl exec workflow workflow-123 --wait --show-results
Monitor workflow executions:
# List all recent executions
dtctl get workflow-executions
# List executions for a specific workflow
dtctl get workflow-executions -w workflow-123
# Get details of a specific execution
dtctl describe workflow-execution exec-456
# or use short alias
dtctl describe wfe exec-456
# View execution logs
dtctl logs workflow-execution exec-456
# or
dtctl logs wfe exec-456
# Stream logs in real-time
dtctl logs wfe exec-456 --follow
# View logs for all tasks
dtctl logs wfe exec-456 --all
# View logs for a specific task
dtctl logs wfe exec-456 --task check_errors
Retrieve the structured return value of a specific task (distinct from log output):
# Get the return value of a task
dtctl get wfe-task-result exec-456 --task my_task
dtctl get wfe-task-result exec-456 -t my_task
# Output as JSON or YAML
dtctl get wfe-task-result exec-456 --task my_task -o json
dtctl get wfe-task-result exec-456 --task my_task -o yaml
Monitor workflows in real-time with watch mode:
# Watch all workflows for changes
dtctl get workflows --watch
# Watch with custom polling interval (default: 2s)
dtctl get workflows --watch --interval 5s
# Watch specific workflow
dtctl get workflow my-workflow --watch
# Watch only your workflows
dtctl get workflows --mine --watch
# Only show changes (skip initial state)
dtctl get workflows --watch --watch-only
Watch mode features:
- `+` (green) prefix for newly added workflows
- `~` (yellow) prefix for modified workflows
- `-` (red) prefix for deleted workflows
- Graceful shutdown with Ctrl+C
- Automatic retry on transient errors
# Delete by ID
dtctl delete workflow workflow-123
# Delete by name (prompts for confirmation)
dtctl delete workflow "Old Workflow"
# Skip confirmation prompt
dtctl delete workflow "Old Workflow" -y
View and restore previous versions of workflows:
# View version history
dtctl history workflow workflow-123
dtctl history workflow "My Workflow"
# Output as JSON
dtctl history workflow workflow-123 -o json
Restore a workflow to a previous version:
# Restore to a specific version
dtctl restore workflow workflow-123 5
# Restore by name
dtctl restore workflow "My Workflow" 3
# Skip confirmation prompt
dtctl restore workflow "My Workflow" 3 --force
Dashboards provide visual monitoring views, while notebooks enable interactive data exploration.
# List all dashboards
dtctl get dashboards
# List all notebooks
dtctl get notebooks
# Filter by name
dtctl get dashboards --name "production"
dtctl get notebooks --name "analysis"
# List only your own dashboards/notebooks
dtctl get dashboards --mine
dtctl get notebooks --mine
# Combine filters
dtctl get dashboards --mine --name "production"
# Get a specific document by ID
dtctl get dashboard dash-123
dtctl get notebook nb-456
# Describe by name
dtctl describe dashboard "Production Overview"
dtctl describe notebook "Weekly Analysis"
# Edit a dashboard in YAML (default)
dtctl edit dashboard dash-123
# Edit by name
dtctl edit dashboard "Production Overview"
# Edit in JSON format
dtctl edit notebook nb-456 --format=json
Both create and apply work with dashboards and notebooks:
# Create a new dashboard (always creates new)
dtctl create dashboard -f dashboard.yaml
# Apply a dashboard (creates if new, updates if exists)
dtctl apply -f dashboard.yaml
# Both commands show tile count and URL:
# Dashboard "My Dashboard" (abc-123) created successfully [18 tiles]
# URL: https://env.apps.dynatrace.com/ui/apps/dynatrace.dashboards/dashboard/abc-123
When to use which:
- `create`: Use when you want to create a new resource. Fails if the ID already exists.
- `apply`: Use for declarative management. Creates new resources or updates existing ones based on the ID in the file.
Both commands validate the document structure and warn about issues:
# If structure is wrong, you'll see warnings:
# Warning: dashboard content has no 'tiles' field - dashboard may be empty
Export a dashboard and re-import it (works directly without modifications):
# Export existing dashboard
dtctl get dashboard abc-123 -o yaml > dashboard.yaml
# Re-apply to same or different environment
dtctl apply -f dashboard.yaml
# dtctl automatically handles the content structure
Example dashboard (dashboard.yaml):
type: dashboard
name: Production Monitoring
content:
tiles:
- name: Response Time
tileType: DATA_EXPLORER
queries:
- query: "timeseries avg(dt.service.request.response_time)"
Share dashboards and notebooks with users and groups:
# Share with a user (read access by default)
dtctl share dashboard dash-123 --user user@example.com
# Share with write access
dtctl share dashboard dash-123 \
--user user@example.com \
--access read-write
# Share with a group
dtctl share notebook nb-456 --group "Platform Team"
# View sharing information
dtctl describe dashboard dash-123
# Remove user access
dtctl unshare dashboard dash-123 --user user@example.com
# Remove all shares
dtctl unshare dashboard dash-123 --all
View and restore previous versions of dashboards and notebooks:
# View version history
dtctl history dashboard dash-123
dtctl history notebook nb-456
# View history by name
dtctl history dashboard "Production Overview"
dtctl history notebook "Weekly Analysis"
# Output as JSON
dtctl history dashboard dash-123 -o json
Restore a document to a previous snapshot version:
# Restore to a specific version
dtctl restore dashboard dash-123 5
dtctl restore notebook nb-456 3
# Restore by name
dtctl restore dashboard "Production Overview" 5
# Skip confirmation prompt
dtctl restore notebook "Weekly Analysis" 3 --force
Notes:
- Snapshots are created when documents are updated with the `create-snapshot` option
- Maximum 50 snapshots per document (oldest auto-deleted when exceeded)
- Snapshots auto-delete after 30 days
- Only the document owner can restore snapshots
- Restoring automatically creates a snapshot of the current state before restoring
Monitor dashboards and notebooks for changes in real-time:
# Watch all dashboards
dtctl get dashboards --watch
# Watch your own dashboards
dtctl get dashboards --mine --watch
# Watch notebooks with custom interval
dtctl get notebooks --watch --interval 10s
# Watch with name filter
dtctl get dashboards --name "production" --watch
# Delete a dashboard (moves to trash)
dtctl delete dashboard dash-123
# Delete by name
dtctl delete notebook "Old Analysis"
# Skip confirmation
dtctl delete dashboard dash-123 -y
Note: Deleted documents are moved to trash and kept for 30 days before permanent deletion. See Trash Management below.
Deleted dashboards and notebooks are moved to trash and kept for 30 days before permanent deletion. You can list, view, restore, or permanently delete items in trash.
# List all trashed documents
dtctl get trash
# List only trashed dashboards
dtctl get trash --type dashboard
# List only trashed notebooks
dtctl get trash --type notebook
# List only documents you deleted
dtctl get trash --mine
# Filter by deletion date
dtctl get trash --deleted-after 2024-01-01
dtctl get trash --deleted-before 2024-12-31
# Output in different formats
dtctl get trash -o json
dtctl get trash -o yaml
Example output:
ID TYPE NAME DELETED BY DELETED AT EXPIRES IN
abc123-def456-ghi789-jkl012-mno345 dashboard Prod Overview john.doe 2024-01-15 10:30:00 29 days
xyz987-uvw654-rst321-opq098-lmn765 notebook Debug Session jane.smith 2024-01-20 14:45:00 24 days
# Get detailed information about a trashed document
dtctl describe trash abc-123
# Shows: ID, name, type, owner, deleted by, deletion date, expiration date, size, tags, etc.
# Restore a single document
dtctl restore trash abc-123
# Restore multiple documents
dtctl restore trash abc-123 def-456 ghi-789
# Restore with a new name (to avoid conflicts)
dtctl restore trash abc-123 --new-name "Recovered Dashboard"
# Force restore (overwrite if name conflict exists)
dtctl restore trash abc-123 --force
WARNING: Permanent deletion cannot be undone!
# Permanently delete a single document
dtctl delete trash abc-123 --permanent
# Permanently delete multiple documents
dtctl delete trash abc-123 def-456 --permanent -y
# The --permanent flag is required to prevent accidental deletion
Notes:
- Documents remain in trash for 30 days before automatic permanent deletion
- You can only restore documents that haven't expired yet
- Trash operations require appropriate permissions (document owner or admin)
- Use the `--deleted-by` flag to filter by who deleted the documents
Execute Dynatrace Query Language (DQL) queries to fetch logs, metrics, events, and more.
# Execute an inline query
dtctl query "fetch logs | limit 10"
# Filter logs by status
dtctl query 'fetch logs | filter status == "ERROR" | limit 100'
# Query recent events
dtctl query 'fetch events | filter event.type == "CUSTOM_ALERT" | limit 50'
# Summarize data
dtctl query "fetch logs | summarize count(), by: {status} | sort count desc"
Store complex queries in files for reusability:
# Execute from file
dtctl query -f queries/errors.dql
# Save output to file
dtctl query -f queries/errors.dql -o json > results.json
For queries with special characters like quotes, use stdin to avoid shell escaping issues:
# Heredoc syntax (recommended for complex queries)
dtctl query -f - -o json <<'EOF'
metrics
| filter startsWith(metric.key, "dt")
| summarize count(), by: {metric.key}
| fieldsKeep metric.key
| limit 10
EOF
# Pipe from a file
cat query.dql | dtctl query -o json
# Pipe from echo (simple cases)
echo 'fetch logs | filter status="ERROR"' | dtctl query -o table
Tip: Using single-quoted heredocs (<<'EOF') preserves all special characters exactly as written; no escaping is needed.
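To see why the quoted delimiter matters, here is a shell-only demonstration with cat standing in for dtctl: the quoted heredoc passes the DQL through verbatim, while the unquoted one lets the shell expand `$status` first.

```shell
status=EXPANDED
# Quoted delimiter: nothing is expanded; the query reaches the command verbatim
cat <<'EOF'
fetch logs | filter status="$status"
EOF
# Unquoted delimiter: the shell substitutes $status before cat ever sees it
cat <<EOF
fetch logs | filter status="$status"
EOF
```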
PowerShell has different quoting rules that can cause problems with inline DQL queries. Here's how to handle them:
# ❌ FAILS - PowerShell removes inner double quotes
dtctl query 'fetch logs, bucket:{"custom-logs"} | filter contains(host.name, "api")'
# Error: MANDATORY_PARAMETER_HAS_TO_BE_CONSTANT
# PowerShell passes: bucket:{custom-logs} (missing quotes around "custom-logs")
# ❌ FAILS - DQL doesn't support single quotes
dtctl query "fetch logs, bucket:{'custom-logs'} | filter contains(host.name, 'api')"
# Error: PARSE_ERROR_SINGLE_QUOTES
# Single quotes are not supported. Please use double quotes for strings.
PowerShell's here-string syntax (@'...'@) preserves all characters exactly:
# ✅ WORKS - Use @'...'@ for verbatim strings
dtctl query -f - -o json @'
fetch logs, bucket:{"custom-logs"}
| filter contains(host.name, "api")
| limit 10
'@
# ✅ More complex example with multiple quotes
dtctl query -f - -o json @'
fetch logs, bucket:{"application-logs"}
| filter contains(log.source, "backend")
| filter status = "ERROR"
| summarize count(), by:{log.source}
| limit 100
'@
# ✅ Works with any DQL query structure
dtctl query -f - -o csv @'
timeseries avg(dt.host.cpu.usage), by:{dt.entity.host}
| filter avg > 80
'@
Save your query to a file and reference it:
# Save query to file
@"
fetch logs, bucket:{"custom-logs"}
| filter contains(host.name, "api")
| limit 10
"@ | Out-File -Encoding UTF8 query.dql
# Execute from file
dtctl query -f query.dql -o json
# Read from file and pipe
Get-Content query.dql | dtctl query -o json
# Or use cat alias
cat query.dql | dtctl query -o json
| Shell | Heredoc Syntax | Example |
|---|---|---|
| Bash/Zsh | `<<'EOF'` | `dtctl query -f - <<'EOF'`<br>`fetch logs`<br>`EOF` |
| PowerShell | `@'...'@` | `dtctl query -f - @'`<br>`fetch logs`<br>`'@` |
Why This Matters:
- DQL requires double quotes for strings (e.g., `"custom-logs"`, `"ERROR"`, `"api"`)
- PowerShell's quote parsing can strip or convert these quotes
- Using `-f -` (stdin) with here-strings bypasses shell quote parsing entirely
Example query file (queries/errors.dql):
fetch logs
| filter status == "ERROR"
| filter timestamp > now() - 1h
| summarize count(), by: {log.source}
| sort count desc
| limit 10
Use templates with variables for flexible queries:
# Query with variable substitution
dtctl query -f queries/logs-by-host.dql --set host=my-server
# Override multiple variables
dtctl query -f queries/logs-by-host.dql \
--set host=my-server \
--set timerange=24h \
--set limit=500Example template (queries/logs-by-host.dql):
fetch logs
| filter host = "{{.host}}"
| filter timestamp > now() - {{.timerange | default "1h"}}
| limit {{.limit | default 100}}
Template syntax:
- `{{.variable}}` - Reference a variable
- `{{.variable | default "value"}}` - Provide a default value
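As an analogy for the `default` filter, plain shell parameter expansion behaves the same way. This sketch only mirrors the idea (dtctl's templates use Go template syntax, not shell):

```shell
# Shell parameter expansion with defaults, mirroring {{.var | default "..."}}
host="my-server"
timerange="${timerange:-1h}"   # falls back to "1h" when unset
limit="${limit:-100}"          # falls back to "100" when unset
echo "fetch logs | filter host = \"$host\" | filter timestamp > now() - $timerange | limit $limit"
```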
# Table format (default, human-readable)
dtctl query "fetch logs | limit 5" -o table
# JSON format (for processing)
dtctl query "fetch logs | limit 5" -o json
# YAML format
dtctl query "fetch logs | limit 5" -o yaml
# CSV format (for spreadsheets and data export)
dtctl query "fetch logs | limit 5" -o csv
# Export to CSV file
dtctl query "fetch logs" -o csv > logs.csv
By default, DQL queries are limited to 1000 records. Use the query limit flags to download larger datasets:
# Increase result limit to 5000 records
dtctl query "fetch logs" --max-result-records 5000 -o csv > logs.csv
# Download up to 15000 records
dtctl query "fetch logs | limit 15000" --max-result-records 15000 -o csv > logs.csv
# Set result size limit in bytes (100MB)
dtctl query "fetch logs" \
--max-result-records 10000 \
--max-result-bytes 104857600 \
-o csv > large_export.csv
# Set scan limit in gigabytes
dtctl query "fetch logs" \
--max-result-records 10000 \
--default-scan-limit-gbytes 5.0 \
-o csv > large_export.csv
# Combine with filters for targeted exports
dtctl query 'fetch logs | filter status == "ERROR"' \
--max-result-records 5000 \
-o csv > error_logs.csv
Query Limit Parameters:
- `--max-result-records`: Maximum number of result records to return (default: 1000)
- `--max-result-bytes`: Maximum result size in bytes (default: varies by environment)
- `--default-scan-limit-gbytes`: Scan limit in gigabytes (default: varies by environment)
Query Execution Parameters:
- `--default-sampling-ratio`: Sampling ratio for query results (normalized to a power of 10 ≤ 100000)
- `--fetch-timeout-seconds`: Time limit for fetching data in seconds
- `--enable-preview`: Request preview results if available within the timeout
- `--enforce-query-consumption-limit`: Enforce the query consumption limit
- `--include-types`: Include type information in query results
Timeframe Parameters:
- `--default-timeframe-start`: Query timeframe start timestamp (ISO-8601/RFC3339, e.g., '2022-04-20T12:10:04.123Z')
- `--default-timeframe-end`: Query timeframe end timestamp (ISO-8601/RFC3339, e.g., '2022-04-20T13:10:04.123Z')
Localization Parameters:
- `--locale`: Query locale (e.g., 'en_US', 'de_DE')
- `--timezone`: Query timezone (e.g., 'UTC', 'Europe/Paris', 'America/New_York')
Metadata Parameters:
- `--metadata`, `-M`: Include query execution metadata in output. Use bare `--metadata` for all fields, or select specific fields with `--metadata=field1,field2`. Valid fields: `analysisTimeframe`, `canonicalQuery`, `contributions`, `dqlVersion`, `executionTimeMilliseconds`, `locale`, `query`, `queryId`, `sampled`, `scannedBytes`, `scannedDataPoints`, `scannedRecords`, `timezone`
- `--include-contributions`: Include bucket contribution details in metadata (requires API support)
Note: All parameters are sent in the DQL query request body and work with both immediate responses and long-running queries that require polling.
Advanced Query Examples:
# Query with specific timeframe
dtctl query "fetch logs" \
--default-timeframe-start "2024-01-01T00:00:00Z" \
--default-timeframe-end "2024-01-02T00:00:00Z" \
-o csv
# Query with timezone and locale
dtctl query "fetch logs" \
--timezone "Europe/Paris" \
--locale "fr_FR" \
-o json
# Query with sampling for large datasets
dtctl query "fetch logs" \
--default-sampling-ratio 10 \
--max-result-records 10000 \
-o csv
# Query with preview mode (faster results)
dtctl query "fetch logs" \
--enable-preview \
-o table
# Query with type information included
dtctl query "fetch logs" \
--include-types \
-o json
Tip: Use CSV output with increased limits for:
- Exporting data for analysis in Excel or Google Sheets
- Creating backups of log data
- Feeding data into external analysis tools
- Generating reports from DQL query results
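Once exported, the CSV can be sliced with standard tools. A small awk sketch that counts rows per status; sample.csv stands in for real dtctl output, so the column layout here is only an assumption:

```shell
# sample.csv stands in for the output of: dtctl query "fetch logs" -o csv
cat > sample.csv <<'EOF'
timestamp,status,message
2024-01-01T00:00:00Z,ERROR,disk full
2024-01-01T00:01:00Z,WARN,slow query
2024-01-01T00:02:00Z,ERROR,timeout
EOF
# Count rows per status column (field 2), skipping the header line
awk -F, 'NR > 1 { count[$2]++ } END { for (s in count) print s, count[s] }' sample.csv | sort
```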
Monitor DQL query results in real-time with live mode:
# Live mode with periodic updates (default: 60s)
dtctl query 'fetch logs | filter status == "ERROR"' --live
# Live mode with custom interval
dtctl query "fetch logs" --live --interval 5s
# Live mode with charts
dtctl query "timeseries avg(dt.host.cpu.usage)" -o chart --live --interval 10s
DQL queries may return warnings (e.g., scan limits reached, results truncated). These warnings are printed to stderr, keeping stdout clean for data processing.
# Warnings appear on stderr, data on stdout
dtctl query "fetch spans, from: -10d | summarize count()"
# Warning: Your execution was stopped after 500 gigabytes of data were scanned...
# map[count():194414758]
# Pipe data normally - warnings don't interfere
dtctl query "fetch logs | limit 100" -o json | jq '.records[0]'
# Suppress warnings entirely
dtctl query "fetch spans | summarize count()" 2>/dev/null
# Save data to file (warnings still visible in terminal)
dtctl query "fetch logs" -o csv > logs.csv
# Save data and warnings separately
dtctl query "fetch logs" -o json > data.json 2> warnings.txt
# Discard warnings, save only data
dtctl query "fetch logs" -o csv 2>/dev/null > clean_data.csv
Common warnings:
- SCAN_LIMIT_GBYTES: Query stopped after scanning the default limit. Use `--default-scan-limit-gbytes` to adjust.
- RESULT_TRUNCATED: Results exceeded the limit. Use `--max-result-records` to increase.
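The redirection patterns above can be tried anywhere by standing in for dtctl with a tiny function that writes records to stdout and a warning to stderr (`fake_query` is, of course, hypothetical):

```shell
# fake_query mimics dtctl's behavior: data on stdout, warnings on stderr
fake_query() {
  echo '{"records":[{"status":"ERROR"}]}'
  echo 'Warning: results truncated' >&2
}

fake_query > data.json 2> warnings.txt
cat data.json       # contains only the records
cat warnings.txt    # contains only the warning
```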
Verify DQL query syntax without executing it. This is useful for:
- Testing queries in CI/CD pipelines
- Pre-commit hooks to validate query files
- Checking query correctness before execution
- Getting the canonical (normalized) representation of queries
# Verify inline query
dtctl verify query "fetch logs | limit 10"
# Verify query from file
dtctl verify query -f query.dql
# Read from stdin (recommended for complex queries)
dtctl verify query -f - <<'EOF'
fetch logs | filter status == "ERROR"
EOF
# Pipe query from file
cat query.dql | dtctl verify query
# Get canonical query representation (normalized format)
dtctl verify query "fetch logs" --canonical
# Verify with specific timezone and locale
dtctl verify query "fetch logs" --timezone "Europe/Paris" --locale "fr_FR"
# Get structured output (JSON or YAML)
dtctl verify query "fetch logs" -o json
dtctl verify query "fetch logs" -o yaml
# Fail on warnings (strict validation for CI/CD)
dtctl verify query -f query.dql --fail-on-warn
The verify command returns different exit codes based on the result:
| Exit Code | Meaning |
|---|---|
| 0 | Query is valid |
| 1 | Query is invalid or has errors (or warnings with --fail-on-warn) |
| 2 | Authentication/permission error |
| 3 | Network/server error |
# Check exit code in scripts
if dtctl verify query -f query.dql --fail-on-warn; then
echo "Query is valid"
else
echo "Query validation failed"
exit 1
fi
# Validate all queries in a directory
for file in queries/*.dql; do
echo "Verifying $file..."
dtctl verify query -f "$file" --fail-on-warn || exit 1
done
# Pre-commit hook: Verify staged query files
git diff --cached --name-only --diff-filter=ACM "*.dql" | \
xargs -I {} dtctl verify query -f {} --fail-on-warn
# GitHub Actions / CI pipeline
- name: Validate DQL queries
run: |
for file in queries/*.dql; do
dtctl verify query -f "$file" --fail-on-warn || exit 1
done
Verify queries with template variables before execution:
# Verify template query
dtctl verify query -f template.dql --set env=prod --set timerange=1h
# If valid, execute it
if dtctl verify query -f template.dql --set env=prod 2>/dev/null; then
dtctl query -f template.dql --set env=prod -o csv > results.csv
fi
# Verify query using here-strings
dtctl verify query -f - @'
fetch logs, bucket:{"custom-logs"} | filter contains(host.name, "api")
'@
# Validate all queries in a directory
Get-ChildItem queries/*.dql | ForEach-Object {
Write-Host "Verifying $_..."
dtctl verify query -f $_.FullName --fail-on-warn
if ($LASTEXITCODE -ne 0) { exit 1 }
}
Get the normalized representation of your query:
# Get canonical query
dtctl verify query "fetch logs" --canonical
# Extract canonical query with jq
dtctl verify query "fetch logs" --canonical -o json | jq -r '.canonicalQuery'
# Compare original vs canonical
echo "Original:"
cat query.dql
echo ""
echo "Canonical:"
dtctl verify query -f query.dql --canonical 2>&1 | grep -A 999 "Canonical Query:"
SLOs define and track service reliability targets.
# List all SLOs
dtctl get slos
# Filter by name
dtctl get slos --filter 'name~production'
# Get a specific SLO
dtctl get slo slo-123
# Detailed view
dtctl describe slo slo-123
Use templates to quickly create SLOs:
# List available templates
dtctl get slo-templates
# View template details
dtctl describe slo-template template-456
# Create SLO from template
dtctl create slo \
--from-template template-456 \
--name "API Availability" \
--target 99.9
# Create from file
dtctl create slo -f slo-definition.yaml
# Apply (create or update)
dtctl apply -f slo-definition.yaml
Example SLO (slo-definition.yaml):
name: API Response Time
description: 95% of requests should complete within 500ms
target: 95.0
warning: 97.0
evaluationType: AGGREGATE
filter: type("SERVICE") AND entityName.equals("my-api")
metricExpression: "(100)*(builtin:service.response.time:splitBy():sort(value(avg,descending)):limit(10):avg:partition(\"latency\",value(\"good\",lt(500))))/(builtin:service.requestCount.total:splitBy():sort(value(avg,descending)):limit(10):avg)"

Evaluate SLOs to get current status, values, and error budget for each criterion:
# Evaluate SLO performance
dtctl exec slo slo-123
# Evaluate with custom timeout (default: 30 seconds)
dtctl exec slo slo-123 --timeout 60
# Output as JSON for analysis
dtctl exec slo slo-123 -o json
# Extract error budget from results
dtctl exec slo slo-123 -o json | jq '.evaluationResults[].errorBudget'
# View in table format (default)
dtctl exec slo slo-123

Monitor SLO status changes in real-time:
# Watch all SLOs
dtctl get slos --watch
# Watch with custom interval
dtctl get slos --watch --interval 30s
# Watch with filter
dtctl get slos --filter 'name~production' --watch

# Delete an SLO
dtctl delete slo slo-123
# Skip confirmation
dtctl delete slo slo-123 -y

View and manage event notifications.
# List all notifications
dtctl get notifications
# Filter by type
dtctl get notifications --type EMAIL
# Get a specific notification
dtctl get notification notif-123
# Detailed view
dtctl describe notification notif-123

Monitor notifications in real-time:
# Watch all notifications
dtctl get notifications --watch
# Watch specific notification type
dtctl get notifications --type EMAIL --watch

# Delete a notification
dtctl delete notification notif-123

Grail buckets provide scalable log and event storage.
# List all buckets
dtctl get buckets
# Get a specific bucket
dtctl get bucket logs-production
# Detailed view with configuration
dtctl describe bucket logs-production

# Create a bucket from file
dtctl create bucket -f bucket-config.yaml
# Apply (create or update)
dtctl apply -f bucket-config.yaml

Example bucket configuration (bucket-config.yaml):
bucketName: logs-production
displayName: Production Logs
table: logs
retentionDays: 35
status: active

Monitor bucket changes in real-time:
# Watch all buckets
dtctl get buckets --watch
# Watch with custom interval
dtctl get buckets --watch --interval 10s

# Delete a bucket
dtctl delete bucket logs-staging
# Skip confirmation
dtctl delete bucket logs-staging -y

Lookup tables enable data enrichment in DQL queries by mapping key values to additional information. They're stored in Grail and can be referenced in queries to add context like mapping error codes to descriptions, IPs to locations, or IDs to human-readable names.
# List all lookup tables
dtctl get lookups
# Get a specific lookup (shows metadata + 10 row preview)
dtctl get lookup /lookups/production/error_codes
# View detailed information
dtctl describe lookup /lookups/production/error_codes

The easiest way to create a lookup table is from a CSV file. dtctl automatically detects the CSV structure:
# Create from CSV (auto-detects headers and format)
dtctl create lookup -f error_codes.csv \
--path /lookups/production/error_codes \
--display-name "Error Code Mappings" \
--description "Maps error codes to descriptions and severity" \
--lookup-field code
# Output:
# ✓ Created lookup table: /lookups/production/error_codes
# Records: 150
# File Size: 12,458 bytes
# Discarded Duplicates: 0

Example CSV file (error_codes.csv):
code,message,severity
E001,Connection timeout,high
E002,Invalid credentials,critical
E003,Resource not found,medium
E004,Rate limit exceeded,low

For non-CSV formats or custom delimiters, specify a parse pattern:
# Pipe-delimited file
dtctl create lookup -f data.txt \
--path /lookups/custom/pipe_data \
--parse-pattern "LD:id '|' LD:name '|' LD:value" \
--lookup-field id \
--skip-records 1
# Tab-delimited file
dtctl create lookup -f data.tsv \
--path /lookups/custom/tab_data \
--parse-pattern "LD:col1 '\t' LD:col2 '\t' LD:col3" \
--lookup-field col1 \
--skip-records 1

Parse Pattern Syntax:
- LD:columnName - Define a column
- ',' - Comma separator (single quotes required)
- '\t' - Tab separator
- '|' - Pipe separator
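Before uploading, it can help to sanity-check a delimited file locally. The awk one-liner below is only a rough local approximation of the pattern "LD:id '|' LD:name '|' LD:value" with --skip-records 1 (it is not dtctl's parser):

```shell
# Create a small pipe-delimited sample
cat > /tmp/pipe_sample.txt <<'EOF'
id|name|value
a1|alpha|10
a2|beta|20
EOF

# Skip the header row (like --skip-records 1) and print the parsed columns
awk -F'|' 'NR > 1 { printf "id=%s name=%s value=%s\n", $1, $2, $3 }' /tmp/pipe_sample.txt
```

If the columns come out misaligned here, the parse pattern's delimiter is likely wrong too.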
To update an existing lookup table, you need to delete it first and then recreate it:
# Delete the existing lookup table
dtctl delete lookup /lookups/production/error_codes -y
# Create with new data
dtctl create lookup -f updated_codes.csv \
--path /lookups/production/error_codes \
--lookup-field codeNote: Updates completely replace the existing lookup table data.
Once created, use lookup tables to enrich your query results:
# Simple lookup join
dtctl query "
fetch logs
| filter status = 'ERROR'
| lookup [
fetch dt.system.files
| load '/lookups/production/error_codes'
], sourceField:error_code, lookupField:code
| fields timestamp, error_code, message, severity
| limit 100
"
# Enrich host data with location info
dtctl query "
fetch dt.entity.host
| lookup [
load '/lookups/infrastructure/host_locations'
], sourceField:host.name, lookupField:hostname
| fields host.name, datacenter, region, cost_center
"
# Map user IDs to names
dtctl query "
fetch logs
| filter log.source = 'api'
| lookup [
load '/lookups/users/directory'
], sourceField:user_id, lookupField:id, fields:{name, email, department}
| summarize count(), by:{name, department}
"Create a lookup table for error codes:
# Create error_codes.csv
cat > error_codes.csv <<EOF
code,message,severity,documentation_url
E001,Connection timeout,high,https://docs.example.com/errors/e001
E002,Invalid credentials,critical,https://docs.example.com/errors/e002
E003,Resource not found,medium,https://docs.example.com/errors/e003
E004,Rate limit exceeded,low,https://docs.example.com/errors/e004
E005,Internal server error,critical,https://docs.example.com/errors/e005
EOF
# Upload to Dynatrace
dtctl create lookup -f error_codes.csv \
--path /lookups/monitoring/error_codes \
--display-name "Application Error Codes" \
--lookup-field code
# Use in query
dtctl query "
fetch logs
| filter status = 'ERROR'
| lookup [load '/lookups/monitoring/error_codes'],
sourceField:error_code, lookupField:code
| fields timestamp, error_code, message, severity, documentation_url
| limit 50
"Map IP addresses to geographic locations:
# Create ip_locations.csv
cat > ip_locations.csv <<EOF
ip_address,city,country,datacenter
10.0.1.50,New York,USA,DC-US-EAST-1
10.0.2.50,London,UK,DC-EU-WEST-1
10.0.3.50,Singapore,SG,DC-APAC-1
192.168.1.100,Frankfurt,Germany,DC-EU-CENTRAL-1
EOF
# Upload
dtctl create lookup -f ip_locations.csv \
--path /lookups/infrastructure/ip_locations \
--display-name "IP to Location Mapping" \
--lookup-field ip_address
# Use in query to geo-locate traffic
dtctl query "
fetch logs
| filter log.source = 'nginx'
| lookup [load '/lookups/infrastructure/ip_locations'],
sourceField:client_ip, lookupField:ip_address
| summarize request_count=count(), by:{city, country, datacenter}
| sort request_count desc
"Map service identifiers to team ownership:
# Create service_owners.csv
cat > service_owners.csv <<EOF
service_id,service_name,team,team_email,slack_channel
svc-001,payment-api,Payments,payments@example.com,#team-payments
svc-002,user-service,Identity,identity@example.com,#team-identity
svc-003,order-processor,Fulfillment,fulfillment@example.com,#team-fulfillment
svc-004,notification-service,Platform,platform@example.com,#team-platform
EOF
# Upload
dtctl create lookup -f service_owners.csv \
--path /lookups/services/ownership \
--display-name "Service Ownership" \
--lookup-field service_id
# Find errors by team
dtctl query "
fetch logs
| filter status = 'ERROR'
| lookup [load '/lookups/services/ownership'],
sourceField:service, lookupField:service_id
| summarize error_count=count(), by:{team, team_email, slack_channel}
| sort error_count desc
"# Create country_codes.csv
cat > country_codes.csv <<EOF
code,name,continent,currency
US,United States,North America,USD
GB,United Kingdom,Europe,GBP
DE,Germany,Europe,EUR
JP,Japan,Asia,JPY
AU,Australia,Oceania,AUD
BR,Brazil,South America,BRL
IN,India,Asia,INR
EOF
# Upload
dtctl create lookup -f country_codes.csv \
--path /lookups/reference/countries \
--display-name "Country Reference Data" \
--lookup-field code
# Enrich user analytics
dtctl query "
fetch logs
| filter log.source = 'analytics'
| lookup [load '/lookups/reference/countries'],
sourceField:country_code, lookupField:code,
fields:{name, continent, currency}
| summarize users=countDistinct(user_id), by:{name, continent}
| sort users desc
"# Delete a lookup table
dtctl delete lookup /lookups/production/old_data
# Skip confirmation
dtctl delete lookup /lookups/staging/test_data -y

Lookup table paths must follow these rules:
- Must start with /lookups/
- Only alphanumeric characters, hyphens (-), underscores (_), dots (.), and slashes (/)
- Must end with an alphanumeric character
- Maximum 500 characters
- Must include a category segment between /lookups/ and the name, i.e. at least three slashes in total (e.g., /lookups/category/name)
Good paths:
- /lookups/production/error_codes
- /lookups/infrastructure/host-locations
- /lookups/reference/country.codes

Invalid paths:
- /data/lookup - Must start with /lookups/
- /lookups/test/ - Cannot end with a slash
- /lookups/data@prod - Invalid character @
- /lookups/name - Missing the category segment (e.g., /lookups/category/name)
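These rules can be expressed as a local pre-check before calling dtctl. A sketch (this is our interpretation of the rules, and only a client-side convenience; the server remains authoritative):

```shell
# Validate a lookup table path against the documented rules
validate_lookup_path() {
  local path="$1"
  [ "${#path}" -le 500 ] || return 1                 # maximum 500 characters
  [[ "$path" == /lookups/* ]] || return 1            # required prefix
  [[ "$path" =~ ^[A-Za-z0-9./_-]+$ ]] || return 1    # allowed characters only
  [[ "$path" =~ [A-Za-z0-9]$ ]] || return 1          # must end alphanumeric
  local slashes="${path//[!\/]/}"                    # keep only the slashes
  [ "${#slashes}" -ge 3 ]                            # category segment present
}

# Examples:
# validate_lookup_path /lookups/production/error_codes   # valid
# validate_lookup_path /lookups/data@prod                # invalid
```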
1. Organize with meaningful paths:
/lookups/production/... # Production data
/lookups/staging/... # Staging/test data
/lookups/reference/... # Static reference data
/lookups/infrastructure/... # Infrastructure mappings
/lookups/applications/... # Application-specific data

2. Use descriptive display names and descriptions:
dtctl create lookup -f data.csv \
--path /lookups/prod/error_codes \
--display-name "Production Error Code Mappings" \
--description "Maps application error codes to user-friendly messages and severity levels. Updated weekly." \
--lookup-field code

3. Export for backup:
# Export lookup metadata and data
dtctl get lookup /lookups/production/error_codes -o yaml > backup.yaml
# List all lookups for documentation
dtctl get lookups -o csv > lookup_inventory.csv

4. Version your source CSV files:
# Keep CSV files in version control
git add lookups/error_codes.csv
git commit -m "Update error code E005 description"
# Apply from repository (delete first if it exists)
dtctl delete lookup /lookups/production/error_codes -y 2>/dev/null || true
dtctl create lookup -f lookups/error_codes.csv \
--path /lookups/production/error_codes \
--lookup-field code

5. Test before production:
# Upload to staging first
dtctl create lookup -f new_data.csv \
--path /lookups/staging/test_lookup \
--lookup-field id
# Test with queries
dtctl query "fetch logs | lookup [load '/lookups/staging/test_lookup'], sourceField:id, lookupField:key"
# Promote to production (delete first if exists)
dtctl delete lookup /lookups/production/live_lookup -y 2>/dev/null || true
dtctl create lookup -f new_data.csv \
--path /lookups/production/live_lookup \
--lookup-field id

For lookup table management: storage:files:read, storage:files:write, storage:files:delete
See TOKEN_SCOPES.md for complete scope reference.
OpenPipeline processes and routes observability data. As of September 2025, OpenPipeline configurations have been migrated from the direct API to the Settings API v2 for better access control and configuration management.
Important: The direct OpenPipeline commands (dtctl get openpipelines, dtctl describe openpipeline) have been removed. Use the Settings API instead to manage OpenPipeline configurations.
# List OpenPipeline schemas
dtctl get settings-schemas | grep openpipeline
# View specific schema details
dtctl describe settings-schema builtin:openpipeline.logs.pipelines
# List log pipelines
dtctl get settings --schema builtin:openpipeline.logs.pipelines
# Get a specific pipeline by object ID
dtctl get settings <object-id> --schema builtin:openpipeline.logs.pipelines

Note: See the Settings API section below for full details on managing OpenPipeline configurations.
The Settings API provides a unified way to manage Dynatrace configurations, including OpenPipeline pipelines, ingest sources, and routing configurations. Settings are organized by schemas and scopes.
Discover available configuration schemas:
# List all available schemas
dtctl get settings-schemas
# Filter for OpenPipeline schemas
dtctl get settings-schemas | grep openpipeline
# Get a specific schema definition
dtctl get settings-schema builtin:openpipeline.logs.pipelines
# View detailed schema information
dtctl describe settings-schema builtin:openpipeline.logs.pipelines
# Output as JSON for processing
dtctl get settings-schemas -o json

Common OpenPipeline Schemas:
- builtin:openpipeline.logs.pipelines - Log processing pipelines
- builtin:openpipeline.logs.ingest-sources - Log ingest sources
- builtin:openpipeline.logs.routing - Log routing configuration
- builtin:openpipeline.spans.pipelines - Trace span pipelines
- builtin:openpipeline.metrics.pipelines - Metric pipelines
- builtin:openpipeline.bizevents.pipelines - Business event pipelines
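As a sketch, the log-related schemas above can be exported in one loop for backup. This uses only the get command documented in this section; the helper name and file-naming scheme are ours:

```shell
# Export each OpenPipeline log schema's objects to a JSON backup file
backup_openpipeline() {
  local schema
  for schema in builtin:openpipeline.logs.pipelines \
                builtin:openpipeline.logs.ingest-sources \
                builtin:openpipeline.logs.routing; do
    # replace ':' and '.' so the schema id is filename-safe
    dtctl get settings --schema "$schema" -o json > "backup-${schema//[:.]/_}.json"
  done
}
```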
View configured settings for a schema:
# List all settings objects for a schema
dtctl get settings --schema builtin:openpipeline.logs.pipelines
# Filter by scope
dtctl get settings --schema builtin:openpipeline.logs.pipelines --scope environment
# Get a specific settings object
dtctl get settings aaaaaaaa-bbbb-cccc-dddd-000000000001
# Output as JSON
dtctl get settings --schema builtin:openpipeline.logs.pipelines -o json

Create new configuration objects from YAML or JSON files:
# Create a log pipeline
dtctl create settings -f log-pipeline.yaml \
--schema builtin:openpipeline.logs.pipelines \
--scope environment
# Create with template variables
dtctl create settings -f pipeline.yaml \
--schema builtin:openpipeline.logs.pipelines \
--scope environment \
--set environment=production,retention=90
# Dry run to preview
dtctl create settings -f pipeline.yaml \
--schema builtin:openpipeline.logs.pipelines \
--scope environment \
--dry-run

Example pipeline file (log-pipeline.yaml):
customId: production-logs-pipeline
displayName: Production Log Processing Pipeline
processing:
- processor: fields-add
fields:
- name: environment
value: production
- name: team
value: platform
- processor: dql
processorDefinition:
dpl: |
fieldsAdd(severity: if(loglevel=="ERROR", "critical", "info"))
storage:
table: logs
retention: 90
routing:
catchAll: false
rules:
- matcher: matchesValue(log.source, "kubernetes")
target: builtin:storage-default

Modify existing settings:
# Update a settings object
dtctl update settings aaaaaaaa-bbbb-cccc-dddd-000000000001 \
-f updated-pipeline.yaml
# Update with template variables
dtctl update settings aaaaaaaa-bbbb-cccc-dddd-000000000001 \
-f pipeline.yaml \
--set retention=120
# Dry run
dtctl update settings aaaaaaaa-bbbb-cccc-dddd-000000000001 \
-f pipeline.yaml \
--dry-run

Note: Updates use optimistic locking automatically - the current version is fetched before updating to prevent conflicts.
Remove settings objects:
# Delete a settings object (with confirmation)
dtctl delete settings aaaaaaaa-bbbb-cccc-dddd-000000000001
# Delete without confirmation
dtctl delete settings aaaaaaaa-bbbb-cccc-dddd-000000000001 -y

Complete workflow for managing OpenPipeline configurations:
# 1. Discover available pipeline schemas
dtctl get settings-schemas | grep "openpipeline.logs"
# 2. View the schema structure
dtctl describe settings-schema builtin:openpipeline.logs.pipelines
# 3. List existing pipelines
dtctl get settings --schema builtin:openpipeline.logs.pipelines
# 4. Export existing pipeline for reference
dtctl get settings <pipeline-id> -o yaml > reference-pipeline.yaml
# 5. Create your new pipeline
cat > my-pipeline.yaml <<EOF
customId: my-custom-pipeline
displayName: My Custom Pipeline
processing:
- processor: fields-add
fields:
- name: source
value: my-app
storage:
table: logs
EOF
# 6. Create the pipeline
dtctl create settings -f my-pipeline.yaml \
--schema builtin:openpipeline.logs.pipelines \
--scope environment
# 7. Verify it was created
dtctl get settings --schema builtin:openpipeline.logs.pipelines | grep my-custom

Deploy the same configuration across environments:
# Export from dev
dtctl --context dev get settings <pipeline-id> -o yaml > pipeline.yaml
# Review and modify for production
$EDITOR pipeline.yaml
# Deploy to staging
dtctl --context staging create settings -f pipeline.yaml \
--schema builtin:openpipeline.logs.pipelines \
--scope environment \
--set environment=staging
# Deploy to production
dtctl --context prod create settings -f pipeline.yaml \
--schema builtin:openpipeline.logs.pipelines \
--scope environment \
--set environment=production

Required Token Scopes:
- settings:objects:read - List and view settings objects (includes schema read access)
- settings:objects:write - Create, update, and delete settings objects
See TOKEN_SCOPES.md for complete scope lists by safety level.
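The multi-environment flow shown earlier in this section can be condensed into a helper built from the documented export and create commands (the function name is ours; context names are examples):

```shell
# Export a settings object from one context and create it in another
promote_settings() {
  local src="$1" dst="$2" object_id="$3" schema="$4"
  local tmp
  tmp="$(mktemp)"
  dtctl --context "$src" get settings "$object_id" -o yaml > "$tmp"
  dtctl --context "$dst" create settings -f "$tmp" \
    --schema "$schema" --scope environment
  rm -f "$tmp"
}

# Example:
# promote_settings dev prod <pipeline-id> builtin:openpipeline.logs.pipelines
```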
Manage Dynatrace apps and their serverless functions.
# List all apps
dtctl get apps
# Filter by name
dtctl get apps --name "monitoring"
# Get a specific app
dtctl get app app-123
# Detailed view
dtctl describe app app-123

App functions are serverless backend functions exposed by installed apps. They can be invoked via HTTP to perform various operations like sending notifications, querying external APIs, or executing custom logic.
# List all functions across all installed apps
dtctl get functions
# List functions for a specific app
dtctl get functions --app dynatrace.automations
# Show function descriptions and metadata (wide output)
dtctl get functions --app dynatrace.automations -o wide
# Get details about a specific function
dtctl get function dynatrace.automations/execute-dql-query
# Describe a function (shows usage and metadata)
dtctl describe function dynatrace.automations/execute-dql-query

Example output:
Function: execute-dql-query
Full Name: dynatrace.automations/execute-dql-query
Title: Execute DQL Query
Description: Make use of Dynatrace Grail data in your workflow.
App: Workflows (dynatrace.automations)
Resumable: false
Stateful: true
Usage:
dtctl exec function dynatrace.automations/execute-dql-query
Note: Function input schemas are not currently exposed through the API. To discover what payload a function expects, try executing it with an empty payload {} to see the error message listing required fields, or check the Dynatrace UI documentation for the app.
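The empty-payload probe can be scripted as a small helper. This sketch assumes the response shape used in the jq extraction example below (error text nested as JSON inside .body); the function name is ours:

```shell
# Probe an app function's required input fields by sending an empty payload
# and extracting the error text from the response body
probe_function() {
  dtctl exec function "$1" --method POST --payload '{}' -o json 2>&1 |
    jq -r '.body' | jq -r '.error'
}

# Example:
# probe_function dynatrace.automations/execute-dql-query
```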
# Execute a DQL query function (requires dynatrace.automations app - built-in)
dtctl exec function dynatrace.automations/execute-dql-query \
--method POST \
--payload '{"query":"fetch logs | limit 5"}' \
-o json
# Execute with payload from file
dtctl exec function dynatrace.automations/execute-dql-query \
--method POST \
--data @query.json
# Execute with GET method (for functions that don't require input)
dtctl exec function <app-id>/<function-name>

Discovering Required Payload Fields:
Functions don't expose their schemas via the API. To discover what fields are required, try executing the function with an empty payload and examine the error message:
# Try with empty payload to see what fields are required
dtctl exec function dynatrace.automations/execute-dql-query \
--method POST \
--payload '{}' \
-o json 2>&1 | jq -r '.body' | jq -r '.error'
# Output: Error: Input fields 'query' are missing.

Discover available functions:
# List all available functions
dtctl get functions
# Find functions by keyword
dtctl get functions | grep -i "query\|http"
# Export function inventory
dtctl get functions -o json > functions-inventory.json
# Get detailed info about a function (shows title, description, stateful)
dtctl get functions --app dynatrace.automations -o wide

Find function payloads:
# Method 1: Check the Dynatrace UI
# Navigate to Apps → [App Name] → View function documentation
# Method 2: Use error messages to discover required fields
dtctl exec function <app-id>/<function-name> \
--method POST \
--payload '{}' \
-o json 2>&1 | jq -r '.body' | jq -r '.error // .logs'
# Method 3: Look at existing workflows that use the function
dtctl get workflows -o json | jq -r '.[] | select(.tasks != null)'

Common Function Examples:
# DQL Query (dynatrace.automations/execute-dql-query)
# Required: query (string)
dtctl exec function dynatrace.automations/execute-dql-query \
--method POST \
--payload '{"query":"fetch logs | limit 5"}' \
-o json
# Send Email (dynatrace.email/send-email)
# Required: to, cc, bcc (arrays), subject, content (strings)
dtctl exec function dynatrace.email/send-email \
--method POST \
--payload '{
"to": ["user@example.com"],
"cc": [],
"bcc": [],
"subject": "Test Email",
"content": "This is a test email from dtctl"
}'
# Slack Message (dynatrace.slack/slack-send-message)
# Required: connection, channel, message
dtctl exec function dynatrace.slack/slack-send-message \
--method POST \
--payload '{
"connection": "connection-id",
"channel": "#alerts",
"message": "Hello from dtctl"
}'
# Jira Create Issue (dynatrace.jira/jira-create-issue)
# Required: connectionId, project, issueType, components, summary, description
dtctl exec function dynatrace.jira/jira-create-issue \
--method POST \
--payload '{
"connectionId": "connection-id",
"project": "PROJ",
"issueType": "Bug",
"components": [],
"summary": "Issue from dtctl",
"description": "Created via dtctl"
}'
# AbuseIPDB Check (dynatrace.abuseipdb/check-ip)
# Required: observable (object), settingsObjectId (string)
dtctl exec function dynatrace.abuseipdb/check-ip \
--method POST \
--payload '{
"observable": {"type": "IP", "value": "8.8.8.8"},
"settingsObjectId": "settings-object-id"
}'

Required Token Scopes:
- app-engine:apps:run - Execute app functions
See TOKEN_SCOPES.md for complete scope lists.
Intents enable deep linking and inter-app communication by defining entry points that apps expose for opening resources with contextual data. They allow you to navigate directly to specific app views with parameters.
# List all intents across all apps
dtctl get intents
# List intents for a specific app
dtctl get intents --app dynatrace.distributedtracing
# Show full details in wide format
dtctl get intents -o wide
# Get a specific intent
dtctl get intent dynatrace.distributedtracing/view-trace
# Describe an intent (shows properties and usage)
dtctl describe intent dynatrace.distributedtracing/view-trace

Example output:
Intent: view-trace
Full Name: dynatrace.distributedtracing/view-trace
Description: View a distributed trace
App: Distributed Tracing (dynatrace.distributedtracing)
Properties:
- trace_id: string (required)
Description: The trace identifier
- timestamp: string
Format: date-time
Description: When the trace occurred
Required: trace_id
Usage:
dtctl open intent dynatrace.distributedtracing/view-trace --data trace_id=<value>
dtctl find intents --data trace_id=<value>
Find which intents can handle specific data:
# Find intents that match the provided data
dtctl find intents --data trace_id=d052c9a8772e349d09048355a8891b82
# Output shows match quality (100% = all required properties provided)
MATCH% APP INTENT_ID DESCRIPTION
100% dynatrace.distributedtracing view-trace View a distributed trace
# Find intents with multiple properties
dtctl find intents --data trace_id=abc123,timestamp=2026-02-02T16:04:19.947Z
# Output as JSON for processing
dtctl find intents --data log_id=xyz789 -o json

Generate deep links to open specific resources in apps:
# Generate intent URL with data
dtctl open intent dynatrace.distributedtracing/view-trace \
--data trace_id=d052c9a8772e349d09048355a8891b82
# Output:
# https://your-env.apps.dynatrace.com/ui/intent/dynatrace.distributedtracing/view-trace#%7B%22trace_id%22%3A%22d052c9a8772e349d09048355a8891b82%22%7D
# Generate with multiple properties
dtctl open intent dynatrace.distributedtracing/view-trace \
--data trace_id=abc123,timestamp=2026-02-02T16:04:19.947Z
# Generate from JSON file
echo '{"trace_id":"abc123","timestamp":"2026-02-02T16:04:19.947Z"}' > data.json
dtctl open intent dynatrace.distributedtracing/view-trace --data-file data.json
# Generate from stdin
cat data.json | dtctl open intent dynatrace.distributedtracing/view-trace --data-file -
# Generate and open in browser
dtctl open intent dynatrace.distributedtracing/view-trace \
--data trace_id=abc123 --browser

Use Case 1: Deep Linking from Alerts
# Extract trace ID from alert and open in Dynatrace
TRACE_ID=$(extract_from_alert)
dtctl open intent dynatrace.distributedtracing/view-trace \
--data trace_id=$TRACE_ID --browser

Use Case 2: Scripted Navigation
# Find which apps can handle this data, then open the best match
dtctl find intents --data log_id=xyz789 -o json | \
jq -r '.[0].FullName' | \
xargs -I {} dtctl open intent {} --data log_id=xyz789 --browser

Use Case 3: Generate Documentation
# Generate intent documentation for all apps
dtctl get intents -o json | \
jq -r '.[] | "## \(.FullName)\n\(.Description)\n"'

Use Case 4: Integration with External Tools
# Generate intent URL from external system data
TRACE_DATA=$(curl -s https://external-system/api/trace/123)
TRACE_ID=$(echo $TRACE_DATA | jq -r '.traceId')
dtctl open intent dynatrace.distributedtracing/view-trace \
--data trace_id=$TRACE_ID

Required Token Scopes:
- app-engine:apps:run - Required for accessing app manifests and intent data
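For scripting outside dtctl, the intent URLs printed earlier appear to be just the environment base URL, the intent path, and the JSON payload URL-encoded into the fragment. A pure-bash sketch of that encoding (the URL shape is an assumption based on the printed example):

```shell
# Percent-encode a string (uppercase hex; RFC 3986 unreserved characters kept)
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;
    esac
  done
  printf '%s\n' "$out"
}

# Rebuild the intent URL from the earlier example (assumed shape)
payload='{"trace_id":"d052c9a8772e349d09048355a8891b82"}'
echo "https://your-env.apps.dynatrace.com/ui/intent/dynatrace.distributedtracing/view-trace#$(urlencode "$payload")"
```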
# Delete an app
dtctl delete app app-123
# Skip confirmation
dtctl delete app app-123 -y

EdgeConnect provides secure connectivity for ActiveGates.
# List all EdgeConnect configurations
dtctl get edgeconnects
# Get a specific configuration
dtctl get edgeconnect ec-123
# Detailed view
dtctl describe edgeconnect ec-123

# Create from file
dtctl create edgeconnect -f edgeconnect-config.yaml
# Apply (create or update)
dtctl apply -f edgeconnect-config.yaml

Example configuration (edgeconnect-config.yaml):
name: "Production EdgeConnect"
hostPatterns:
- "*.example.com"
- "api.production.net"
oauthClientId: "client-id"
oauthClientSecret: "client-secret"

# Delete a configuration
dtctl delete edgeconnect ec-123

Davis AI provides predictive analytics (Analyzers) and generative AI assistance (CoPilot).
Analyzers perform statistical analysis on time series data for forecasting, anomaly detection, and correlation analysis.
# List all available analyzers
dtctl get analyzers
# Filter analyzers by name
dtctl get analyzers --filter "name contains 'forecast'"
# Get a specific analyzer definition
dtctl get analyzer dt.statistics.GenericForecastAnalyzer
# View analyzer details as JSON
dtctl get analyzer dt.statistics.GenericForecastAnalyzer -o json

Run analyzers to perform statistical analysis:
# Execute with a DQL query (shorthand for timeseries analyzers)
dtctl exec analyzer dt.statistics.GenericForecastAnalyzer \
--query "timeseries avg(dt.host.cpu.usage)"
# Execute with inline JSON input
dtctl exec analyzer dt.statistics.GenericForecastAnalyzer \
--input '{"timeSeriesData":"timeseries avg(dt.host.cpu.usage)","forecastHorizon":50}'
# Execute from input file
dtctl exec analyzer dt.statistics.GenericForecastAnalyzer -f forecast-input.json
# Validate input without executing
dtctl exec analyzer dt.statistics.GenericForecastAnalyzer \
-f forecast-input.json --validate
# Output result as JSON
dtctl exec analyzer dt.statistics.GenericForecastAnalyzer \
--query "timeseries avg(dt.host.cpu.usage)" -o jsonExample analyzer input file (forecast-input.json):
{
"timeSeriesData": "timeseries avg(dt.host.cpu.usage)",
"forecastHorizon": 100,
"generalParameters": {
"timeframe": {
"startTime": "now-7d",
"endTime": "now"
}
}
}

| Analyzer | Description |
|---|---|
| dt.statistics.GenericForecastAnalyzer | Time series forecasting |
| dt.statistics.ChangePointAnalyzer | Detect changes in time series |
| dt.statistics.CorrelationAnalyzer | Find correlations between metrics |
| dt.statistics.TimeSeriesCharacteristicAnalyzer | Analyze time series properties |
| dt.statistics.anomaly_detection.StaticThresholdAnomalyDetectionAnalyzer | Static threshold anomaly detection |
CoPilot provides AI-powered assistance for understanding your Dynatrace environment.
# List available CoPilot skills
dtctl get copilot-skills
# Output:
# NAME
# conversation
# nl2dql
# dql2nl
# documentSearch

# Ask a question
dtctl exec copilot "What is DQL?"
# Ask about your environment
dtctl exec copilot "What caused the CPU spike on my production hosts?"
# Read question from file
dtctl exec copilot -f question.txt
# Stream response in real-time (shows tokens as they arrive)
dtctl exec copilot "Explain the recent errors in my environment" --stream
# Provide additional context
dtctl exec copilot "Analyze this issue" \
--context "Error logs show connection timeouts to database"
# Disable Dynatrace documentation retrieval
dtctl exec copilot "What is an SLO?" --no-docs
# Add formatting instructions
dtctl exec copilot "List the top 5 error types" \
--instruction "Format as a numbered list with counts"# Get help writing DQL queries
dtctl exec copilot "Write a DQL query to find all ERROR logs from the last hour"
# Understand existing queries
dtctl exec copilot "Explain this query: fetch logs | filter status='ERROR' | summarize count()"
# Troubleshoot issues
dtctl exec copilot "Why might my service response time be increasing?"
# Learn about Dynatrace features
dtctl exec copilot "How do I set up an SLO for API availability?"Generate DQL queries from natural language descriptions:
# Generate a DQL query from natural language
dtctl exec copilot nl2dql "show me error logs from the last hour"
# Output: fetch logs | filter status = "ERROR" | filter timestamp > now() - 1h
# More complex queries
dtctl exec copilot nl2dql "find hosts with CPU usage above 80%"
dtctl exec copilot nl2dql "count logs by severity for the last 24 hours"
# Read prompt from file
dtctl exec copilot nl2dql -f prompt.txt
# Get full response with messageToken (for feedback)
dtctl exec copilot nl2dql "show recent errors" -o jsonGet natural language explanations of DQL queries:
# Explain a DQL query
dtctl exec copilot dql2nl "fetch logs | filter status='ERROR' | summarize count(), by:{host}"
# Output:
# Summary: Count error logs grouped by host
# Explanation: This query fetches logs, filters for ERROR status, and counts them by host.
# Explain a complex query
dtctl exec copilot dql2nl "timeseries avg(dt.host.cpu.usage), by:{dt.entity.host} | filter avg > 80"
# Read query from file
dtctl exec copilot dql2nl -f query.dql
# Get full response as JSON
dtctl exec copilot dql2nl "fetch logs | limit 10" -o jsonFind relevant notebooks and dashboards:
# Search for documents about CPU analysis
dtctl exec copilot document-search "CPU performance analysis" --collections notebooks
# Search across multiple collections
dtctl exec copilot document-search "error monitoring" --collections dashboards,notebooks
# Exclude specific documents from results
dtctl exec copilot document-search "performance" --exclude doc-123,doc-456
# Output as JSON for processing
dtctl exec copilot document-search "kubernetes" --collections notebooks -o jsonFor Davis AI features:
- Analyzers: davis:analyzers:read, davis:analyzers:execute
- CoPilot (all features): davis-copilot:conversations:execute
See TOKEN_SCOPES.md for complete scope lists by safety level.
This is the recommended fast flow for Azure onboarding with federated credentials.
dtctl create azure connection --name "my-azure-connection" --type federatedIdentityCredential

Command output prints dynamic values you need for Azure setup:
- Issuer
- Subject (dt:connection-id/...)
- Audience
CLIENT_ID=$(az ad sp create-for-rbac --name "my-azure-connection" --create-password false --query appId -o tsv)
TENANT_ID=$(az account show --query tenantId -o tsv)

IAM_SCOPE="/subscriptions/00000000-0000-0000-0000-000000000000"
az role assignment create --assignee "$CLIENT_ID" --role Reader --scope "$IAM_SCOPE"

Use Issuer/Subject/Audience exactly as printed by the create command:
az ad app federated-credential create --id "$CLIENT_ID" --parameters "{'name': 'fd-Federated-Credential', 'issuer': 'https://dev.token.dynatracelabs.com', 'subject': 'dt:connection-id/<connection-object-id>', 'audiences': ['<tenant>.dev.apps.dynatracelabs.com/svc-id/com.dynatrace.da']}"

dtctl update azure connection --name "my-azure-connection" --directoryId "$TENANT_ID" --applicationId "$CLIENT_ID"

Note: immediately after step 4, Entra propagation can take a short time. If you see AADSTS70025, retry step 5 after a few seconds.
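The retry advice in the note can be scripted with a small generic helper (a sketch; the attempt count and delay are arbitrary choices, and the helper name is ours):

```shell
# Retry a command a few times with a short pause, for transient errors
# such as AADSTS70025 while Entra propagates
retry() {
  local attempt
  for attempt in 1 2 3 4 5; do
    "$@" && return 0
    echo "attempt $attempt failed; retrying in 2s..." >&2
    sleep 2
  done
  return 1
}

# Usage with the update from step 5:
# retry dtctl update azure connection --name "my-azure-connection" \
#   --directoryId "$TENANT_ID" --applicationId "$CLIENT_ID"
```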
dtctl create azure monitoring --name "my-azure-connection" --credentials "my-azure-connection"
dtctl get azure monitoring my-azure-connection
dtctl describe azure monitoring my-azure-connection

Change location filtering to two regions:
dtctl update azure monitoring --name "my-azure-connection" \
--locationFiltering "eastus,westeurope"Change feature sets to Virtual Machines and Azure Functions:
dtctl update azure monitoring --name "my-azure-connection" \
--featureSets "microsoft_compute.virtualmachines_essential,microsoft_web.sites_functionapp_essential"

Create Azure monitoring config with explicit feature sets and two locations:
dtctl create azure monitoring --name "my-azure-monitoring-explicit" \
--credentials "my-azure-connection" \
--locationFiltering "eastus,westeurope" \
--featureSets "microsoft_compute.virtualmachines_essential,microsoft_web.sites_functionapp_essential"

This is the recommended onboarding flow for GCP with service account impersonation.
All GCP commands in this section are Preview.
dtctl create gcp connection --name "my-gcp-connection"

Define variables used in snippets:
PROJECT_ID="my-project-id"
DT_GCP_PRINCIPAL="dynatrace-<tenant-id>@dtp-prod-gcp-auth.iam.gserviceaccount.com"
CUSTOMER_SA_NAME="dynatrace-integration"
CUSTOMER_SA_EMAIL="${CUSTOMER_SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

Create customer service account:
gcloud iam service-accounts create "${CUSTOMER_SA_NAME}" \
--project "${PROJECT_ID}" \
--display-name "Dynatrace Integration"

Grant required viewer permissions to customer service account:
for ROLE in roles/browser roles/monitoring.viewer roles/compute.viewer roles/cloudasset.viewer; do
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
--quiet --format="none" \
--member "serviceAccount:${CUSTOMER_SA_EMAIL}" \
--role "${ROLE}"
done
Grant Service Account Token Creator to Dynatrace principal (service account impersonation):
gcloud iam service-accounts add-iam-policy-binding "${CUSTOMER_SA_EMAIL}" \
--project "${PROJECT_ID}" \
--member="serviceAccount:${DT_GCP_PRINCIPAL}" \
--role="roles/iam.serviceAccountTokenCreator"

Use the service account from step 2 and update the connection:
dtctl update gcp connection --name "my-gcp-connection" --serviceAccountId "${CUSTOMER_SA_EMAIL}"

dtctl create gcp monitoring --name "my-gcp-monitoring" --credentials "my-gcp-connection"
dtctl describe gcp monitoring my-gcp-monitoring

dtctl get gcp monitoring-locations
dtctl get gcp monitoring-feature-sets

Change location filtering to two regions:
dtctl update gcp monitoring --name "my-gcp-monitoring" \
--locationFiltering "us-central1,europe-west1"

Change feature sets to a focused subset:
dtctl update gcp monitoring --name "my-gcp-monitoring" \
--featureSets "compute_engine_essential,cloud_run_essential"

Create GCP monitoring config with explicit feature sets and locations:
dtctl create gcp monitoring --name "my-gcp-monitoring-explicit" \
--credentials "my-gcp-connection" \
--locationFiltering "us-central1,europe-west1" \
--featureSets "compute_engine_essential,cloud_run_essential"

dtctl delete gcp monitoring my-gcp-monitoring
dtctl delete gcp connection my-gcp-connection

Experimental: Live Debugger support in `dtctl` is experimental. The underlying APIs and query behavior may change in future releases without notice.
For complete guidance, see LIVE_DEBUGGER.md.
Authentication note: Live Debugger breakpoint operations currently require OAuth authentication. The `dev-obs:breakpoints:set` scope is supported with `dtctl auth login`, but is not currently supported with API token authentication (for example via `dtctl config set-credentials`).
`--filters` accepts both `key:value` and `key=value` pairs.
dtctl update breakpoint --filters k8s.namespace.name:prod
dtctl update breakpoints --filters k8s.namespace.name:prod,dt.entity.host:HOST-123
dtctl update breakpoint --filters k8s.namespace.name=prod,dt.entity.host=HOST-123

# Create
dtctl create breakpoint OrderController.java:306
# List
dtctl get breakpoints
# Describe by location or mutable ID
dtctl describe OrderController.java:306
dtctl describe dtctl-rule-123
# Edit condition / enabled state
dtctl update breakpoint OrderController.java:306 --condition "orderId != null"
dtctl update breakpoint OrderController.java:306 --enabled false
# Delete by ID, by location, or all
dtctl delete breakpoint dtctl-rule-123
dtctl delete breakpoint OrderController.java:306
dtctl delete breakpoint --all -y

# Simplified (variant wrappers flattened to plain values)
dtctl query "fetch application.snapshots | sort timestamp desc | limit 5" --decode-snapshots
# Full decoded tree with type annotations
dtctl query "fetch application.snapshots | sort timestamp desc | limit 5" --decode-snapshots=full
# Compose with any output format
dtctl query "fetch application.snapshots | limit 5" --decode-snapshots -o json
dtctl query "fetch application.snapshots | limit 5" --decode-snapshots -o yaml

--decode-snapshots enriches each record with parsed_snapshot decoded from snapshot.data and snapshot.string_map. By default, variant wrappers are simplified to plain values; use --decode-snapshots=full to preserve type annotations.
Extensions 2.0 manages installed extension packages and their monitoring configurations.
# List all installed extensions
dtctl get extensions
# Filter extensions by name
dtctl get extensions --name "com.dynatrace"
# Get versions of a specific extension
dtctl get extension com.dynatrace.extension.postgres
# Wide output (shows author, feature sets, data sources)
dtctl get extension com.dynatrace.extension.postgres -o wide
# Describe an extension (schema, feature sets, data sources)
dtctl describe extension com.dynatrace.extension.postgres
# Describe a specific version
dtctl describe extension com.dynatrace.extension.postgres --version 2.9.3

# List monitoring configurations for an extension
dtctl get extension-configs com.dynatrace.extension.postgres
# Filter by version
dtctl get extension-configs com.dynatrace.extension.postgres --version 2.9.3
# Describe a specific monitoring configuration
dtctl describe extension-config com.dynatrace.extension.postgres --config-id <object-id>

# Create a new monitoring configuration
dtctl apply extension-config com.dynatrace.extension.postgres -f config.yaml
# Create with a specific scope
dtctl apply extension-config com.dynatrace.extension.postgres -f config.yaml --scope HOST-1234
# Update an existing configuration (objectId in file)
dtctl apply extension-config com.dynatrace.extension.postgres -f config.yaml
# Apply with template variables
dtctl apply extension-config com.dynatrace.extension.postgres -f config.yaml --set env=prod
# Dry run to preview
dtctl apply extension-config com.dynatrace.extension.postgres -f config.yaml --dry-run

Example monitoring configuration (config.yaml):
scope: environment
value:
  enabled: true
  description: "Host monitoring"
  featureSets:
    - host_performance

All get and query commands support multiple output formats.
Human-readable table output:
dtctl get workflows
# Output:
# ID TITLE OWNER UPDATED
# wf-123 Health Check me 2h ago
# wf-456 Alert Handler team-sre 1d ago

Machine-readable JSON:
dtctl get workflow wf-123 -o json
# Output:
# {
# "id": "wf-123",
# "title": "Health Check",
# "owner": "me",
# ...
# }
# Pretty-print with jq
dtctl get workflows -o json | jq '.'

Kubernetes-style YAML:
dtctl get workflow wf-123 -o yaml
# Output:
# id: wf-123
# title: Health Check
# owner: me
# ...

Table with additional columns:
dtctl get workflows -o wide
# Shows more details in table format

Spreadsheet-compatible comma-separated values output:
# Export workflows to CSV
dtctl get workflows -o csv > workflows.csv
# Export DQL query results to CSV
dtctl query "fetch logs | limit 100" -o csv > logs.csv
# Download large datasets (up to 10000 records)
dtctl query "fetch logs" --max-result-records 5000 -o csv > large_export.csv
# Import into Excel, Google Sheets, or other tools

CSV Features:
- Proper escaping for special characters (commas, quotes, newlines)
- Alphabetically sorted columns for consistency
- Handles missing values gracefully
- Compatible with all spreadsheet applications
- Perfect for data export and offline analysis
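Because each value lands in its own column, quick sanity checks work with plain Unix tools. A runnable sketch (the generated sample file stands in for a real `dtctl query ... -o csv` export):

```shell
# Stand-in for a dtctl CSV export (columns sorted alphabetically,
# so "message" precedes "status").
cat > /tmp/logs.csv <<'EOF'
message,status
db timeout,ERROR
startup complete,INFO
retry exhausted,ERROR
EOF

# Count error rows by matching the trailing status column.
errors=$(grep -c ',ERROR$' /tmp/logs.csv)
echo "error rows: $errors"   # → error rows: 2
```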
TOON (Token-Oriented Object Notation) is a compact, human-readable format optimised for LLM token efficiency. It uses CSV-style tabular layout for uniform arrays and YAML-like indentation for nested objects, achieving ~40-60% fewer tokens than JSON:
# Get workflows in TOON format
dtctl get workflows -o toon
# Output:
# [#3]{id,title,owner,lastModifiedAt}:
# wf-123,Health Check,me,2025-03-15T10:00:00Z
# wf-456,Alert Handler,team-sre,2025-03-14T08:30:00Z
# wf-789,Deploy Pipeline,platform,2025-03-13T14:15:00Z
# Use TOON format in agent mode for token efficiency
dtctl get workflows --agent -o toon

TOON Features:
- ~40-60% fewer tokens than JSON for tabular data
- Lossless round-trip fidelity with JSON data model
- Available in agent mode via `-A -o toon`
- Handles nested objects and arrays (unlike CSV)
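Since the tabular TOON layout shown above is close to CSV, a small converter is easy to sketch. This helper is hypothetical (dtctl can emit CSV directly) and handles only the flat tabular case, not nested objects:

```shell
# Convert a flat TOON table to CSV: strip the [#N]{...}: header wrapper
# and the leading indentation on data rows.
toon_to_csv() {
  awk '
    /^\[#[0-9]+\][{]/ {            # header line: [#N]{col1,col2,...}:
      sub(/^\[#[0-9]+\][{]/, "")
      sub(/[}]:.*$/, "")
      print                         # column names become the CSV header
      next
    }
    { sub(/^[ \t]+/, ""); print }   # data rows: drop indentation
  '
}

toon_to_csv <<'EOF' > /tmp/toon.csv
[#2]{id,title}:
  wf-123,Health Check
  wf-456,Alert Handler
EOF
cat /tmp/toon.csv
```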
No colors, no interactive prompts (for scripts):
dtctl get workflows --plain

AI agents can discover all available dtctl commands, flags, and resources at runtime:
# Full catalog in JSON
dtctl commands -o json
# Compact catalog (no descriptions, no global flags)
dtctl commands --brief -o json
# Filter to a specific resource
dtctl commands workflow -o json
# Generate a Markdown how-to guide
dtctl commands howto

This is especially useful for agent bootstrap — run dtctl commands --brief -o json at the start of a session to learn what dtctl can do.
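For agents that start many short sessions, caching the catalog avoids repeated lookups. A hypothetical sketch (the stub `echo` stands in for the real `dtctl commands --brief -o json` call so the example runs anywhere):

```shell
# Fetch the command catalog once per session, then reuse the cached copy.
catalog() {
  cache=/tmp/dtctl-catalog.json
  if [ ! -s "$cache" ]; then
    # real call would be: dtctl commands --brief -o json > "$cache"
    echo '{"commands": []}' > "$cache"
    echo "fetched" >&2
  fi
  cat "$cache"
}

rm -f /tmp/dtctl-catalog.json
catalog > /dev/null   # first call populates the cache
catalog               # second call reads the cached file
```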
Wraps all output in a structured JSON envelope designed for AI agents and automation:
dtctl get workflows --agent
# Output:
# {
# "ok": true,
# "result": [...],
# "context": {
# "total": 5,
# "has_more": false,
# "verb": "get",
# "resource": "workflow",
# "suggestions": [
# "Run 'dtctl describe workflow <id>' for details",
# "Run 'dtctl exec workflow <id>' to trigger a workflow"
# ]
# }
# }

Agent mode is auto-detected when running inside an AI agent environment (e.g., GitHub Copilot, Claude Code). To opt out, pass --no-agent. Agent mode implies --plain.
# Force agent mode off in an auto-detected environment
dtctl get workflows --no-agent
# Errors are also structured
# {
# "ok": false,
# "error": {
# "code": "auth_required",
# "message": "Authentication failed",
# "suggestions": ["Run 'dtctl auth login' to refresh your token"]
# }
# }

Like kubectl, dtctl automatically paginates through large result sets:
# Default: fetch all results in chunks of 500 (like kubectl)
dtctl get notebooks
# Disable chunking (return only first page from API)
dtctl get notebooks --chunk-size=0
# Use smaller chunks (useful for slow connections)
dtctl get notebooks --chunk-size=100

Use resource names instead of memorizing IDs:
# Works with any command that accepts an ID
dtctl describe workflow "My Workflow"
dtctl edit dashboard "Production Overview"
dtctl delete notebook "Old Analysis"
# If multiple resources match, you'll be prompted to select
# Use --plain to require exact matches only

Enable tab completion for faster workflows:
Bash:
source <(dtctl completion bash)
# Make it permanent:
sudo mkdir -p /etc/bash_completion.d
dtctl completion bash | sudo tee /etc/bash_completion.d/dtctl > /dev/null

Zsh:
mkdir -p ~/.zsh/completions
dtctl completion zsh > ~/.zsh/completions/_dtctl
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
rm -f ~/.zcompdump* && autoload -U compinit && compinit

Fish:
mkdir -p ~/.config/fish/completions
dtctl completion fish > ~/.config/fish/completions/dtctl.fish

Organize your DQL queries in a directory:
# Create a directory for your queries (using XDG data home)
mkdir -p ~/.local/share/dtctl/queries
# Create reusable queries
cat > ~/.local/share/dtctl/queries/errors-last-hour.dql <<EOF
fetch logs
| filter status = 'ERROR'
| filter timestamp > now() - 1h
| limit {{.limit | default 100}}
EOF
# Use them easily
dtctl query -f ~/.local/share/dtctl/queries/errors-last-hour.dql

Note: dtctl follows the XDG Base Directory Specification and adapts to platform conventions:
Linux:

- Config: `$XDG_CONFIG_HOME/dtctl` (default: `~/.config/dtctl`)
- Data: `$XDG_DATA_HOME/dtctl` (default: `~/.local/share/dtctl`)
- Cache: `$XDG_CACHE_HOME/dtctl` (default: `~/.cache/dtctl`)

macOS:

- Config: `~/Library/Application Support/dtctl`
- Data: `~/Library/Application Support/dtctl`
- Cache: `~/Library/Caches/dtctl`

Windows:

- Config: `%LOCALAPPDATA%\dtctl`
- Data: `%LOCALAPPDATA%\dtctl`
- Cache: `%LOCALAPPDATA%\dtctl`
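The Linux resolution order boils down to one line of shell: honour the XDG variable when set, fall back to the default otherwise. A sketch (the function name is illustrative, not a dtctl command):

```shell
# Resolve the config directory following the XDG convention above.
dtctl_config_dir() {
  echo "${XDG_CONFIG_HOME:-$HOME/.config}/dtctl"
}

unset XDG_CONFIG_HOME
dtctl_config_dir                 # falls back to ~/.config/dtctl
export XDG_CONFIG_HOME=/tmp/xdg
dtctl_config_dir                 # → /tmp/xdg/dtctl
```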
Backup your resources regularly:
# Export all workflows
dtctl get workflows -o yaml > workflows-backup.yaml
# Export all dashboards
dtctl get dashboards -o json > dashboards-backup.json
# Export with timestamp
dtctl get workflows -o yaml > "workflows-$(date +%Y%m%d).yaml"

Preview changes before applying:
# See what would be created/updated (shows create vs update, validates structure)
dtctl apply -f workflow.yaml --dry-run
# For dashboards/notebooks, dry-run shows:
# - Whether it will create or update
# - Document name and ID
# - Tile/section count
# - Structure validation warnings
dtctl apply -f dashboard.yaml --dry-run
# Example output:
# Dry run: would create dashboard
# Name: SRE Service Health Overview
# Tiles: 18
#
# Document structure validated successfully
# If there are issues, you'll see warnings:
# Warning: detected double-nested content (.content.content) - using inner content
# Warning: dashboard content has no 'tiles' field - dashboard may be empty
# See what would be deleted
dtctl delete workflow "Test Workflow" --dry-run

Compare resources before applying changes:
# Compare local file with remote resource (auto-detects type and ID from file)
dtctl diff -f workflow.yaml
# Compare two local files
dtctl diff -f workflow-v1.yaml -f workflow-v2.yaml
# Compare two remote resources
dtctl diff workflow prod-workflow staging-workflow
# Different output formats
dtctl diff -f dashboard.yaml --semantic # Human-readable with impact analysis
dtctl diff -f workflow.yaml -o json-patch # RFC 6902 JSON Patch format
dtctl diff -f dashboard.yaml --side-by-side # Split-screen comparison
# Ignore metadata changes (timestamps, versions)
dtctl diff -f workflow.yaml --ignore-metadata
# Ignore array order (useful for tasks, tiles, etc.)
dtctl diff -f dashboard.yaml --ignore-order
# Quiet mode (exit code only, for CI/CD)
dtctl diff -f workflow.yaml --quiet
# Exit codes: 0 = no changes, 1 = changes found, 2 = error
# Works with all resource types
dtctl diff -f dashboard.yaml # Dashboards
dtctl diff -f notebook.yaml # Notebooks
dtctl diff -f workflow.yaml # Workflows

See exactly what changes when updating resources:
# Show diff when updating a dashboard
dtctl apply -f dashboard.yaml --show-diff
# Output shows:
# --- existing dashboard
# +++ new dashboard
# - "title": "Old Title"
# + "title": "New Title"

Debug issues with verbose mode:
# See API calls and responses (auth headers redacted)
dtctl get workflows -v
# Full debug output including auth headers (use with caution!)
dtctl get workflows -vv

Set default preferences:
# Set default output format
export DTCTL_OUTPUT=json
# Set default context
export DTCTL_CONTEXT=production
# Override with flags
dtctl get workflows -o yaml

Combine dtctl with standard Unix tools:
# Count workflows
dtctl get workflows -o json | jq '. | length'
# Find workflows by owner
dtctl get workflows -o json | jq '.[] | select(.owner=="me")'
# Extract workflow IDs
dtctl get workflows -o json | jq -r '.[].id'
# Filter and format
dtctl query "fetch logs | limit 100" -o json | \
jq '.records[] | select(.status=="ERROR")'

Export large datasets from DQL queries for offline analysis:
# Export up to 5000 records to CSV
dtctl query "fetch logs | filter status='ERROR'" \
--max-result-records 5000 \
-o csv > error_logs.csv
# Export multiple datasets with timestamps
dtctl query "fetch logs" --max-result-records 10000 -o csv > "logs-$(date +%Y%m%d-%H%M%S).csv"
# Process large CSV exports with Unix tools
dtctl query "fetch logs" --max-result-records 5000 -o csv | \
grep "ERROR" | \
wc -l
# Split large exports into smaller files
dtctl query "fetch logs" --max-result-records 10000 -o csv | \
split -l 1000 - logs_part_
# Import into databases
dtctl query "fetch logs" --max-result-records 5000 -o csv > logs.csv
# Then use database import tools:
# psql -c "\COPY logs FROM 'logs.csv' CSV HEADER"
# mysql -e "LOAD DATA LOCAL INFILE 'logs.csv' INTO TABLE logs FIELDS TERMINATED BY ',' ENCLOSED BY '\"' IGNORE 1 ROWS"

Performance Tips:
- Use filters in your DQL query to reduce dataset size
- Request only the columns you need
- Consider time-based filtering for incremental exports
- CSV format is more compact than JSON for large datasets
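The `split` step shown earlier is easy to rehearse with generated data before pointing it at a large export; here `seq` stands in for real records:

```shell
# 25 generated rows split into chunks of 10 lines -> 3 part files
# (suffixes aa, ab, ac; the last part holds the 5 remaining rows).
seq 1 25 > /tmp/rows.txt
rm -f /tmp/logs_part_*
split -l 10 /tmp/rows.txt /tmp/logs_part_
ls /tmp/logs_part_* | wc -l   # → 3
```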
Before diving into manual troubleshooting, run the built-in health check:
dtctl doctor

This runs 6 sequential checks — version, config, context, token, connectivity, and authentication — and reports pass/fail with actionable suggestions for each.
dtctl provides contextual error messages with troubleshooting suggestions. When an operation fails, you'll see:
Failed to get workflows (HTTP 401): Authentication failed
Request ID: abc-123-def-456
Troubleshooting suggestions:
• Token may be expired or invalid. Run 'dtctl config get-context' to check your configuration
• Verify your API token has not been revoked in the Dynatrace console
• Try refreshing your authentication with 'dtctl context set' and a new token
Common HTTP status codes and their meanings:
- 401 Unauthorized: Token is invalid, expired, or missing
- 403 Forbidden: Token lacks required permissions/scopes
- 404 Not Found: Resource doesn't exist or wrong ID/name
- 429 Rate Limited: Too many requests (dtctl auto-retries)
- 500/502/503/504: Server error (dtctl auto-retries)
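dtctl applies this retry logic internally, but wrapper scripts sometimes want the same mapping. An illustrative helper (not dtctl's actual implementation):

```shell
# Map HTTP status codes to retry behaviour, following the table above:
# 429 and 5xx server errors are retryable, other client errors are not.
should_retry() {
  case "$1" in
    429|500|502|503|504) return 0 ;;
    *) return 1 ;;
  esac
}

should_retry 503 && echo "503: retry"       # → 503: retry
should_retry 401 || echo "401: give up"     # → 401: give up
```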
For detailed HTTP request/response logging, use the --debug flag:
# Enable full debug mode with HTTP details
dtctl get workflows --debug
# Output shows:
# ===> REQUEST <===
# GET https://abc12345.apps.dynatrace.com/platform/automation/v1/workflows
# HEADERS:
# User-Agent: dtctl/0.12.0
# Authorization: [REDACTED]
# ...
#
# ===> RESPONSE <===
# STATUS: 200 OK
# TIME: 234ms
# HEADERS:
# Content-Type: application/json
# ...
# BODY:
# {"workflows": [...]}

The --debug flag is equivalent to -vv and shows:
- Full HTTP request URL and method
- Request and response headers (auth tokens are always redacted)
- Response body
- Response time
This is useful for:
- Diagnosing API errors
- Verifying request parameters
- Checking response format
- Troubleshooting performance issues
This means you haven't set up your configuration yet. Run:
dtctl config set-context my-env \
--environment "https://YOUR_ENV.apps.dynatrace.com" \
--token-ref my-token
dtctl config set-credentials my-token --token "dt0s16.YOUR_TOKEN"

Check:
- Your token has the correct permissions
- Your environment URL is correct
- You're using the right context
Enable debug mode to see detailed HTTP interactions:
dtctl get workflows --debug

Your platform token needs appropriate scopes for the resources you want to manage. See TOKEN_SCOPES.md for:
- Complete scope lists for each safety level (copy-pasteable)
- Detailed breakdown by resource type
- Token creation instructions
If you're using dtctl through an AI coding assistant (like Claude Code, GitHub Copilot, Cursor, OpenClaw, etc.), dtctl automatically detects this and includes it in the User-Agent header for telemetry purposes. This helps improve the CLI experience for AI-assisted workflows.
The detection is automatic and doesn't affect functionality. Supported AI agents:
- Claude Code (`CLAUDECODE` env var)
- OpenCode (`OPENCODE` env var)
- GitHub Copilot (`GITHUB_COPILOT` env var)
- Cursor (`CURSOR_AGENT` env var)
- Kiro (`KIRO` env var)
- Junie (`JUNIE` env var)
- OpenClaw (`OPENCLAW` env var)
- Codeium (`CODEIUM_AGENT` env var)
- TabNine (`TABNINE_AGENT` env var)
- Amazon Q (`AMAZON_Q` env var)
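The detection described above boils down to checking whether any of these variables is set. A hypothetical re-implementation for scripts that want the same signal (not dtctl's actual code):

```shell
# Report the first supported agent environment variable that is set,
# or "none". Variable names match the list above.
detect_agent() {
  for var in CLAUDECODE OPENCODE GITHUB_COPILOT CURSOR_AGENT KIRO \
             JUNIE OPENCLAW CODEIUM_AGENT TABNINE_AGENT AMAZON_Q; do
    eval "val=\${$var:-}"
    if [ -n "$val" ]; then
      echo "$var"
      return 0
    fi
  done
  echo "none"
  return 1
}

export CLAUDECODE=1
detect_agent   # → CLAUDECODE
```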
- API Reference: See dev/API_DESIGN.md for complete command reference
- Architecture: Read dev/ARCHITECTURE.md to understand how dtctl works
- Implementation Status: View dev/IMPLEMENTATION_STATUS.md for roadmap
# General help
dtctl --help
# Command-specific help
dtctl get --help
dtctl query --help
# Resource-specific help
dtctl get workflows --help
# Machine-readable command catalog (for AI agents)
dtctl commands --brief -o json

Use the --debug flag to see detailed HTTP request/response logs:
# Full debug output
dtctl get workflows --debug
# Alternative: use -vv for the same effect
dtctl get workflows -vv

The debug output includes:
- HTTP method and URL
- Request/response headers (sensitive headers are redacted)
- Response body and status
- Response time
- No flag: Normal output
- `-v`: Verbose output with operation details
- `-vv` or `--debug`: Full HTTP debug mode with request/response details
For issues and feature requests, visit the GitHub repository.