This document is intended for developers who wish to contribute to Dynatrace Snowflake Observability Agent.
IMPORTANT: Before you contribute a new plugin or make changes to an existing one, make sure you understand and follow the semantic conventions for naming dimensions, attributes, and metrics.
The following rules apply to naming fields (dimensions, attributes, and metrics) for telemetry generated by Dynatrace Snowflake Observability Agent:
- Use existing OpenTelemetry or Dynatrace semantics if they express the same field.
- Custom fields SHOULD start with `snowflake.`.
- ALWAYS use lower-case + `snake_case` for naming.
- AVOID using measurement units in names; NEVER use them at the end of the field name.
- DO NOT provide dimensionality information in the field name.
- DO NOT produce different names for the same metric depending on the dimension set; this is no longer necessary in 3rd Gen.
- DO NOT add the extension name or technology as part of the field name.
- DO NOT use the `.count` suffix.
- Boolean metrics should have an `is_` or `has_` prefix.
- Use singular and plural properly to reflect the field content.
- Split with DOT (`.`) when we can identify a (virtual) object along the field name path.
- Avoid reporting OBJECT fields returned by Snowflake directly; this can cause objects to be expanded in an uncontrolled fashion. Instead, expand the object within the view and describe semantics for those expanded fields.
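As an illustration of these naming rules, the following sketch (hypothetical; not part of the agent codebase) checks a custom field name against the prefix, case, and dot-splitting conventions. Note that a simple regex cannot catch the unit or dimensionality rules, which still require review:

```python
import re

# Hypothetical checker illustrating the naming rules above; the agent itself
# does not ship this function. Matches "snowflake." followed by one or more
# lower-case snake_case segments separated by dots.
FIELD_NAME_RE = re.compile(r"^snowflake(\.[a-z][a-z0-9_]*)+$")

def is_valid_custom_field(name: str) -> bool:
    """Custom fields start with 'snowflake.', use lower-case snake_case,
    and split with dots along the (virtual) object path."""
    return bool(FIELD_NAME_RE.fullmatch(name))

assert is_valid_custom_field("snowflake.query.execution_time")  # dots mark objects
assert not is_valid_custom_field("snowflake.Query.Time")        # no upper case
assert not is_valid_custom_field("query.execution_time")        # missing prefix
```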
These rules apply to creating Snowflake stored procedures and temporary tables as part of Dynatrace Snowflake Observability Agent:

- Avoid returning boolean values from Snowflake stored procedures. Instead, make them more descriptive, e.g., by specifying affected tables.
- Avoid using `create or replace table` in stored procedures; initialize tables before them and truncate them in the procedure.
- Grant at least `select` to `DTAGENT_VIEWER` and `ownership` to `DTAGENT_ADMIN` on tables. Granting table ownership should be done as `ACCOUNTADMIN` to make sure it is executed correctly.
- Create procedures as `DTAGENT_ADMIN` and execute them as `caller` if possible.
- Grant procedure usage to `DTAGENT_VIEWER`.
- Include exception statements.
- For views reported as logs to the DT tenant, it is good practice to include a `_MESSAGE` column, as it will be automatically mapped to the `content` field in logs; otherwise `content` will be set to `context_name` when using `Plugin::_log_entries()`.
- When naming and referring to objects within `DTAGENT_DB`, make sure that they are called with all-uppercase names (not `dtagent_db.app...`). Lowercase names will not be caught and renamed when deployed with a custom tag, which will cause objects to be initialized in the default `DTAGENT_DB` instead of the desired, tagged one.
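To illustrate the last rule, the following hypothetical helper (not part of the deployment scripts) flags lowercase `dtagent_db` references that a tagged deployment would fail to rename:

```python
import re

# Hypothetical lint helper: find lowercase dtagent_db object references in SQL,
# which would NOT be caught and renamed when deploying with a custom tag.
def find_lowercase_refs(sql: str) -> list:
    return re.findall(r"\bdtagent_db\.[A-Za-z0-9_.]+", sql)

# Example (the view name is a made-up illustration):
sql = "grant select on dtagent_db.app.v_queries to role DTAGENT_VIEWER;"
print(find_lowercase_refs(sql))  # ['dtagent_db.app.v_queries']
```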
There are three types of metrics defined in the OpenTelemetry Metrics API:

- counters: something that we can keep adding up,
- gauges: something that gives a number with which we don't do sum/avg, etc.,
- histograms: something that we plan to sum/avg, etc. - basically everything else.
IMPORTANT: However, since the Dynatrace API only recognizes counters and gauges, and there are no examples of counter metrics so far, all metrics are sent as gauges until further improvements in the Dynatrace Snowflake Observability Agent code.
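The effect can be sketched as follows; this is a simplified, illustrative take on a metric-ingest line, with made-up field names, not the agent's actual serialization code:

```python
# Illustrative sketch only: a simplified metric-ingest line in which every
# metric is reported with the 'gauge' payload type, mirroring the behavior
# described above. The metric key and dimensions are made-up examples.
def as_gauge_line(key: str, dims: dict, value: float) -> str:
    dim_str = ",".join(f'{k}="{v}"' for k, v in dims.items())
    return f"{key},{dim_str} gauge,{value}"

line = as_gauge_line("snowflake.credits.used", {"warehouse": "COMPUTE_WH"}, 1.25)
print(line)  # snowflake.credits.used,warehouse="COMPUTE_WH" gauge,1.25
```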
The source code of Dynatrace Snowflake Observability Agent is organized into several types of files:
- Python sources: These are compiled into the final Snowpark code for the core stored procedures (`DTAGENT()` and `SEND_TELEMETRY()`).
- SQL scripts: Used to initialize Dynatrace Snowflake Observability Agent and create necessary objects, including views, procedures, and tasks. These scripts are located in `*.sql` folders. The best practice is to prefix all SQL files with three-digit numbers to enforce the proper order of execution. The following prefixes are "reserved" for special types of SQL scripts: `0xx` for core Dynatrace Snowflake Observability Agent initialization, plus plugin-specific procedures, views, and other scripts necessary for their execution; `70x` for core procedures; `80x` for task definitions; and `90x` for plugin-specific update procedures.
- Configuration files: Defined for the core and telemetry API, and separately for each plugin. These are located in `*.conf` folders.
- Documentation files: Each part of Dynatrace Snowflake Observability Agent and each plugin has documentation in `info.md` files, located in `*.conf` folders.
- Semantic Dictionary files: Defined in `instruments-def.yml` files, located in `*.conf` folders.
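The three-digit prefix convention for SQL scripts can be illustrated with a short sketch (the file names below are made-up examples):

```python
# Sorting SQL scripts by their three-digit numeric prefix yields the intended
# execution order; the file names here are illustrative, not real scripts.
scripts = [
    "801_task_definitions.sql",
    "011_init_core.sql",
    "701_core_procedures.sql",
    "901_update_plugin.sql",
]
ordered = sorted(scripts, key=lambda name: int(name[:3]))
print(ordered[0])  # 011_init_core.sql runs first
```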
The core of Dynatrace Snowflake Observability Agent code is located in the following folders:
`src/dtagent` contains the Python files, `src/dtagent.conf` includes the default configuration, documentation, and core semantic dictionary entries, and `src/dtagent.sql` contains SQL files used to initialize Dynatrace Snowflake Observability Agent base objects.
The telemetry API Python code is located in the `src/dtagent/otel` package, with the default configuration in `src/dtagent.conf/otel-config.json`.
Each plugin consists of:
- a single Python file (`$plugin_name.py`),
- a `$plugin_name.sql` folder with definitions of views, temporary tables, and helper procedures, and
- a `$plugin_name.conf` folder with configuration, semantic dictionary, and plugin documentation.

These are located in the `src/dtagent/plugins` directory.
Dynatrace Snowflake Observability Agent comes with a number of bash scripts used to support development (see Working with Dynatrace Snowflake Observability Agent source code for details):
- `./compile.sh` compiles Python code into Snowpark code,
- `./build.sh` builds Snowpark code into SQL scripts,
- `./build_docs.sh` rebuilds documentation, including PDF documents,
- `./package.sh` prepares Dynatrace Snowflake Observability Agent for distribution,
- `./test.sh` runs a single plugin test,
- `./test_core.sh` runs a test in the Jenkins context,
and deployment (also delivered in distribution package):
- `./deploy.sh` is used to deploy Dynatrace Snowflake Observability Agent,
- `./setup.sh` ensures all prerequisites for deploying Dynatrace Snowflake Observability Agent are met,
- `./install_snow_cli.sh` installs the Snowflake CLI,
- `./prepare_deploy_script.sh` generates a single deployment SQL script,
- `./prepare_config.sh` prepares a single configuration document for uploading to Snowflake,
- `./prepare_configuration_ingest.sh` prepares an SQL script that will update the configuration,
- `./prepare_instruments_ingest.sh` prepares an SQL script that will update the semantic dictionary,
- `./update_secret.sh` is called to set up the Dynatrace token as an API key,
- `./refactor_field_names.sh` is called to update the names of fields in your DQL code,
- `./send_event.sh` is called to send bizevents to Dynatrace to indicate when deployment starts and finishes.
The following figure illustrates all the steps to build code ready to be deployed, build the documentation, and finally prepare the distribution package.
The build process for the Dynatrace Snowflake Observability Agent package involves several steps:
- Compilation: The `compile.sh` script is used to create `_version.py` and the complete Python code for both main stored procedures, resulting in a single file for each (`_dtagent.py` and `_send_telemetry.py`). The `##INSERT` directive is used to control the order in which source Python files are assembled into the main one. NOTE: When Snowflake reports issues in those stored procedures, the lines in the Python code will correspond to the lines in these two files.
- Building and embedding: The `build.sh` script creates a single default configuration file (`build/config-default.json`) and a semantic dictionary file (`build/instruments-def.json`). It also copies over all SQL files from all `*.sql` folders. During the build process, the compiled Python files are embedded into the templates for the `APP.DTAGENT(sources array)` and `APP.SEND_TELEMETRY(sources variant, params object)` procedures, respectively. The corresponding SQL files reference precompiled Python code to be embedded with the `##INSERT` directive.
- Documentation Update: The `build_docs.sh` script ensures that the documentation in `README.md` and PDF files is up to date, including all changes to the default configuration and semantic dictionary definition.
- Packaging: The `package.sh` script copies all files intended for delivery into a separate folder (`package`), which is eventually zipped into an archive with the version and build number in the name.
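The `##INSERT`-based assembly can be pictured with a small sketch; this is a simplified illustration, and the actual logic in `compile.sh` may differ:

```python
# Simplified sketch of how a '##INSERT <file>' directive could be resolved
# when assembling source files into a single script; the real compile.sh
# implementation may differ. File names here are made-up examples.
def assemble(template_lines, sources):
    out = []
    for line in template_lines:
        if line.startswith("##INSERT"):
            name = line.split(maxsplit=1)[1]     # file named after the directive
            out.extend(sources[name].splitlines())
        else:
            out.append(line)
    return "\n".join(out)

template = ["# header", "##INSERT util.py", "# footer"]
sources = {"util.py": "def helper():\n    return 42"}
print(assemble(template, sources))
```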
This guide was created for developers who want to contribute to the Dynatrace Snowflake Observability Agent. If you only want to install and use the agent, please refer to the `INSTALL.md` guide.
You will need the following software installed:
The recommended setup is to use VS Code with the Snowflake plugin.
- Clone the repository:

  ```bash
  git clone https://github.com/dynatrace-oss/dynatrace-snowflake-observability-agent.git
  cd dynatrace-snowflake-observability-agent
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv .venv
  source .venv/bin/activate
  ```

- Install dependencies:

  The `setup.sh` script can help install most of the required tools:

  ```bash
  ./setup.sh
  ```
Alternatively, you can install them manually. You will need the dependencies for running the agent (see `INSTALL.md`) plus the development dependencies.

For Ubuntu/Debian:

```bash
sudo apt-get update
sudo apt-get install -y pango cairo gdk-pixbuf libffi pandoc
```

For macOS (using Homebrew):

```bash
brew install pango cairo gdk-pixbuf libffi pandoc
```

Additional Python packages for all platforms are listed in `requirements.txt`. Install them using pip:

```bash
pip install -r requirements.txt
```

If you checked out sources from git, once the environment is set up, before you can deploy your changes to Snowflake or package your release for someone else to deploy, you need to run:
```bash
./build.sh
```

The build process ensures all SQL and JSON files are ready to be deployed, starting with the invocation of `./compile.sh`, which creates a single Python script for the main Dynatrace Snowflake Observability Agent procedure (`./build/_dtagent.py`) and the telemetry sender procedure (`./build/_send_telemetry.py`). During the build, these Python files are embedded into the templates for the `APP.DTAGENT(sources array)` and `APP.SEND_TELEMETRY(sources variant, params object)` procedures, respectively. Other source files, including configuration and semantic dictionary, are also copied into the `./build` directory.
After successfully compiling and building the Dynatrace Snowflake Observability Agent, you need to deploy it using the `./deploy.sh` command (see the installation documentation for more details).
If you have made any changes to the documentation files (`info.md`), configuration files (`*.conf/*-config.json`), or semantic dictionary files (`*.conf/instruments-def.yml`), added a new plugin, or simply want to refresh the documentation, you need to run:
```bash
./build_docs.sh
```

This command will rebuild Dynatrace Snowflake Observability Agent and refresh `README.md`, plus it will deliver `Dynatrace-Snowflake-Observability-Agent-$VERSION.pdf`.
If the build process fails to run, you might need to install the following packages:
```bash
# on macOS
brew install pango cairo gdk-pixbuf libffi

# on Ubuntu
sudo apt-get install libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev
```

In some cases, you may also need to set up `WEASYPRINT_DLL_DIRECTORIES`:

```bash
# on macOS
export WEASYPRINT_DLL_DIRECTORIES=/opt/homebrew/lib
```

In case you want to share Dynatrace Snowflake Observability Agent with other users, you can call:
```bash
./package.sh
```

This will prepare a distribution package called `dynatrace_snowflake_observability_agent-$VERSION.$BUILD.zip`. The package will contain everything that is necessary to distribute Dynatrace Snowflake Observability Agent.
After successfully deploying the Dynatrace Snowflake Observability Agent, you can run tests using:
```bash
./test.sh $test_name
```

All tests are implemented with the pytest framework and stored in the `test` folder. Before running the tests, make sure to create the `test/credentials.json` file from the `test/credentials.jsonc` template.
The `$test_name` parameter is required and needs to be the name of the file, excluding the extension, from the `/test/plugins` directory that you want to run. The test files follow the naming pattern `/test/plugins/test_*.py`.
To test a single plugin, you can call:

```bash
./test.sh test_$plugin_name
```

If you want to (re)initialize the test data, you need to run:

```bash
./test.sh test_$plugin_name -p
```

The SQL files stored in the `test` folder can be used to run some additional tests manually.
