
Commit 2fb1601

doc updates
1 parent 6f778b3 commit 2fb1601

File tree: 4 files changed, +72 −18 lines


README.adoc

Lines changed: 2 additions & 2 deletions

@@ -4,8 +4,8 @@
 *Author*: Chinmayee Lakkad
 *Email*: link:mailto:chinmayee.lakkad@snowflake.com[chinmayee.lakkad@snowflake.com]
-*Release Date*: November 2025
-*Version*: 2.0.0
+*Release Date*: March 2026
+*Version*: 2.1.0

 toc::[]

ascii-docs/DEMOS.adoc

Lines changed: 2 additions & 2 deletions

@@ -51,8 +51,8 @@ Both data engineering scenarios can be run together as well as separately.
 ** Document metadata lands in `DOCUMENTS` directory table and its associated stream table

 . *Processing*: Document models are refreshed through the triggered task:
-** Document models are refreshed through the triggered task:
-*** incm_root_triggered_docs_processing
++
+*** `incm_root_triggered_docs_processing`
 ** Models use AI to extract text from the documents (PDFs, Word documents, etc.) uploaded to Snowflake.
 ** Models use AI to extract questions and answers from the documents.
 ** Models create a view for processing in Gold Zone.

ascii-docs/SETUP.adoc

Lines changed: 58 additions & 10 deletions

@@ -78,9 +78,17 @@ If you change any of the below parameters, be sure to change the `profiles.yml`
 ** `DBT_PROJECT_ADMIN_ROLE={default-role}`
 ====
 +
-Edit the `{env-file}` file and configure all variables except this one (generated later):
+Edit the `{env-file}` file and configure *required* variables:
 +
-** `DBT_SNOWFLAKE_PASSWORD`
+** `GIT_REPOSITORY_URL` — your forked repository URL
+** `GIT_USER_EMAIL` — your Git email
+** `GIT_USER_REPO_PAT` — a GitHub personal access token
+** `DBT_DEPS_EAI` — external access integration object
+** `SNOWFLAKE_INTELLIGENCE_INT_OBJECT` — Snowflake Intelligence integration object
+** `CORTEX_SEARCH_WH` — warehouse for the Cortex Search service
+** `CORTEX_SEARCH_SERVICE_NAME` — name for the Cortex Search service
++
+Leave `DBT_SNOWFLAKE_PASSWORD` empty (generated later).


 . Setup Snowflake Infrastructure
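The required-variables list added above can be sketched as an env file fragment. All values here are placeholders, not real defaults; only the variable names come from the diff:

```shell
# Hypothetical {env-file} fragment; variable names from the list above,
# every value is a placeholder you must replace with your own.
GIT_REPOSITORY_URL="https://github.com/<your-user>/<your-fork>.git"
GIT_USER_EMAIL="you@example.com"
GIT_USER_REPO_PAT="<github-personal-access-token>"
DBT_DEPS_EAI="<external-access-integration-name>"
SNOWFLAKE_INTELLIGENCE_INT_OBJECT="<snowflake-intelligence-integration-name>"
CORTEX_SEARCH_WH="<warehouse-name>"
CORTEX_SEARCH_SERVICE_NAME="<cortex-search-service-name>"
DBT_SNOWFLAKE_PASSWORD=""   # leave empty; generated later in the setup
```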
@@ -101,8 +109,11 @@ Role setup commands require ACCOUNTADMIN privileges
 +
 [source,bash,subs="attributes+"]
 ----
-# Run complete installation
-make install ENV_FILE={env-file} CONN=<{conn-placeholder}>
+# Run complete installation (ENV_FILE defaults to .env)
+make install CONN=<{conn-placeholder}>
+
+# Or specify a custom env file
+make install CONN=<{conn-placeholder}> ENV_FILE=prod.env
 ----
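The "ENV_FILE defaults to .env" behavior described in the new comments can be sketched in shell. This is an illustration of the usual defaulting pattern, not code copied from the project's Makefile:

```shell
# Illustration of "ENV_FILE defaults to .env" (assumed defaulting
# pattern, not taken from the project's Makefile).
unset ENV_FILE                 # simulate a caller that did not pass ENV_FILE
ENV_FILE="${ENV_FILE:-.env}"   # fall back to .env when unset or empty
echo "$ENV_FILE"               # -> .env
```

Passing `ENV_FILE=prod.env` on the command line would override the default in the same way.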

 .. Option B: Step-by-Step Setup

@@ -118,10 +129,10 @@ Run each step individually:
 +
 [source,bash,subs="attributes+"]
 ----
-# Step 1: Generate snowflake.yml configuration
-make generate-yaml ENV_FILE={env-file}
+# Step 1: Generate snowflake.yml configuration (ENV_FILE defaults to .env)
+make generate-yaml

-# Step 2: Setup dbt Projects infrastructure
+# Step 2: Setup dbt Projects infrastructure (accountadmin + sysadmin objects)
 make setup-dbt-stack CONN=<{conn-placeholder}>

 # Step 3: Setup Slack connector infrastructure
@@ -137,6 +148,16 @@ make setup-procs-funcs CONN=<{conn-placeholder}>
 {streamlit-env-var}=true make deploy-streamlit CONN=<{conn-placeholder}>
 ----
 +
+[TIP]
+====
+*Quick start without Slack:* Use the `only-dbt` target to run steps 1+2+3+5+6 and load sample test data, skipping the Slack connector setup:
+
+[source,bash,subs="attributes+"]
+----
+make only-dbt CONN=<{conn-placeholder}>
+----
+====
++
 [NOTE]
 ====
 The Streamlit deployment step is optional and will only proceed if `{streamlit-env-var}` is set to `true`. If the environment variable is not set or set to any other value, the deployment will be skipped with a warning message.
@@ -192,15 +213,42 @@ Drop a test documents (PDFs, Word documents, etc.) in the bronze_zone.DOCUMENTS
 +
 Manually execute the Tasks available in the following order to build the baseline models and the Cortex AI services on top including the Cortex Agent:
 +
-* `incm_root_daily_incremental_refresh`
-* `incm_root_triggered_docs_processing`
-* `incm_root_deploy_cortex_services`
+* `incm_root_daily_incremental_refresh` — refreshes daily incremental models (incidents, comment history, etc.)
+* `incm_root_triggered_docs_processing` — processes newly uploaded documents (triggered by stream)
+* `incm_root_deploy_cortex_tools` — deploys Semantic View, Cortex Search service, and Cortex Agent
 +

 . Snowflake Intelligence
 +
 Once Cortex Agent has been created follow instructions link:https://docs.snowflake.com/en/user-guide/snowflake-cortex/snowflake-intelligence#add-agents[here] to add the Cortex Agent to Snowflake Intelligence.

+. Loading Sample Test Data (Optional)
++
+If you are not using the Slack connector or want to quickly populate the gold zone tables with sample data for testing, use:
++
+[source,bash,subs="attributes+"]
+----
+make load-test-data CONN=<{conn-placeholder}>
+----
++
+This uploads CSV seed files from `data/csv/` to the stage, creates gold zone tables, and loads the sample data. The tables created are: `incidents`, `active_incidents`, `closed_incidents`, `incident_comment_history`, `incident_attachments`, `document_full_extracts`, and `quaterly_review_metrics`.
++
+[NOTE]
+====
+This target uses `CREATE OR REPLACE TABLE` so it will overwrite existing gold zone tables. It is intended for initial setup and testing only.
+====
+
+. Redeploying the dbt Project (Optional)
++
+After pushing changes to the git repository (e.g., updated dbt models, macros, or agent specs), redeploy the dbt project in Snowflake:
++
+[source,bash,subs="attributes+"]
+----
+make redeploy-dbt CONN=<{conn-placeholder}>
+----
++
+This fetches the latest code from the git repository and adds a new version to the dbt project.
+
 . Streamlit Dashboard (Optional)
 +
 Optionally, you can deploy a Streamlit dashboard to visualize the incidents and their statuses as a more traditional method of reporting pre-calculated metrics and insights.

ascii-docs/cortex_agent.adoc

Lines changed: 10 additions & 4 deletions

@@ -14,13 +14,19 @@ This page describes the approach taken by this project to deploy the Cortex Agen

 === Cortex Agent Deployment

-The Cortex Agent is deployed using the macro link:../src/incident_management/macros/create_cortex_agent.sql[`create_cortex_agent`] in Gold Zone. This macro is used to create the Cortex Agent with the given name, stage name, spec file, and agent profile.
+The Cortex Agent is deployed using the macro link:../src/incident_management/macros/create_cortex_agent.sql[`create_cortex_agent`] in Gold Zone. This macro accepts the following parameters: `agent_name`, `database`, `schema`, `stage_name`, `agent_spec_file`, and `next_version`.

-The actual definition of the Cortex Agent is stored in a YAML specification file that should conform to this link:https://docs.snowflake.com/en/sql-reference/sql/create-agent#required-parameters[Cortex Agent Specification], and is expected to be staged in gold_zone.agent_specs stage prior to the deployment of the Cortex Agent.
+The macro is *idempotent*: it checks whether the agent already exists in the database. If the agent does not exist, it creates it with `CREATE OR REPLACE AGENT` and registers it with Snowflake Intelligence. If the agent already exists, it updates the live version specification with `ALTER AGENT ... MODIFY LIVE VERSION` instead.

-A sample spec file is provided in the link:../src/cortex_agents/incm360_agent_1.yml[incm360_agent_1.yml] file, and is loaded during the Snowflake setup process.
+Each deployment is tagged with a version identifier and timestamp in the agent's `COMMENT` metadata, making it easy to track which spec version is deployed and when.

-This macro does not need to be run on a regular basis and can only be run when the composition changes in terms of tools, tool resources, or instructions, etc.
+The actual definition of the Cortex Agent is stored in a YAML specification file that should conform to this link:https://docs.snowflake.com/en/sql-reference/sql/create-agent#required-parameters[Cortex Agent Specification], and is expected to be staged in the `gold_zone.agent_specs` stage prior to deployment.
+
+The current spec file is link:../src/cortex_agents/incm360_agent_v200.yml[incm360_agent_v200.yml], loaded during the Snowflake setup process by `03_procs_and_funcs.sql`.
+
+The agent spec is read from the stage at deployment time using a Python UDF (`READ_STAGE_FILE`) created in the `dbt_project_deployments` schema. The YAML content is passed verbatim into the `FROM SPECIFICATION` clause, preserving original indentation.
+
+This macro does not need to be run regularly; rerun it only when the agent's composition changes (tools, tool resources, instructions, etc.). It is invoked via the `incm_deploy_cortex_agent` task in the Cortex services deployment task graph.

 Having Cortex Agent created will allow you to use it in Snowflake Intelligence to answer questions about the incident management process. Post installation you will be required to add this explicitly to Snowflake Intelligence as described link:https://docs.snowflake.com/en/user-guide/snowflake-cortex/snowflake-intelligence#add-agents[here].
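The idempotent create-or-alter decision described in the new paragraphs can be caricatured in shell. Here `agent_exists` is a stand-in for the macro's real existence check, which runs in SQL inside `create_cortex_agent`:

```shell
# Toy sketch of the create_cortex_agent decision logic described above.
# agent_exists stands in for the macro's actual SQL existence lookup.
agent_exists=false   # pretend this is the first deployment

if [ "$agent_exists" = "true" ]; then
  # subsequent runs: update the live spec in place
  echo "ALTER AGENT ... MODIFY LIVE VERSION"
else
  # first run: create the agent and register it with Snowflake Intelligence
  echo "CREATE OR REPLACE AGENT"
fi
```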
