`docs/audit-logs-vs-information-schema.md` (+5 −4)

````diff
@@ -9,7 +9,10 @@ There are two ways to monitor BigQuery jobs:
 - Using the BigQuery audit logs
 - Using the `INFORMATION_SCHEMA.JOBS` table
 
-## How to choose
+dbt-bigquery-monitoring supports both methods and goes further by offering a configuration that combines the two sources into a unified view.
+See `should_combine_audit_logs_and_information_schema` in the [configuration](/configuration) if you want to combine the two sources.
+
+
 ## What's in there?
 
 Each solution has its advantages and disadvantages. Here is a comparison table to help you choose the right one for your use case:
@@ -18,9 +21,7 @@ Each solution has its advantages and disadvantages. Here is a comparison
 | Max retention | User defined | 6 months |
 | Detailed User information | ✅ | ❌ |
 | BI Engine | ❌ | ✅ |
-| Insights | ❌ | ✅ |
-
-At some point, a mode to combine both solutions could be implemented.
````
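The `should_combine_audit_logs_and_information_schema` option referenced above is a package configuration; a minimal sketch of enabling it as a dbt variable in `dbt_project.yml` (whether the variable is global or package-scoped is an assumption here — the package's configuration page is authoritative):

```yaml
# dbt_project.yml — hypothetical sketch; check the package's configuration docs
vars:
  # Combine the audit logs and INFORMATION_SCHEMA.JOBS sources into one
  should_combine_audit_logs_and_information_schema: true
```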
`docs/running-the-package.md` (+5 −1)

````diff
@@ -6,12 +6,16 @@ slug: /running-the-package
 # Running the package
 
 The package is designed to be run as a daily or hourly job.
-To do so, you can use the following dbt command:
+It leverages incremental models to reduce the amount of data to process and to optimize query performance. In practice, it won't re-read data that has already been processed (and is no longer needed).
+
+If you plan to run all models, the simplest way is to run the following dbt command:
 
 ```
 dbt run -s tag:dbt-bigquery-monitoring
 ```
 
+The granularity of the data partitioning is hourly, so the most cost-efficient approach is to run it every hour, but you may run it more frequently if you need more "real-time" data.
+
 ## Tags
 
 The package provides the following tags that can be used to filter the models:
````
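Since the partitioning is hourly, a simple way to schedule the package is an hourly cron entry wrapping the dbt command above; a hypothetical sketch (the project path and log location are placeholders, not from the package docs):

```
# crontab entry — runs the monitoring models at minute 5 of every hour
# /path/to/dbt/project is a placeholder; adapt to your environment
5 * * * * cd /path/to/dbt/project && dbt run -s tag:dbt-bigquery-monitoring >> /var/log/dbt-monitoring.log 2>&1
```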