Commit b4268ef (2 parents: 0a464cd + fdcdc31)

Merge pull request #90 from bqbooster/doc-improv

Improve the documentation regarding combining audit logs with information schema jobs data
3 files changed: +16 −5
```diff
@@ -0,0 +1,6 @@
+kind: Docs
+body: Improve the documentation regarding combining audit logs with information schema jobs data
+time: 2024-12-15T17:00:28.333756+01:00
+custom:
+  Author: Kayrnt
+  Issue: ""
```

docs/audit-logs-vs-information-schema.md (+5 −4)
```diff
@@ -9,7 +9,10 @@ There are two ways to monitor BigQuery jobs:
 - Using the BigQuery audit logs
 - Using the `INFORMATION_SCHEMA.JOBS` table
 
-## How to choose
+dbt-bigquery-monitoring supports both methods and goes further by offering a configuration that combines the two sources into a unified view.
+See `should_combine_audit_logs_and_information_schema` in the [configuration](/configuration) if you want to combine the two sources (a minimal sketch follows this diff).
+
+## What's in there?
 
 Each solution has its advantages and disadvantages. Here is a comparison table to help you choose the right one for your use case:
 
@@ -18,9 +21,7 @@ Each solution has its advantages and disadvantages. Here is a comparison
 | Max retention | User defined | 6 months |
 | Detailed User information |||
 | BI Engine |||
-| Insights |||
-
-At some point, a mode to combine both solutions could be implemented.
+| Jobs insights |||
 
 ## Audit logs
 
```
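As a concrete illustration, here is a minimal sketch of how this option might be set in a consuming project's `dbt_project.yml`. The variable name comes from the diff above, but the placement under a project-level `vars:` block and the boolean value are assumptions, not something this commit confirms:

```yaml
# dbt_project.yml (illustrative sketch — check the package's configuration
# docs for the authoritative name and placement of this setting)
vars:
  # Assumed boolean: combine BigQuery audit logs and INFORMATION_SCHEMA.JOBS
  # into a single unified source for the monitoring models
  should_combine_audit_logs_and_information_schema: true
```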
docs/running-the-package.md (+5 −1)
````diff
@@ -6,12 +6,16 @@ slug: /running-the-package
 # Running the package
 
 The package is designed to be run as a daily or hourly job.
-To do so, you can use the following dbt command:
+It leverages incremental models to reduce the amount of data to process and to optimize query performance. In practice, it won't reread data that has already been processed (and is no longer needed).
+
+If you plan to run all models, the simplest way is to run the following dbt command:
 
 ```
 dbt run -s tag:dbt-bigquery-monitoring
 ```
 
+The data partitioning granularity is hourly, so the most cost-efficient way to process the data is to run the job every hour, but you may run it more frequently if you need more "real-time" data (see the scheduling sketch after this diff).
+
 ## Tags
 
 The package provides the following tags that can be used to filter the models:
````
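To make the hourly cadence concrete, here is a minimal scheduling sketch using a GitHub Actions cron workflow. Only the `dbt run -s tag:dbt-bigquery-monitoring` command comes from the diff; the workflow file name, the `dbt-bigquery` install step, and the profile handling are illustrative assumptions:

```yaml
# .github/workflows/dbt-bigquery-monitoring.yml (illustrative sketch)
name: dbt-bigquery-monitoring hourly run

on:
  schedule:
    - cron: "0 * * * *" # hourly, matching the package's hourly partitioning

jobs:
  dbt-run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Assumed setup: the dbt-bigquery adapter plus a profiles.yml
      # committed to the repo or materialized from secrets
      - run: pip install dbt-bigquery
      - run: dbt run -s tag:dbt-bigquery-monitoring
        env:
          DBT_PROFILES_DIR: ${{ github.workspace }}
```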
