
Commit dbb6df1

[b/440495128] Add Oracle TCO discovery script

This PR adds an Oracle TCO discovery script as an alternative method of extracting Oracle metadata and statistics for the BigQuery Migration Assessment. The script uses the same SQL files as dwh-migration-dumper but executes them with SQL*Plus. This is helpful for users who cannot run the Java application due to security or compliance reasons. Currently, it extracts only the set of metadata and statistics required for the TCO calculation; it can later be extended to a full version.

1 parent 38552c8 commit dbb6df1

File tree

4 files changed: +164 -0 lines
README.md
Lines changed: 46 additions & 0 deletions

@@ -0,0 +1,46 @@
# 💻 Oracle Database Discovery Scripts

This package provides easy-to-use Oracle discovery scripts that automate the collection of key performance and configuration metrics, which are essential for a comprehensive Total Cost of Ownership (TCO) analysis.

The scripts leverage `SQL*Plus` to execute a series of predefined SQL queries, capturing valuable information about historical performance, storage utilization, and system configuration. Unlike `dwh-migration-dumper`, which is a Java application, these scripts use only native Oracle tooling, so DBAs can run them with no additional setup.

Retrieved Oracle data is saved in a CSV format accepted by BigQuery Migration Assessment.
## 🚀 Getting Started

### Prerequisites

To use this script, you must have the following installed and configured on your system:

- **Oracle Client**: A full Oracle Client or the Oracle Instant Client.
- **SQL*Plus**: The `sqlplus` command-line utility must be accessible from your system's `PATH`.
- **Database Permissions**: An Oracle common user with SYSDBA privileges.

Please note that the script must be executed in the root container (`CDB$ROOT`). Running it in one of the pluggable databases results in missing performance statistics and missing metadata about the other pluggable databases. A quick way to confirm which container your connection lands in is shown below.
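A minimal sanity check, using the example connection string from the usage section below:

```bash
# Print the current container for this connection; it should report CDB$ROOT.
echo "SHOW CON_NAME" | sqlplus -s system/PASSWORD@localhost:1521/ORCLCDB
```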
### Usage

Run the script from your terminal, passing the Oracle connection string as the first argument.

```bash
./tco_discovery.sh <ORACLE_CONN_STR>
```

**Example:**

```bash
./tco_discovery.sh system/PASSWORD@localhost:1521/ORCLCDB
```
## 🛠️ Configuration

The script's behavior can be customized by modifying the variables within the script itself (see the sketch after this list):

- `ORACLE_CONN`: The Oracle connection string. This is passed as a command-line argument, as shown in the usage example.
- `DURATION_DAYS`: The number of days of historical data to collect for AWR-based queries. Defaults to `7`.
- `OUTPUT_DIR`: The directory where the generated CSV files will be stored. The script creates this directory if it doesn't already exist. Defaults to `./out`.
- `DISCOVERY_SQLS`: An array of SQL script filenames to be executed. You can add or remove scripts from this list to customize your data collection.
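For example, to collect a month of AWR history into a custom directory, you could edit the variables at the top of `tco_discovery.sh` (illustrative values):

```bash
DURATION_DAYS=30              # collect 30 days of AWR history instead of the default 7
OUTPUT_DIR="/tmp/oracle_tco"  # write CSV files here instead of ./out
```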
## 📂 Output

Upon successful execution, the script creates the specified `OUTPUT_DIR` (e.g., `./out`) and populates it with a series of CSV files. Each file is named after the corresponding SQL script and contains the data captured from the database.
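With the default `DISCOVERY_SQLS` list, the output directory would look roughly like this (file names derived from the SQL script names; a sketch, not verbatim output):

```bash
ls ./out
# app-schemas-pdbs.csv  cell-config.csv  db-info.csv  osstat.csv
# segment-stats.csv  sys-metric-history.csv  used-space-details.csv
# compilerworks-metadata.yaml
```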
export.sql
Lines changed: 37 additions & 0 deletions

@@ -0,0 +1,37 @@
WHENEVER SQLERROR EXIT 1;

-- Set SQL*Plus environment options for CSV output
SET HEADING ON
SET FEEDBACK OFF
SET TERMOUT OFF
SET ECHO OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMOUT ON
SET UNDERLINE OFF
SET VERIFY OFF
SET MARKUP CSV ON DELIMITER ',' QUOTE OFF
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

-- Accept the input file name, output file name, and AWR window as arguments
DEFINE INPUT_SCRIPT = &1
DEFINE OUTPUT_FILE = &2
DEFINE DURATION_DAYS = &3

-- Spool the output to the specified CSV file. The @ command loads the query
-- into the SQL buffer; since the dumper SQL files carry no statement
-- terminator, the / on the following line executes the buffered statement.
SPOOL &OUTPUT_FILE
@&INPUT_SCRIPT &DURATION_DAYS;
/
SPOOL OFF;

-- Reset SQL*Plus environment options
SET ECHO ON
SET VERIFY ON
SET FEEDBACK ON
SET TERMOUT ON
SET PAGESIZE 50
SET LINESIZE 80

-- Exit successfully
EXIT SUCCESS
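For reference, export.sql can also be invoked standalone, mirroring the call that `tco_discovery.sh` makes (connection string and file paths are illustrative; the query file must already have its `?` placeholder replaced, as the main script does with sed):

```bash
sqlplus -s system/PASSWORD@localhost:1521/ORCLCDB @export.sql .query.sql ./out/db-info.csv 7
```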
spinner.sh
Lines changed: 20 additions & 0 deletions

@@ -0,0 +1,20 @@
#!/bin/bash

# Function to show the spinner animation
show_spinner() {
  local -a chars=('/' '-' '\' '|')
  local i=0
  while true; do
    # Print the frame via %s so the backslash frame is emitted literally
    # instead of being interpreted as a printf escape sequence
    printf "\r%s " "${chars[$i]}"
    i=$(( (i+1) % ${#chars[@]} ))
    sleep 0.1
  done
}

# Function to stop the spinner
stop_spinner() {
  SPINNER_PID=$1
  kill "$SPINNER_PID" &>/dev/null
  wait "$SPINNER_PID" &>/dev/null
  printf "\r" # Return to the start of the line so the next output overwrites the spinner
}
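Usage follows the pattern in `tco_discovery.sh`: start the spinner in the background, remember its PID, and stop it once the long-running command finishes. A minimal sketch:

```bash
source spinner.sh

show_spinner &   # start the animation in the background
SPINNER_PID=$!
sleep 3          # stand-in for a long-running command
stop_spinner "$SPINNER_PID"
```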
tco_discovery.sh
Lines changed: 61 additions & 0 deletions

@@ -0,0 +1,61 @@
#!/bin/bash

source spinner.sh

ORACLE_CONN="$1"
DURATION_DAYS=7
OUTPUT_DIR="./out"
DISCOVERY_SQLS=(
  "native/used-space-details.sql"  # CDB_SEGMENTS - used space
  "native/osstat.sql"              # GV$OSSTAT - system statistics like NUM_CPUS
  "native/cell-config.sql"         # GV$CELL - Exadata cell configuration
  "native/db-info.sql"             # V$DATABASE - database info
  "native/app-schemas-pdbs.sql"    # CDB_PDBS - pluggable databases info
  "awr/sys-metric-history.sql"     # CDB_HIST_SYSMETRIC_HISTORY - historical system statistics like CPU usage
  "awr/segment-stats.sql"          # CDB_HIST_SEG_STAT - historical segment statistics
)
DISCOVERY_SQL_DIR="$(dirname "$0")/../../dumper/app/src/main/resources/oracle-stats/cdb"
TMP_QUERY_FILE=".query.sql"
EXPORT_SCRIPT="export.sql"
mkdir -p "$OUTPUT_DIR"

# Run each SQL query and export the result to a CSV file
for sql_file in "${DISCOVERY_SQLS[@]}"; do
  file_path="$DISCOVERY_SQL_DIR/$sql_file"
  base_name=$(basename "$file_path" .sql)
  output_csv="$OUTPUT_DIR/$base_name.csv"

  if [ -f "$file_path" ]; then
    echo "[INFO]: Executing $base_name.sql"

    # Replace the JDBC variable placeholder '?' with a SQL*Plus substitution variable
    sed 's/?/\&1/' "$file_path" > "$TMP_QUERY_FILE"

    # Show spinner
    show_spinner &
    SPINNER_PID=$!

    # Run SQL*Plus; capture its exit status before stop_spinner overwrites $?
    sqlplus -s "$ORACLE_CONN" "@$EXPORT_SCRIPT" "$TMP_QUERY_FILE" "$output_csv" "$DURATION_DAYS"
    sqlplus_status=$?
    stop_spinner "$SPINNER_PID"
    if [ $sqlplus_status -ne 0 ]; then
      echo "[ERROR]: $base_name extraction failed."
    else
      echo "[SUCCESS]: $base_name.sql extraction ran without errors."
    fi
  else
    echo "[ERROR]: The file '$file_path' does not exist."
  fi
done

# Generate the zip metadata file that is required by BigQuery Migration Assessment
# (the timestamp is a fixed epoch-milliseconds value)
cat > "$OUTPUT_DIR/compilerworks-metadata.yaml" << EOL
format: "oracle_assessment_tco.zip"
timestamp: 1721846085350
product:
  arguments: "ConnectorArguments{connector=oracle-stats, assessment=true}"
EOL

# Build the final ZIP artifact that can be used with BigQuery Migration Assessment.
zip -j "oracle_assessment_tco.zip" "$OUTPUT_DIR"/*.csv "$OUTPUT_DIR"/*.yaml
rm -f "$TMP_QUERY_FILE"
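To sanity-check the artifact before handing it to BigQuery Migration Assessment, you can list the archive contents (a suggested verification step, not part of the script):

```bash
unzip -l oracle_assessment_tco.zip
```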
