46 changes: 46 additions & 0 deletions warehouse-discovery-scripts/oracle/README.md
@@ -0,0 +1,46 @@
# 💻 Oracle Database Discovery Scripts

This package provides easy-to-use Oracle discovery scripts that automate the collection of key performance and configuration metrics, which are essential for a comprehensive Total Cost of Ownership (TCO) analysis.

The script leverages `SQL*Plus` to execute a series of predefined SQL queries, capturing valuable information about historical performance, storage utilization, and system configuration. Unlike `dwh-migration-dumper`, which is a Java application, these scripts rely only on native Oracle tooling, so DBAs can run them without installing anything extra.

The retrieved Oracle data is saved in the CSV format accepted by the BigQuery Migration Assessment.

## 🚀 Getting Started

### Prerequisites

To use this script, you must have the following installed and configured on your system:

- **Oracle Client**: A full Oracle Client or the Oracle Instant Client.
- **SQL*Plus**: The `sqlplus` command-line utility must be accessible from your system's `PATH`.
- **Database Permissions**: An Oracle common user with SYSDBA privileges.

Please note that the script must be executed against the root container (`CDB$ROOT`). Running it against one of the pluggable databases results in missing performance statistics and missing metadata about the other pluggable databases.
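For example, you can confirm that a connection targets the root container before running the script. This is a minimal sketch that reuses the connection details from the usage example below and assumes `sqlplus` is on your `PATH`:

```bash
# Minimal sketch: check which container the session is attached to.
# The connection details are placeholders; adjust them to your environment.
sqlplus -s system/PASSWORD@localhost:1521/ORCLCDB <<'EOF'
SELECT SYS_CONTEXT('USERENV', 'CON_NAME') AS current_container FROM dual;
EXIT
EOF
```

If the query returns `CDB$ROOT`, the connection points at the root container and the discovery script will see all pluggable databases.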

### Usage

Run the script from your terminal, passing the Oracle connection string as the first argument.

```bash
./tco_discovery.sh <ORACLE_CONN_STR>
```

**Example:**

```bash
./tco_discovery.sh system/PASSWORD@localhost:1521/ORCLCDB
```

## 🛠️ Configuration

The script's behavior can be customized by modifying the variables within the script itself (a sketch of a customized block follows this list):

- `ORACLE_CONN`: The Oracle connection string. This is passed as a command-line argument, as shown in the usage example.
- `DURATION_DAYS`: The number of days of historical data to collect for AWR-based queries. Defaults to `7`.
- `OUTPUT_DIR`: The directory where the generated CSV files will be stored. The script will create this directory if it doesn't already exist. Defaults to `./out`.
- `DISCOVERY_SQLS`: An array of SQL script filenames to be executed. You can easily add or remove scripts from this list to customize your data collection.
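As an illustration, a customized variable block inside `tco_discovery.sh` could look like the following. The two-week window and the trimmed script list are hypothetical choices; the script names themselves come from the default list:

```bash
DURATION_DAYS=14      # hypothetical: collect two weeks of AWR history instead of the default 7
OUTPUT_DIR="./out"    # CSV files are written here; created automatically if missing

# A reduced collection that keeps only a few of the default scripts.
DISCOVERY_SQLS=(
  "native/db-info.sql"            # V$DATABASE - database info
  "native/osstat.sql"             # GV$OSSTAT - system statistics like NUM_CPUS
  "awr/sys-metric-history.sql"    # CDB_HIST_SYSMETRIC_HISTORY - historical CPU usage
)
```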

## 📂 Output

Upon successful execution, the script creates the specified `OUTPUT_DIR` (e.g., `./out`) and populates it with a series of CSV files. Each file is named after the corresponding SQL script and contains the data captured from the database. The script also writes a `compilerworks-metadata.yaml` file into the same directory and packages everything into `oracle_assessment_tco.zip`, the archive expected by the BigQuery Migration Assessment.
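With the default `DISCOVERY_SQLS` list, a successful run produces output along these lines (file names follow directly from the SQL script names):

```bash
$ ls -1 out/
app-schemas-pdbs.csv
cell-config.csv
compilerworks-metadata.yaml
db-info.csv
osstat.csv
segment-stats.csv
sys-metric-history.csv
used-space-details.csv
```

The matching `oracle_assessment_tco.zip` archive is created in the directory from which the script was launched.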
37 changes: 37 additions & 0 deletions warehouse-discovery-scripts/oracle/export.sql
@@ -0,0 +1,37 @@
WHENEVER SQLERROR EXIT 1;

-- Set SQL*Plus environment options for CSV output
SET HEADING ON
SET FEEDBACK OFF
SET TERMOUT OFF
SET ECHO OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMOUT ON
SET UNDERLINE OFF
SET VERIFY OFF
SET MARKUP CSV ON DELIMITER ',' QUOTE OFF
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

-- Accept the input script, the output file, and the AWR lookback window (in days) as positional arguments
DEFINE INPUT_SCRIPT = &1
DEFINE OUTPUT_FILE = &2
DEFINE DURATION_DAYS = &3

-- Spool the output to the specified CSV file
SPOOL &OUTPUT_FILE
@&INPUT_SCRIPT &DURATION_DAYS;
/
SPOOL OFF;

-- Reset SQL*Plus environment options
SET ECHO ON
SET VERIFY ON
SET FEEDBACK ON
SET TERMOUT ON;
SET PAGESIZE 50;
SET LINESIZE 80;

-- Exit successfully
EXIT SUCCESS
20 changes: 20 additions & 0 deletions warehouse-discovery-scripts/oracle/spinner.sh
@@ -0,0 +1,20 @@
#!/bin/bash
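# Spinner helpers sourced by tco_discovery.sh. Typical usage, as in that script:
#   show_spinner &
#   SPINNER_PID=$!
#   ...long-running command...
#   stop_spinner "$SPINNER_PID"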

# Function to show the spinner animation
show_spinner() {
local -a chars=('/' '-' '\' '|')
local i=0
while true; do
printf "\r${chars[$i]} "
i=$(( (i+1) % ${#chars[@]} ))
sleep 0.1
done
}

# Function to stop the spinner
stop_spinner() {
SPINNER_PID=$1
kill $SPINNER_PID &>/dev/null
wait $SPINNER_PID &>/dev/null
printf "\r" # Clears the spinner line
}
61 changes: 61 additions & 0 deletions warehouse-discovery-scripts/oracle/tco_discovery.sh
@@ -0,0 +1,61 @@
#!/bin/bash

source spinner.sh

ORACLE_CONN="$1"
DURATION_DAYS=7
OUTPUT_DIR="./out"
DISCOVERY_SQLS=(
  "native/used-space-details.sql"   # CDB_SEGMENTS - used space
  "native/osstat.sql"               # GV$OSSTAT - system statistics like NUM_CPUS
  "native/cell-config.sql"          # GV$CELL - Exadata cell configuration
  "native/db-info.sql"              # V$DATABASE - database info
  "native/app-schemas-pdbs.sql"     # CDB_PDBS - pluggable databases info
  "awr/sys-metric-history.sql"      # CDB_HIST_SYSMETRIC_HISTORY - historical system statistics like CPU usage
  "awr/segment-stats.sql"           # CDB_HIST_SEG_STAT - historical segment statistics
)
DISCOVERY_SQL_DIR="$(dirname "$0")/../../dumper/app/src/main/resources/oracle-stats/cdb"
TMP_QUERY_FILE=".query.sql"
EXPORT_SCRIPT="export.sql"
mkdir -p "$OUTPUT_DIR"

# Run each SQL query and export the result to a CSV file
for sql_file in "${DISCOVERY_SQLS[@]}"; do
  file_path="$DISCOVERY_SQL_DIR/$sql_file"
  base_name=$(basename "$file_path" .sql)
  output_csv="$OUTPUT_DIR/$base_name.csv"

  if [ -f "$file_path" ]; then
    echo "[INFO]: Executing $base_name.sql"

    # Replace JDBC variable placeholder '?' with SQL*Plus substitution
    sed 's/?/\&1/' "$file_path" > "$TMP_QUERY_FILE"

    # Show spinner
    show_spinner &
    SPINNER_PID=$!

    # Run SQL*Plus and capture its exit status before stopping the spinner,
    # so the success check reflects the export rather than stop_spinner
    sqlplus -s "$ORACLE_CONN" "@$EXPORT_SCRIPT" "$TMP_QUERY_FILE" "$output_csv" "$DURATION_DAYS"
    sqlplus_status=$?
    stop_spinner "$SPINNER_PID"
    if [ "$sqlplus_status" -ne 0 ]; then
      echo "[ERROR]: $base_name extraction failed."
    else
      echo "[SUCCESS]: $base_name.sql extraction ran without errors."
    fi
  else
    echo "[ERROR]: The file '$file_path' does not exist."
  fi
done

# Generate the ZIP metadata file that is required by BigQuery Migration Assessment
cat > "$OUTPUT_DIR/compilerworks-metadata.yaml" << EOL
format: "oracle_assessment_tco.zip"
timestamp: 1721846085350
product:
  arguments: "ConnectorArguments{connector=oracle-stats, assessment=true}"
EOL

# Build final ZIP artifact that can be used with BigQuery Migration Assessment.
zip -j "oracle_assessment_tco.zip" "$OUTPUT_DIR"/*.csv "$OUTPUT_DIR"/*.yaml
rm -f "$TMP_QUERY_FILE"