A comprehensive tool for processing the output of TT-NN Graph Tracing. With it, you can quickly see which operations are called and with what arguments, and easily filter data from single or multiple runs.
- Trace Capture and Storage: Store and organize trace data in a SQLite database
- Web-based Viewer: Browse and analyze trace data through an intuitive web interface
- Export Capabilities: Export trace data to CSV and Google Sheets
- Flexible Data Processing: Filter, sort, and analyze operation data efficiently
- Custom Parsers: Create and save custom parsers for your specific analysis needs
- Automatic Browser Launch: Viewer automatically opens in your default browser
- Smart JSON Processing: Automatically detects and processes both raw and pre-processed JSON formats
Install directly from the source:

```
pip install .
```

You can also install the latest release directly from GitHub:

```
pip install https://github.com/ayerofieiev-tt/ttnn-trace-viewer/releases/download/latest/ttnn_trace_viewer-0.1.0-py3-none-any.whl
```

For development work with editable mode:

```
pip install -e .
```

After installation, several command-line tools become available:
- Launch the viewer web interface:

  ```
  ttnn-trace-viewer
  ```

  This will start a Flask web server accessible at http://localhost:5000 and automatically open it in your default browser.

  To start without automatically opening a browser:

  ```
  ttnn-trace-viewer --no-browser
  ```
- Store trace data in the database:

  ```
  # From a JSON file:
  ttnn-store your_trace_file.json "My Trace Run"

  # From a directory containing CSV files:
  ttnn-store your_csv_directory "My CSV Data"
  ```
- Convert trace data to CSV:

  ```
  ttnn-to-csv input_directory output_file.csv
  ```
- Upload trace data to Google Sheets:

  ```
  ttnn-to-sheets directory "My Spreadsheet Title"
  ```
To set up a development environment:
- Clone the repository:

  ```
  git clone <repository-url>
  cd ttnn_capture
  ```

- Create and activate a virtual environment:

  ```
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies in development mode:

  ```
  pip install -e .
  ```

- Run the viewer during development:

  ```
  python trace_viewer.py
  ```
- Capture a trace file from your TT-NN application (typically a JSON file; see the capture sketch after this list)
- Store it in the database:

  ```
  ttnn-store your_trace.json "My Application Run"
  ```

- Launch the viewer to explore the data:

  ```
  ttnn-trace-viewer
  ```
- Use the web interface to filter operations, analyze arguments, and explore trace data
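If you do not yet have a trace file, the snippet below is a minimal sketch of producing one with TT-NN's graph-capture API. The `ttnn.graph` calls and the `RunMode` value are assumptions based on recent tt-metal releases, so verify them against your installed version:

```python
# Minimal sketch of producing a raw trace file, assuming the
# ttnn.graph capture API from recent tt-metal releases (verify
# begin_graph_capture/end_graph_capture against your version).
import json
import ttnn

ttnn.graph.begin_graph_capture(ttnn.graph.RunMode.NORMAL)

# ... run the TT-NN operations you want traced here ...

captured_graph = ttnn.graph.end_graph_capture()

# Dump the captured graph; if your version returns an object that is
# not directly JSON-serializable, serialize it first (the viewer uses
# GraphTracerUtils.serialize_graph() for raw-format uploads).
with open("your_trace.json", "w") as f:
    json.dump(captured_graph, f)
```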
- Store multiple trace files with descriptive names:

  ```
  ttnn-store trace1.json "Run with optimization A"
  ttnn-store trace2.json "Run with optimization B"
  ```
- Use the viewer to switch between uploads and compare results
- Create custom parsers in the viewer to extract specific metrics
To use the Google Sheets integration:
- Set up Google Sheets API:
  - Go to the Google Cloud Console
  - Create a new project or select an existing one
  - Enable the Google Sheets API for your project
  - Create OAuth 2.0 Client ID credentials
  - Download the client configuration file
  - Rename it to `credentials.json` and place it in your working directory

- Upload data to Google Sheets:

  ```
  ttnn-to-sheets your_data_directory "Performance Analysis"
  ```
The first time you run this, it will open a browser for authentication. Your credentials will be saved in `token.pickle` for future use.
- Data is stored in a SQLite database (`traces.db`) by default; see the inspection sketch after this list
- The web viewer provides both upload-based and consolidated views of your trace data
- Custom parsers allow for advanced analysis directly in the viewer
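Because the database is plain SQLite, you can also inspect it outside the viewer with Python's standard library. The sketch below deliberately avoids assuming any particular schema (the table layout is an implementation detail of the tool) and just lists whatever tables exist:

```python
# Inspect the trace database directly with the built-in sqlite3 module.
# The table layout is an implementation detail of ttnn-trace-viewer,
# so this sketch discovers tables instead of assuming a schema.
import sqlite3

con = sqlite3.connect("traces.db")
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
for (name,) in tables:
    count = con.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
    print(f"{name}: {count} rows")
con.close()
```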
The tool now supports two different JSON formats:
This format contains the raw graph structure with connections and arguments:
```json
[
  {
    "arguments": [...],
    "connections": [...],
    "params": {
      "name": "operation_name",
      ...
    }
  },
  ...
]
```

When uploading a file in this format, it will be automatically processed using `GraphTracerUtils.serialize_graph()` before converting to CSV.
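If you want to apply the same pre-processing yourself before uploading, a sketch might look like the following. The import path for `GraphTracerUtils` is an assumption (this README only names the class and method), and the output shape is inferred from the processed format described next:

```python
# Hedged sketch: pre-processing a raw trace the way the uploader does.
# The import location of GraphTracerUtils is an assumption; only the
# class and method names come from this README.
import json
from ttnn.graph import GraphTracerUtils  # assumed import path

with open("raw_trace.json") as f:
    raw_graph = json.load(f)

# Expected to yield the processed format described below
# (a dict with a "content" list of operations).
processed = GraphTracerUtils.serialize_graph(raw_graph)

with open("processed_trace.json", "w") as f:
    json.dump(processed, f)
```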
This format has already been processed and contains a "content" key with a list of operations:
```json
{
  "content": [
    {
      "operation": "operation_name",
      "arguments": [...]
    },
    ...
  ]
}
```

Files in this format will be directly converted to CSV without additional processing.
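The automatic detection mentioned in the features list boils down to checking which of these two shapes a file matches. A minimal sketch of that check, where `detect_format` is a hypothetical helper for illustration rather than part of the tool's API:

```python
# Sketch of distinguishing the two supported JSON formats.
# detect_format() is a hypothetical helper for illustration only.
import json

def detect_format(path):
    with open(path) as f:
        data = json.load(f)
    # Processed format: a dict with a "content" list of operations.
    if isinstance(data, dict) and isinstance(data.get("content"), list):
        return "processed"
    # Raw format: a list of graph nodes (dicts with params/connections).
    if isinstance(data, list) and all(isinstance(node, dict) for node in data):
        return "raw"
    raise ValueError(f"{path} does not match a known trace format")

print(detect_format("input.json"))
```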
You can use the JSON processor directly from the command line:
```
python json_processor.py input.json [output_file] [--csv] [--group] [--no-duplicates]
```

Arguments:

- `input.json` - Input JSON file (raw or processed format)
- `output_file` - Optional output file path
- `--csv` - Output in CSV format instead of JSON
- `--group` - Group operations by name (CSV only)
- `--no-duplicates` - Remove duplicate entries (CSV only)
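For example, to convert a trace to CSV, grouped by operation name with duplicate entries removed:

```
python json_processor.py input.json ops_by_name.csv --csv --group --no-duplicates
```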