|
32 | 32 |
|
33 | 33 | <!-- Expanded navigation --> |
34 | 34 | <div id="navbar-collapse" class="navbar-collapse collapse"> |
| 35 | + <!-- Main navigation --> |
| 36 | + <ul class="nav navbar-nav"> |
| 37 | + <li class="nav-item"> |
| 38 | + <a href="." class="nav-link active" aria-current="page">Home</a> |
| 39 | + </li> |
| 40 | + <li class="nav-item dropdown"> |
| 41 | + <a href="#" class="nav-link dropdown-toggle" role="button" data-bs-toggle="dropdown" aria-expanded="false">Commands</a> |
| 42 | + <ul class="dropdown-menu"> |
| 43 | + |
| 44 | +<li> |
| 45 | + <a href="index.md#auto" class="dropdown-item">Auto</a> |
| 46 | +</li> |
| 47 | + |
| 48 | +<li> |
| 49 | + <a href="index.md#tui-interactive-selection" class="dropdown-item">TUI</a> |
| 50 | +</li> |
| 51 | + |
| 52 | +<li> |
| 53 | + <a href="index.md#manual" class="dropdown-item">Manual</a> |
| 54 | +</li> |
| 55 | + </ul> |
| 56 | + </li> |
| 57 | + <li class="nav-item dropdown"> |
| 58 | + <a href="#" class="nav-link dropdown-toggle" role="button" data-bs-toggle="dropdown" aria-expanded="false">Performance Comparison</a> |
| 59 | + <ul class="dropdown-menu"> |
| 60 | + |
| 61 | +<li> |
| 62 | + <a href="index.md#tui-track-interactive-performance-comparison" class="dropdown-item">TUI Track</a> |
| 63 | +</li> |
| 64 | + |
| 65 | +<li> |
| 66 | + <a href="index.md#track-auto" class="dropdown-item">Track Auto</a> |
| 67 | +</li> |
| 68 | + |
| 69 | +<li> |
| 70 | + <a href="index.md#track-manual" class="dropdown-item">Track Manual</a> |
| 71 | +</li> |
| 72 | + </ul> |
| 73 | + </li> |
| 74 | + <li class="nav-item"> |
| 75 | + <a href="index.md#cicd-fail-on-regressions" class="nav-link">CI/CD</a> |
| 76 | + </li> |
| 77 | + </ul> |
35 | 78 |
|
36 | 79 | <ul class="nav navbar-nav ms-md-auto"> |
37 | 80 | <li class="nav-item"> |
|
62 | 105 |
|
63 | 106 | <li class="nav-item" data-bs-level="1"><a href="#profiling-data-management" class="nav-link">Profiling Data Management</a> |
64 | 107 | <ul class="nav flex-column"> |
| 108 | + <li class="nav-item" data-bs-level="2"><a href="#quick-reference" class="nav-link">Quick Reference</a> |
| 109 | + <ul class="nav flex-column"> |
| 110 | + </ul> |
| 111 | + </li> |
65 | 112 | <li class="nav-item" data-bs-level="2"><a href="#auto" class="nav-link">Auto</a> |
66 | 113 | <ul class="nav flex-column"> |
67 | 114 | </ul> |
|
70 | 117 | <ul class="nav flex-column"> |
71 | 118 | </ul> |
72 | 119 | </li> |
| 120 | + <li class="nav-item" data-bs-level="2"><a href="#tui-interactive-selection" class="nav-link">TUI - Interactive Selection</a> |
| 121 | + <ul class="nav flex-column"> |
| 122 | + </ul> |
| 123 | + </li> |
73 | 124 | <li class="nav-item" data-bs-level="2"><a href="#manual" class="nav-link">Manual</a> |
74 | 125 | <ul class="nav flex-column"> |
75 | 126 | </ul> |
|
83 | 134 |
|
84 | 135 | <li class="nav-item" data-bs-level="1"><a href="#performance-comparison" class="nav-link">Performance Comparison</a> |
85 | 136 | <ul class="nav flex-column"> |
| 137 | + <li class="nav-item" data-bs-level="2"><a href="#tui-track-interactive-performance-comparison" class="nav-link">TUI Track - Interactive Performance Comparison</a> |
| 138 | + <ul class="nav flex-column"> |
| 139 | + </ul> |
| 140 | + </li> |
86 | 141 | <li class="nav-item" data-bs-level="2"><a href="#track-auto" class="nav-link">Track Auto</a> |
87 | 142 | <ul class="nav flex-column"> |
88 | 143 | </ul> |
|
109 | 164 |
|
110 | 165 | <h1 id="profiling-data-management">Profiling Data Management</h1> |
111 | 166 | <p>When performing complex profiling, developers often find themselves lost in a maze of repetitive commands and scattered files. You run <code>go test -bench=BenchmarkMyFunc -cpuprofile=cpu.out</code>, then <code>go tool pprof -top cpu.out > results.txt</code>, inspect a function with <code>go tool pprof -list=MyFunc cpu.out</code>, make modifications, run the benchmark again—and hours later, you're exhausted, have dozens of inconsistently named files scattered across directories, and can't remember which changes led to which results. Without systematic organization, you lose track of your optimization journey, lack accurate "before and after" snapshots to share with your team, and waste valuable time context-switching between profiling commands instead of focusing on actual performance improvements. Prof eliminates this chaos by capturing everything in one command and automatically organizing all profiling data—binary files, text reports, function-level analysis, and visualizations—into a structured, tagged hierarchy that preserves your optimization history and makes collaboration effortless.</p> |
| 167 | +<h2 id="quick-reference">Quick Reference</h2> |
| 168 | +<p><strong>Main Commands:</strong></p> |
| 169 | +<ul> |
| 170 | +<li><strong><code>prof auto</code></strong>: Automated benchmark collection and profiling</li> |
| 171 | +<li><strong><code>prof tui</code></strong>: Interactive benchmark collection</li> |
| 172 | +<li><strong><code>prof tui track</code></strong>: Interactive performance comparison</li> |
| 173 | +<li><strong><code>prof manual</code></strong>: Process existing profile files</li> |
| 174 | +<li><strong><code>prof track auto</code></strong>: Compare performance between tags</li> |
| 175 | +<li><strong><code>prof track manual</code></strong>: Compare external profile files</li> |
| 176 | +</ul> |
112 | 177 | <h2 id="auto">Auto</h2> |
113 | 178 | <p>The <code>auto</code> command wraps <code>go test</code> and <code>pprof</code> to run benchmarks, collect all profile types, and organize everything automatically:</p> |
114 | 179 | <pre><code class="language-bash">prof auto --benchmarks "BenchmarkGenPool" --profiles "cpu,memory,mutex,block" --count 10 --tag "baseline" |
@@ -155,6 +220,47 @@ <h2 id="auto-configuration">Auto - Configuration</h2> |
155 | 220 | <li><code>ignore_functions</code>: Exclude specific functions from collection, even if they match the include prefixes.</li> |
156 | 221 | </ul> |
157 | 222 | <p>This filtering helps focus profiling on relevant code paths while excluding test setup and initialization functions that may not be meaningful for performance analysis.</p> |
| 223 | +<h2 id="tui-interactive-selection">TUI - Interactive Selection</h2> |
| 224 | +<p>The <code>tui</code> command provides an interactive terminal interface that automatically discovers benchmarks in your project and guides you through the selection process:</p> |
| 225 | +<pre><code class="language-bash">prof tui |
| 226 | +</code></pre> |
| 227 | +<p><strong>What it does:</strong></p> |
| 228 | +<ol> |
| 229 | +<li><strong>Discovers benchmarks</strong>: Automatically scans your Go module for <code>func BenchmarkXxx(b *testing.B)</code> functions in <code>*_test.go</code> files</li> |
| 230 | +<li><strong>Interactive selection</strong>: Presents a menu where you can select:<ul> |
| 231 | +<li>Which benchmarks to run (multi-select from discovered list)</li> |
| 232 | +<li>Which profiles to collect (cpu, memory, mutex, block)</li> |
| 233 | +<li>Number of benchmark runs (count)</li> |
| 234 | +<li>Tag name for organizing results</li> |
| | +</ul></li> |
| 235 | +</ol> |
| 236 | +<p><strong>Navigation:</strong></p> |
| 237 | +<ul> |
| 238 | +<li><strong>Page size</strong>: Shows up to 20 benchmarks at once for readability</li> |
| 239 | +<li><strong>Scroll</strong>: Use arrow keys (↑/↓) to navigate through the list</li> |
| 240 | +<li><strong>Multi-select</strong>: Use spacebar to select/deselect benchmarks</li> |
| 241 | +<li><strong>Search</strong>: Type to filter and find specific benchmarks quickly</li> |
| 242 | +</ul> |
| 243 | +<p><strong>Example workflow:</strong></p> |
| 244 | +<pre><code class="language-bash">$ prof tui |
| 245 | + |
| 246 | +? Select benchmarks to run: |
| 247 | + ◯ BenchmarkGenPool |
| 248 | + ◯ BenchmarkCacheGet |
| 249 | + ◯ BenchmarkCacheSet |
| 250 | + ◯ BenchmarkHTTPHandler |
| 251 | + [Use arrows to move, space to select, type to filter] |
| 252 | + |
| 253 | +? Select profiles: |
| 254 | + ◉ cpu |
| 255 | + ◯ memory |
| 256 | + ◯ mutex |
| 257 | + ◯ block |
| 258 | + [Use arrows to move, space to select] |
| 259 | + |
| 260 | +? Number of runs (count): 10 |
| 261 | + |
| 262 | +? Tag name (used to group results under bench/&lt;tag&gt;): v2.0-optimized |
| 263 | +</code></pre> |
158 | 264 | <h2 id="manual">Manual</h2> |
159 | 265 | <p>The <code>manual</code> command processes existing profile files without running any benchmarks; it only uses <code>pprof</code> to organize data you already have:</p> |
160 | 266 | <pre><code class="language-bash">prof manual --tag "external-profiles" BenchmarkGenPool_cpu.out memory.out block.out |
@@ -191,6 +297,72 @@ <h2 id="manual-configuration">Manual - Configuration</h2> |
191 | 297 | <p>For example, <code>BenchmarkGenPool_cpu.out</code> becomes <code>BenchmarkGenPool_cpu</code> in the configuration.</p> |
192 | 298 | <h1 id="performance-comparison">Performance Comparison</h1> |
193 | 299 | <p>Prof's performance comparison automatically drills down from benchmark-level changes to show you exactly which functions changed. Instead of just reporting that performance improved or regressed, Prof pinpoints the specific functions responsible and shows you detailed before-and-after comparisons.</p> |
| 300 | +<h2 id="tui-track-interactive-performance-comparison">TUI Track - Interactive Performance Comparison</h2> |
| 301 | +<p>The <code>tui track</code> command provides an interactive interface for comparing performance between existing benchmark runs. This is a companion to the main <code>prof tui</code> command and requires that you have already collected benchmark data using either <code>prof tui</code> or <code>prof auto</code>.</p> |
| 302 | +<pre><code class="language-bash">prof tui track |
| 303 | +</code></pre> |
| 304 | +<p><strong>What it does:</strong></p> |
| 305 | +<ol> |
| 306 | +<li><strong>Discovers existing data</strong>: Scans the <code>bench/</code> directory for tags you've already collected</li> |
| 307 | +<li><strong>Interactive selection</strong>: Guides you through selecting:<ul> |
| 308 | +<li>Baseline tag (the "before" version)</li> |
| 309 | +<li>Current tag (the "after" version)</li> |
| 310 | +<li>Benchmark to compare</li> |
| 311 | +<li>Profile type to analyze</li> |
| 312 | +<li>Output format</li> |
| 313 | +<li>Regression threshold settings</li> |
| | +</ul></li> |
| 314 | +</ol> |
| 315 | +<p><strong>Prerequisites:</strong></p> |
| 316 | +<ul> |
| 317 | +<li>Must have run <code>prof tui</code> or <code>prof auto</code> at least twice to create baseline and current tags</li> |
| 318 | +<li>Data must be organized under <code>bench/&lt;tag&gt;/</code> directories</li> |
| 319 | +</ul> |
| 320 | +<p><strong>Example workflow:</strong></p> |
| 321 | +<pre><code class="language-bash">$ prof tui track |
| 322 | + |
| 323 | +? Select baseline tag (the 'before' version) [Press Enter to select]: |
| 324 | + baseline |
| 325 | + optimized |
| 326 | + [Use arrows to move, Enter to select, type to filter] |
| 327 | + |
| 328 | +? Select current tag (the 'after' version) [Press Enter to select]: |
| 329 | + optimized |
| 330 | + [Use arrows to move, Enter to select, type to filter] |
| 331 | + |
| 332 | +? Select benchmark to compare [Press Enter to select]: |
| 333 | + BenchmarkGenPool |
| 334 | + BenchmarkCacheGet |
| 335 | + [Use arrows to move, Enter to select, type to filter] |
| 336 | + |
| 337 | +? Select profile type to compare [Press Enter to select]: |
| 338 | + cpu |
| 339 | + memory |
| 340 | + [Use arrows to move, Enter to select, type to filter] |
| 341 | + |
| 342 | +? Select output format [Press Enter to select]: |
| 343 | + summary |
| 344 | + detailed |
| 345 | + summary-html |
| 346 | + detailed-html |
| 347 | + summary-json |
| 348 | + detailed-json |
| 349 | + [Use arrows to move, Enter to select, type to filter] |
| 350 | + |
| 351 | +? Do you want to fail on performance regressions? (Y/n) |
| 352 | + |
| 353 | +? Enter regression threshold percentage (e.g., 5.0 for 5%): 5.0 |
| 354 | + |
| 355 | +🚀 Running: prof track auto --base baseline --current optimized --bench-name BenchmarkGenPool --profile-type cpu --output-format detailed --fail-on-regression --regression-threshold 5.0 |
| 356 | +</code></pre> |
| 357 | +<p><strong>Output formats supported:</strong></p> |
| 358 | +<ul> |
| 359 | +<li><strong>summary</strong>: High-level overview of all performance changes</li> |
| 360 | +<li><strong>detailed</strong>: Comprehensive analysis for each changed function</li> |
| 361 | +<li><strong>summary-html</strong>: HTML export of summary report</li> |
| 362 | +<li><strong>detailed-html</strong>: HTML export of detailed report</li> |
| 363 | +<li><strong>summary-json</strong>: JSON export of summary report</li> |
| 364 | +<li><strong>detailed-json</strong>: JSON export of detailed report</li> |
| 365 | +</ul> |
194 | 366 | <h2 id="track-auto">Track Auto</h2> |
195 | 367 | <p>Use <code>track auto</code> when comparing data collected with <code>prof auto</code>. Simply reference the tag names:</p> |
196 | 368 | <pre><code class="language-bash">prof track auto --base "baseline" --current "optimized" \ |
@@ -420,5 +592,5 @@ <h4 class="modal-title" id="keyboardModalLabel">Keyboard Shortcuts</h4> |
420 | 592 |
|
421 | 593 | <!-- |
422 | 594 | MkDocs version : 1.6.1 |
423 | | -Build Date UTC : 2025-08-14 16:56:21.163251+00:00 |
| 595 | +Build Date UTC : 2025-08-18 17:22:19.351267+00:00 |
424 | 596 | --> |