**`.github/copilot-instructions.md`** — 28 additions, 34 deletions
@@ -2,31 +2,28 @@
 ## Project Overview

-PerfSpect is a performance analysis tool for Linux systems written in Go. It provides several commands:
-
-- `metrics`: Collects CPU performance metrics using hardware performance counters
-- `report`: Generates system configuration and health reports from collected data
-- `benchmark`: Runs performance micro-benchmarks to evaluate system health
-- `telemetry`: Gathers system telemetry data
-- `flamegraph`: Creates CPU flamegraphs
-- `lock`: Analyzes lock contention
-- `config`: Modifies system configuration for performance tuning
+PerfSpect is a performance analysis tool for Linux systems written in Go. It provides commands for collecting CPU performance metrics, generating system configuration reports, running micro-benchmarks, gathering telemetry, creating flamegraphs, analyzing lock contention, and modifying system configuration. It can target both local and remote systems via SSH.

-The tool can target both local and remote systems via SSH.
+See `ARCHITECTURE.md` for detailed architecture, data flow diagrams, and concurrency model.
**`ARCHITECTURE.md`** — 14 additions, 6 deletions
@@ -6,7 +6,7 @@ This document describes the high-level architecture of PerfSpect to help new con…
 PerfSpect is a performance analysis tool for Linux systems. It collects system configuration data, hardware performance metrics, and generates reports. The tool supports both local execution and remote targets via SSH.
@@ -52,7 +52,7 @@ PerfSpect is a performance analysis tool for Linux systems. It collects system c…
 ## Directory Structure

-```
+```text
 perfspect/
 ├── main.go  # Entry point
 ├── cmd/     # Command implementations
@@ -99,6 +99,7 @@ type Target interface {
 ```

 **Implementations:**
+
 - `LocalTarget`: Executes commands directly on the local machine
 - `RemoteTarget`: Executes commands via SSH on remote machines
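This interface-plus-two-implementations pattern can be sketched in Go. The method set below is hypothetical — the diff only shows `type Target interface {` and the two implementation names — so treat it as an illustration, not PerfSpect's actual API:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Target abstracts command execution on a machine. The method set here is
// hypothetical; PerfSpect's real interface will differ.
type Target interface {
	Name() string
	RunCommand(cmd string, args ...string) (string, error)
}

// LocalTarget executes commands directly on the local machine.
type LocalTarget struct{}

func (LocalTarget) Name() string { return "localhost" }

func (LocalTarget) RunCommand(cmd string, args ...string) (string, error) {
	out, err := exec.Command(cmd, args...).CombinedOutput()
	return string(out), err
}

// RemoteTarget would run commands over SSH; stubbed here so the sketch
// stays self-contained.
type RemoteTarget struct{ Host string }

func (t RemoteTarget) Name() string { return t.Host }

func (t RemoteTarget) RunCommand(cmd string, args ...string) (string, error) {
	return "", fmt.Errorf("ssh to %s not implemented in this sketch", t.Host)
}

func main() {
	// Callers can treat local and remote machines uniformly.
	targets := []Target{LocalTarget{}, RemoteTarget{Host: "server1"}}
	for _, t := range targets {
		fmt.Println(t.Name())
	}
}
```

The value of the interface is exactly this uniformity: command code iterates over `[]Target` without caring whether execution is local or over SSH.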
@@ -118,6 +119,7 @@ type ReportingCommand struct {
 ```

 **Workflow (`ReportingCommand.Run()`):**
+
 1. Parse flags and validate inputs
 2. Initialize targets (local or from `--target`/`--targets` flags)
 3. For each target in parallel:
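Step 3's per-target parallelism is typically the standard Go fan-out pattern. This is a minimal sketch with a stubbed `collect` function standing in for the real per-target work; it is not PerfSpect's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// collect stands in for the real per-target work: push scripts, run them,
// and retrieve the output.
func collect(target string) string {
	return "report for " + target
}

func main() {
	targets := []string{"host-a", "host-b", "host-c"}
	results := make([]string, len(targets))

	var wg sync.WaitGroup
	for i, t := range targets {
		wg.Add(1)
		go func(i int, t string) {
			defer wg.Done()
			results[i] = collect(t) // each goroutine writes only its own slot
		}(i, t)
	}
	wg.Wait() // block until every target has finished

	for _, r := range results {
		fmt.Println(r)
	}
}
```

Writing into a preallocated slot per goroutine avoids locking while keeping output in target order.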
@@ -132,12 +134,14 @@ type ReportingCommand struct {
 Collection scripts are defined in `internal/script/scripts.go`. Script dependencies, i.e., tools used by the scripts to collect data, are in `internal/script/resources/` and embedded in the binary using `//go:embed`. The scripts are executed on targets via a controller script that manages concurrent/sequential execution and signal handling.

 **Key concepts:**
+
 - `ScriptDefinition`: Defines a script (template, dependencies, required privileges)
 - `ScriptOutput`: Captures stdout, stderr, and exit code
 - `controller.sh`: Generated script that orchestrates all scripts on a target

 **Flow:**
-```
+
+```text
 1. Scripts defined in code with templates and dependencies
 2. Controller script generated from concurrent + sequential scripts
 3. Scripts and dependencies copied to target temp directory
@@ -161,6 +165,7 @@ type TableDefinition struct {
 ```

 **Field value retrieval:**
+
 - `ValuesFunc`: Function that parses script output and returns field values
 - Supports regex extraction, JSON parsing, and custom logic
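A `ValuesFunc` doing regex extraction might look like the following sketch. The function name, signature, and the `lscpu`-style sample output are illustrative, not PerfSpect's actual API:

```go
package main

import (
	"fmt"
	"regexp"
)

// A ValuesFunc-style parser: given raw script output, return field values.
// The shape of this function is an assumption for illustration.
var modelNameRe = regexp.MustCompile(`(?m)^model name\s*:\s*(.*)$`)

func cpuModelValues(output string) []string {
	m := modelNameRe.FindStringSubmatch(output)
	if m == nil {
		return nil // field left empty when the script output lacks the line
	}
	return []string{m[1]}
}

func main() {
	sample := "processor : 0\nmodel name : Intel(R) Xeon(R) Platinum 8480+\n"
	fmt.Println(cpuModelValues(sample)[0])
}
```

JSON-based fields would follow the same shape, swapping the regex for `encoding/json` unmarshaling of the script's stdout.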
@@ -175,6 +180,7 @@ type Loader interface {
 ```

 **Implementations:**
+
 - `LegacyLoader`: Original format (CLX, SKX, BDX, AMD processors)
 - `PerfmonLoader`: Intel perfmon JSON format (GNR, EMR, SPR, ICX)
 - `ComponentLoader`: ARM processors (Graviton, Axion, Ampere)

@@ -183,7 +189,7 @@ The `NewLoader()` factory function returns the appropriate loader based on CPU m…
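The factory selection described above might be sketched like this. The `Source()` method and the lowercase type names are invented for the example; only the three loader names and their processor families come from the doc:

```go
package main

import "fmt"

// Loader is a stand-in for the metric-definition loader interface;
// the single method is illustrative.
type Loader interface {
	Source() string
}

type legacyLoader struct{}
type perfmonLoader struct{}
type componentLoader struct{}

func (legacyLoader) Source() string    { return "legacy" }
func (perfmonLoader) Source() string   { return "perfmon" }
func (componentLoader) Source() string { return "component" }

// newLoader mimics a factory that picks a loader from the CPU
// microarchitecture; the mapping restates the doc's examples.
func newLoader(uarch string) Loader {
	switch uarch {
	case "GNR", "EMR", "SPR", "ICX":
		return perfmonLoader{}
	case "Graviton", "Axion", "Ampere":
		return componentLoader{}
	default: // CLX, SKX, BDX, AMD
		return legacyLoader{}
	}
}

func main() {
	fmt.Println(newLoader("SPR").Source())
	fmt.Println(newLoader("Graviton").Source())
	fmt.Println(newLoader("SKX").Source())
}
```

Keeping selection in one factory means new metric formats only touch the switch, not the callers.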
@@ -224,7 +230,8 @@ PerfSpect uses goroutines for parallel operations:
 3. **Signal handling**: A goroutine listens for SIGINT/SIGTERM and coordinates graceful shutdown across all targets

 **Signal handling flow:**
-```
+
+```text
 SIGINT received
 → Signal handler goroutine activated
 → For each target: send SIGINT to controller.sh PID
@@ -243,4 +250,5 @@ make check  # Run all code quality checks (format, vet, lint)
 Test files are colocated with source files (e.g., `extract_test.go` alongside `extract.go`).

 ## Functional Testing
-Functional tests are located in an Intel internal GitHub repository. The tests run against various Linux distributions and CPU architectures on internal servers and public cloud systems to validate end-to-end functionality.
+
+Functional tests are located in an Intel internal GitHub repository. The tests run against various Linux distributions and CPU architectures on internal servers and public cloud systems to validate end-to-end functionality.
 Submit your vulnerabilities as bug reports to the GitHub issues page. Refer to GitHub's [Creating an issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-an-issue) for instructions.
+
 Submit your support needs to the GitHub issues page. Refer to GitHub's [Creating an issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-an-issue) for instructions.
**`docs/perfspect-daemonset.md`** — 4 additions, 3 deletions
@@ -1,8 +1,8 @@
-##Example DaemonSet for PerfSpect for GKE
+# Example DaemonSet for PerfSpect for GKE

 This is an example DaemonSet for exposing PerfSpect metrics as a prometheus compatible metrics endpoint. This example assumes the use of Google Kubernetes Engine (GKE) and using the `PodMonitoring` resource to collect metrics from the metrics endpoint.

-```
+```yaml
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
@@ -61,4 +61,5 @@ spec:
   - port: metrics-port
     interval: 30s
 ```
-* Replace `docker.registry/user-sandbox/ar-us/perfspect` with the location of your perfspect container image.
+
+* Replace `docker.registry/user-sandbox/ar-us/perfspect` with the location of your perfspect container image.
**`docs/perfspect_flamegraph.md`** — 1 addition, 2 deletions
@@ -1,6 +1,5 @@
 # perfspect flamegraph

-
 ```text
 Collect flamegraph data from target(s)

@@ -18,7 +17,7 @@ Flags:
       --frequency            number of samples taken per second (default: 11)
       --pids                 comma separated list of PIDs. If not specified, all PIDs will be collected (default: [])
       --perf-event           perf event to use for native sampling (e.g., cpu-cycles, instructions, cache-misses, branches, context-switches, mem-loads, mem-stores, etc.) (default: cycles:P)
-      --dual-native-stacks   also record DWARF unwind perf and merge with frame-pointer stacks per process (larger profiles) (default: false)
+      --dual-native-stacks   also record DWARF unwind perf and merge with frame-pointer stacks per process (larger profiles, longer post-processing time) (default: false)
       --asprof-args          arguments to pass to async-profiler, e.g., $ asprof start <these arguments> -i <interval> <pid>. (default: -t -F probesp+vtable)
       --max-depth            maximum render depth of call stack in flamegraph (0 = no limit) (default: 0)