Description
Context
Vixen must be able to keep up with source streams so that its internal overhead never causes it to lag behind incoming change events.
Objective
Create a Rust benchmark that measures how efficiently Vixen processes a large batch of fixture messages end-to-end. This benchmark will serve as a baseline for performance comparisons as the core processing engine evolves.
Requirements
- Add a benchmark test (e.g., in benches/ or behind a #[bench] harness) that:
  - Loads a large collection of fixture messages (e.g., 10k–100k)
  - Runs them through the full Vixen processing pipeline (parsing → validation → transformation)
  - Logs processing start and end times
  - Outputs total elapsed time and computed throughput (messages per second)
- Include baseline benchmark results (commit SHA + hardware used)
- Integrate with Criterion.rs (or cargo bench) for consistent measurement (see the sketch after this list)
- Optionally fail or warn if performance regressions exceed a defined threshold (e.g., >5%)
- Add a GitHub Actions job to run the benchmark weekly or on PRs that modify the core engine
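A minimal Criterion sketch of what this could look like, assuming a benches/pipeline_throughput.rs file, criterion as a dev-dependency, and a [[bench]] entry with harness = false in Cargo.toml. FixtureMessage, load_fixture_messages, and process_message are placeholders standing in for Vixen's real message type, fixture loader, and pipeline entry point, not existing APIs:

```rust
// benches/pipeline_throughput.rs (hypothetical path)
use criterion::{criterion_group, criterion_main, BatchSize, Criterion, Throughput};

// Placeholder message type; the real benchmark would use Vixen's message type.
#[derive(Clone)]
struct FixtureMessage(Vec<u8>);

// Placeholder: the real benchmark would load 10k-100k recorded messages from fixture files.
fn load_fixture_messages() -> Vec<FixtureMessage> {
    (0..10_000).map(|i| FixtureMessage(vec![i as u8; 64])).collect()
}

// Placeholder for the full pipeline: parsing -> validation -> transformation.
fn process_message(msg: FixtureMessage) {
    std::hint::black_box(msg);
}

fn bench_pipeline(c: &mut Criterion) {
    let messages = load_fixture_messages();

    let mut group = c.benchmark_group("vixen_pipeline");
    // With this hint Criterion reports elements/second, i.e. messages per second.
    group.throughput(Throughput::Elements(messages.len() as u64));
    group.bench_function("process_fixture_batch", |b| {
        b.iter_batched(
            || messages.clone(),
            |batch| {
                for msg in batch {
                    process_message(msg);
                }
            },
            BatchSize::LargeInput,
        )
    });
    group.finish();
}

criterion_group!(benches, bench_pipeline);
criterion_main!(benches);
```

With the Throughput::Elements hint, Criterion reports messages per second directly. Runs can be compared across commits by saving a baseline on one commit (cargo bench -- --save-baseline <name>) and comparing against it on another (cargo bench -- --baseline <name>), recording the commit SHA and hardware alongside the saved results.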
Acceptance Criteria
- Benchmark runs deterministically on the same fixture input.
- Total processing time and average throughput are logged.
- Performance can be compared across commits to detect regressions or improvements.
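For the explicit start/end logging and throughput numbers called out above, a plain wall-clock harness could look like the sketch below. It reuses the same placeholder helpers as the Criterion example; none of these names are real Vixen APIs.

```rust
use std::time::{Instant, SystemTime};

// Placeholder fixture loader; the real harness would read recorded messages.
fn load_fixture_messages() -> Vec<Vec<u8>> {
    (0..10_000).map(|i| vec![i as u8; 64]).collect()
}

// Placeholder for the full parsing -> validation -> transformation pipeline.
fn process_message(msg: Vec<u8>) {
    std::hint::black_box(msg);
}

fn main() {
    let messages = load_fixture_messages();
    let count = messages.len();

    // Log the wall-clock start time, run the batch, then log the end time.
    println!("processing started at {:?}", SystemTime::now());
    let start = Instant::now();
    for msg in messages {
        process_message(msg);
    }
    let elapsed = start.elapsed();
    println!("processing ended at {:?}", SystemTime::now());

    // Total elapsed time and computed throughput in messages per second.
    let throughput = count as f64 / elapsed.as_secs_f64();
    println!(
        "processed {count} messages in {:.3}s ({throughput:.0} msg/s)",
        elapsed.as_secs_f64()
    );
}
```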