feat: Performance and Scale Study #492

Open · wants to merge 3 commits into base: main

Conversation

Musaddika

KRO Performance Framework

This directory contains the KRO Performance Framework, a testing infrastructure for measuring the performance of KRO (Kubernetes Resource Orchestrator).

Overview

The framework is designed to benchmark and analyze the performance of various KRO components:

  1. CRUD Operations: Create, Read, Update, Delete operations on KRO resources
  2. CEL Expressions: Common Expression Language evaluation with varying complexity levels
  3. ResourceGraph Operations: Graph operations with different sizes and complexities
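
As a rough illustration of the kind of measurement the framework performs, here is a minimal, hypothetical Go sketch of a simulation-mode CRUD timing loop. The function names and latency distribution are invented for illustration only and are not the actual kro-perf implementation:

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// simulateOp stands in for a real KRO API call when running in
// simulation mode; the latency range here is invented for illustration.
func simulateOp() time.Duration {
	return time.Duration(5+rand.Intn(20)) * time.Millisecond
}

// percentile returns the p-th percentile of an ascending-sorted slice.
func percentile(sorted []time.Duration, p float64) time.Duration {
	return sorted[int(float64(len(sorted)-1)*p)]
}

func main() {
	const resources = 100 // mirrors the --resources default below
	for _, op := range []string{"create", "read", "update", "delete"} {
		latencies := make([]time.Duration, 0, resources)
		for i := 0; i < resources; i++ {
			start := time.Now()
			time.Sleep(simulateOp()) // placeholder for the real client call
			latencies = append(latencies, time.Since(start))
		}
		sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
		fmt.Printf("%-6s p50=%v p95=%v\n", op, percentile(latencies, 0.5), percentile(latencies, 0.95))
	}
}
```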

How to test

This project is still in development; once its remaining dependencies are resolved, it can be tested as follows:

Quick Start

To run a quick demo of simulated performance results:

./run_performance_tests.sh --demo

Running Benchmarks

To run all benchmarks:

./kro-perf benchmark --simulation --duration 30s --workers 4 --resources 100

Alternatively, use the shell script with the same options:

./run_performance_tests.sh --duration 30s --workers 4 --resources 100
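
The documented options can be combined freely. For example, a heavier run writing to a custom results directory (the values and the directory name are arbitrary illustrations):

./kro-perf benchmark --simulation --duration 60s --workers 8 --resources 200 --output ./results/run1 --verbose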

Running Specific Benchmarks

Run only CRUD benchmarks:

./kro-perf benchmark --type crud --simulation

Run only CEL expression benchmarks:

./kro-perf benchmark --type cel --complexity all --simulation

Run only ResourceGraph benchmarks:

./kro-perf benchmark --type resourcegraph --graph-complexity all --simulation
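
Graph size can also be tuned explicitly via the documented --nodes and --edges options; the counts below are arbitrary illustrations:

./kro-perf benchmark --type resourcegraph --graph-complexity large --nodes 50 --edges 80 --simulation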

Analysis and Visualization

Analyze benchmark results:

./kro-perf analyze --input ./results/all_results.json --output ./results/analysis.json
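
A run can also be compared against a baseline via the documented --baseline option; the baseline filename below is a hypothetical example:

./kro-perf analyze --input ./results/all_results.json --output ./results/analysis.json --baseline ./results/baseline.json --format text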

Generate visualizations:

./kro-perf visualize --input ./results/analysis.json --output ./results/visualizations --charts all

Command Line Options

Benchmark Command

| Option | Description | Default |
|---|---|---|
| --type | Benchmark type: crud, cel, resourcegraph, all | all |
| --duration | Duration of each benchmark | 60s |
| --workers | Number of concurrent workers | 4 |
| --resources | Number of resources for CRUD tests | 100 |
| --namespace | Kubernetes namespace | default |
| --kubeconfig | Path to kubeconfig file | |
| --simulation | Run in simulation mode | true |
| --complexity | CEL expression complexity: simple, medium, complex, very-complex, all | all |
| --graph-complexity | Graph complexity: small, medium, large, all | all |
| --nodes | Number of nodes for large graph tests | 30 |
| --edges | Number of edges for large graph tests | 45 |
| --output | Output directory for results | ./results |
| --verbose | Enable verbose output | false |
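
As an example of combining these options for a focused CEL run (the values and output directory are arbitrary illustrations):

./kro-perf benchmark --type cel --complexity very-complex --duration 120s --workers 2 --output ./results/cel --verbose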

Analyze Command

| Option | Description | Default |
|---|---|---|
| --input | Input results file to analyze (required) | |
| --output | Output file for analysis (required) | |
| --baseline | Baseline results for comparison | |
| --format | Output format: json, yaml, text | json |
| --compare-to-last | Compare to last run | false |
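
For instance, to re-analyze results and diff against the previous run with YAML output (an illustrative combination of the documented flags):

./kro-perf analyze --input ./results/all_results.json --output ./results/analysis.json --compare-to-last --format yaml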

Visualize Command

| Option | Description | Default |
|---|---|---|
| --input | Input analysis file to visualize (required) | |
| --output | Output directory for visualizations (required) | |
| --charts | Chart types to generate: bar, line, heatmap, all | all |
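
For example, to generate only a heatmap (an illustrative use of the documented --charts option):

./kro-perf visualize --input ./results/analysis.json --output ./results/visualizations --charts heatmap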

@Musaddika (Author)

Hi @a-hilaly,
I have implemented this project. If you could kindly give it a quick review, I’ll be able to proceed further with the next steps. Let me know if any changes are required.
Thanks!
