This repository contains a Go-native performance test framework for Harbor.
The current runner is `harborperf`, a single CLI that:
- prepares benchmark data on a target Harbor instance
- runs Harbor-specific benchmark scenarios
- writes structured JSON results
- generates markdown and HTML comparison reports
If you see Harbor performance issues, open an issue in goharbor/harbor.
`harborperf` is organized around Harbor workflows rather than generic HTTP load generation.
| Path | Purpose |
|---|---|
| `cmd/harborperf` | CLI entrypoint with `list`, `prepare`, `run`, `compare`, and `cleanup` |
| `pkg/config` | Environment-driven configuration, size presets, output paths, dataset policy |
| `pkg/harbor` | Native Harbor API and OCI push/pull client code |
| `pkg/prepare` | Dataset preparation pipeline: projects, users, members, artifacts, tags, audit logs, vulnerability prep |
| `pkg/runner` | Scenario lifecycle runner with setup, per-worker execution, teardown, and metrics collection |
| `pkg/metrics` | Latency summaries, success rate, throughput, summary JSON, and run metadata |
| `pkg/report` | Markdown report generation and HTML comparison charts |
| `scenarios` | Built-in Harbor benchmark scenarios |
| `xk6-harbor` | In-repo source of generated Harbor API types and client code reused by the native runner |
Each scenario follows the same lifecycle:
- Scenario setup
- Worker initialization
- Shared-iterations execution with configured workers
- Scenario teardown
- Summary and detailed result output
The current runner uses a closed workload model driven by:
- `HARBOR_VUS`
- `HARBOR_ITERATIONS`
Benchmark data is managed by the prepare pipeline and controlled by:
- `HARBOR_SIZE=ci|small|medium`
- `HARBOR_DATASET_POLICY=fresh|verify|reuse`
Every run writes `dataset.json` with the resolved dataset contract and fingerprint. Comparisons use that fingerprint to prevent invalid A/B comparisons.
Each run writes artifacts into ./outputs by default:
- `dataset.json`
- `<scenario>.summary.json`
- `<scenario>.run.json`
- `report.md` when `HARBOR_REPORT=true`
- `api-comparison.html` and `pull-push-comparison.html` for comparisons
- Go toolchain
- A reachable Harbor instance
Optional:
- Docker or Kubernetes only if you are provisioning Harbor locally yourself
Run directly:

```shell
go run ./cmd/harborperf list
```

Or build a binary:

```shell
go build -o harborperf ./cmd/harborperf
./harborperf list
```

The runner is configured through environment variables.
| Variable | Description |
|---|---|
| `HARBOR_URL` | Harbor URL in the form `http(s)://username:password@host` |
| Variable | Description | Default |
|---|---|---|
| `HARBOR_SIZE` | Dataset size preset: `ci`, `small`, `medium` | `small` |
| `HARBOR_VUS` | Worker count for scenario execution | preset-dependent |
| `HARBOR_ITERATIONS` | Total iterations shared across all workers | 2 x `HARBOR_VUS` |
| `HARBOR_DATASET_POLICY` | `fresh`, `verify`, or `reuse` | `reuse` |
| `HARBOR_REPORT` | Generate `report.md` after a run | `false` |
| `HARBOR_OUTPUT_DIR` | Output directory for run artifacts | `./outputs` |
| `PROJECT_PREFIX` | Prefix for benchmark projects | `project` |
| `USER_PREFIX` | Prefix for benchmark users | `user` |
| `SCANNER_URL` | Scanner endpoint for vulnerability prep | unset |
| `FAKE_SCANNER_URL` | Fake scanner endpoint used during project prep | unset |
| `AUTO_SBOM_GENERATION` | Enable automatic SBOM generation on prepared projects | `false` |
| `BLOB_SIZE` | Blob size used for artifact generation | preset-dependent |
| `BLOBS_COUNT_PER_ARTIFACT` | Number of blobs per artifact | preset-dependent |
Compatibility aliases still accepted by the current runner:
- `K6_CSV_OUTPUT`
- `K6_JSON_OUTPUT`
```shell
go run ./cmd/harborperf list
```

Reuse existing benchmark-owned data when present:
```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
HARBOR_DATASET_POLICY=reuse \
go run ./cmd/harborperf prepare
```

Recreate benchmark data from scratch:
```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
HARBOR_DATASET_POLICY=fresh \
go run ./cmd/harborperf prepare
```

Verify the expected dataset exists without creating anything:
```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
HARBOR_DATASET_POLICY=verify \
go run ./cmd/harborperf prepare
```

Run all scenarios and generate a report:

```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
HARBOR_VUS=10 \
HARBOR_ITERATIONS=10 \
HARBOR_REPORT=true \
go run ./cmd/harborperf run
```

Run only the API scenarios:

```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
HARBOR_VUS=10 \
HARBOR_ITERATIONS=10 \
go run ./cmd/harborperf run --api-only
```

Run specific scenarios by name:

```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
HARBOR_VUS=20 \
HARBOR_ITERATIONS=100 \
go run ./cmd/harborperf run list-projects get-project
```

Remove benchmark-owned data:

```shell
HARBOR_URL=http://admin:Harbor12345@localhost:8080 \
HARBOR_SIZE=ci \
go run ./cmd/harborperf cleanup
```

Compare two runs:

```shell
go run ./cmd/harborperf compare ./results/run-a ./results/run-b
```

The compare command expects both directories to contain:
- `dataset.json`
- `*.summary.json`
It refuses to compare runs when dataset fingerprints do not match.
Use `harborperf list` for the source of truth. The built-in set currently includes:
- `get-artifact-by-digest`
- `get-artifact-by-tag`
- `get-catalog`
- `get-project`
- `get-repository`
- `get-v2`
- `list-artifact-tags`
- `list-artifacts`
- `list-audit-logs`
- `list-project-logs`
- `list-project-members`
- `list-projects`
- `list-quotas`
- `list-repositories`
- `list-users`
- `pull-artifacts-from-different-projects`
- `pull-artifacts-from-same-project`
- `push-artifacts-to-different-projects`
- `push-artifacts-to-same-project`
- `search-users`
The same commands used locally are intended to run in CI:
- `harborperf prepare`
- `harborperf run`
- archive the output directory
- optionally run `harborperf compare` against a stored baseline
Recommended CI environment for a short validation run:
```shell
HARBOR_SIZE=ci
HARBOR_VUS=10
HARBOR_ITERATIONS=10
HARBOR_DATASET_POLICY=fresh
HARBOR_REPORT=true
```

Against a local Harbor on `localhost:8080`:
```shell
export HARBOR_URL=http://admin:Harbor12345@localhost:8080
export HARBOR_SIZE=ci
export HARBOR_VUS=10
export HARBOR_ITERATIONS=10
export HARBOR_REPORT=true

go run ./cmd/harborperf prepare
go run ./cmd/harborperf run --api-only
```

Then inspect the outputs:
```shell
ls ./outputs
cat ./outputs/dataset.json
cat ./outputs/report.md
```

Notes:

- `harborperf list` does not require Harbor connectivity.
- `prepare` and `cleanup` only target benchmark-owned resources based on the configured naming prefixes.
- The legacy `mage` and `scripts/` flow is still present in the repository, but the native Go CLI is the current primary path.