5 changes: 5 additions & 0 deletions perf/scale_test/README.md
@@ -0,0 +1,5 @@
# Mock Org Perf Suite

Generated synthetic Python modules for stressing import and type indexing.

Use generate_mock_org.py to regenerate this dataset.
Copilot AI Apr 24, 2026

Committing multiple ~2K-line generated Python modules significantly increases repo size and can slow down clones, indexing, and CI steps that scan the tree. If the benchmark harness can generate these files at build/benchmark time, consider committing only the generator (and perhaps a small “golden” seed) rather than the full generated output; alternatively, document why committing the generated modules is required (determinism, offline runs, etc.) and consider keeping the generated set minimal.

Suggested change:
- Use generate_mock_org.py to regenerate this dataset.
+ The generated modules are checked in so the benchmark can run deterministically
+ and in offline environments without requiring a generation step during CI or
+ local performance runs.
+ Use `generate_mock_org.py` only when intentionally refreshing this dataset.
+ Keep the committed generated corpus minimal so repository size, clone time,
+ indexing, and CI tree scans stay as small as practical.
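The alternative the reviewer suggests, generating the corpus at benchmark time instead of committing it, can be sketched as below. This is a hypothetical illustration of what a generator like `generate_mock_org.py` might do; the function names, module layout, and class counts are assumptions, not the actual script's API.

```python
# Hypothetical sketch of a synthetic-module generator for a perf corpus.
# Emits N Python modules, each with many small classes, to stress import
# resolution and type indexing. All names here are illustrative assumptions.
from pathlib import Path


def generate_module(index: int, num_classes: int = 50) -> str:
    """Build the source text for one synthetic module."""
    lines = [f'"""Synthetic module {index} for perf testing."""']
    for c in range(num_classes):
        lines.append(f"class Widget{index}_{c}:")
        lines.append(f"    value: int = {c}")
        lines.append("    def describe(self) -> str:")
        lines.append(f"        return 'widget {index}.{c}'")
    return "\n".join(lines) + "\n"


def generate_corpus(out_dir: Path, num_modules: int = 10) -> list[Path]:
    """Write the synthetic modules to out_dir and return their paths.

    Running this at benchmark time (rather than committing the output)
    keeps the repository small while staying deterministic, since the
    generated text depends only on the loop indices.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for i in range(num_modules):
        path = out_dir / f"mock_module_{i:04d}.py"
        path.write_text(generate_module(i))
        paths.append(path)
    return paths
```

Because the output is a pure function of the loop indices, regeneration is deterministic, which addresses the reproducibility concern without committing the full corpus.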
