Conversation

@kobi2187
Contributor

Faster NimSuggest. A boastful AI-generated description follows; take it with a grain of salt:

Major LSP performance improvements by replacing O(n) scans with O(1) caching:

1. Symbol location cache for ideUse/ideDus
   - Build a hash table mapping each symbol ID to all of its locations
   - Replaces O(n) iteration through all symbols across all files
   - Built lazily on first use
   - Invalidated on recompilation to stay fresh
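
A rough sketch of that cache (in Python for brevity). `all_symbol_usages` is a hypothetical stand-in for nimsuggest's internal walk over every symbol in every compiled file, not a real API:

```python
from collections import defaultdict

class SymbolLocationCache:
    """Lazily built map from symbol ID to every location where it appears."""

    def __init__(self, all_symbol_usages):
        # `all_symbol_usages` yields (symbol_id, location) pairs for the
        # whole project -- the O(n) walk the cache is meant to amortize.
        self._all_symbol_usages = all_symbol_usages
        self._by_id = None  # built on first lookup

    def locations_of(self, symbol_id):
        if self._by_id is None:  # lazy one-time O(n) build
            self._by_id = defaultdict(list)
            for sid, loc in self._all_symbol_usages():
                self._by_id[sid].append(loc)
        return self._by_id.get(symbol_id, [])  # O(1) lookup afterwards

    def invalidate(self):
        # Called after recompilation so stale locations are never served.
        self._by_id = None
```

After the first query pays the build cost, every later "find all references" is a single hash lookup instead of a full rescan.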
2. Optimized ideHighlight
   - Use integer ID comparison instead of pointer equality
   - Add a 1,000-result limit to prevent excessive iteration
   - Terminate early once the limit is reached
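
A minimal sketch of the bounded, ID-based scan; the function name and `symbols_in_file` iterable are illustrative, and only the integer comparison and the 1,000-result cap come from the change itself:

```python
HIGHLIGHT_LIMIT = 1_000  # mirrors the PR's 1,000-result cap

def highlight_occurrences(target_id, symbols_in_file):
    """Collect locations whose symbol ID matches `target_id`.

    Comparing integer IDs avoids relying on pointer identity, which can
    miss matches when the same symbol is represented by distinct nodes.
    `symbols_in_file` yields (symbol_id, location) pairs for one file.
    """
    results = []
    for sid, loc in symbols_in_file:
        if sid == target_id:  # integer comparison, not object identity
            results.append(loc)
            if len(results) >= HIGHLIGHT_LIMIT:  # early termination
                break
    return results
```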
3. Cache invalidation strategy
   - Clear the cache in recompilePartially()
   - Clear the cache in recompileFullProject()
   - Ensures the cache stays accurate after file changes
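
The invalidation wiring might look like this sketch; `SuggestSession` and the snake_case method names are hypothetical stand-ins echoing recompilePartially/recompileFullProject, with the actual recompilation elided:

```python
class SuggestSession:
    """Sketch: drop the symbol-location cache whenever sources are recompiled."""

    def __init__(self):
        # symbol id -> list of locations; rebuilt lazily by the first query
        self._cache = {}

    def recompile_partially(self, changed_file):
        # ... reparse only `changed_file` (elided) ...
        self._cache.clear()  # positions may have shifted; drop stale entries

    def recompile_full_project(self):
        # ... rebuild all modules (elided) ...
        self._cache.clear()
```

Clearing on both paths trades an occasional cache rebuild for the guarantee that a query never returns locations from before the edit.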

Performance impact:

- ideUse/ideDus: O(n) scan -> O(1) hash lookup (plus output proportional to the number of matches)
- ideHighlight: faster comparisons plus early termination
- The first lookup builds the cache (a one-time O(n) cost)
- Subsequent lookups are direct hash-table accesses

This significantly improves LSP responsiveness for:

- Find all references (ideUse/ideDus)
- Highlight symbol in file (ideHighlight)
- Large codebases with thousands of symbols

@Araq
Member

Araq commented Dec 17, 2025

My educated guess: It doesn't improve any query performance in reality and nimsuggest takes up even more memory than it already does.
