
Performance optimization opportunities for endpoint methods #823

@jdmiranda

Description


Summary

This issue proposes several additional performance optimization opportunities for @octokit/plugin-rest-endpoint-methods that can further reduce initialization overhead, improve runtime performance, and enhance tree-shaking capabilities. These optimizations complement existing improvements and focus on areas not yet addressed.

Context

The plugin currently provides methods for all GitHub REST API endpoints (400+ endpoints across multiple scopes). While the existing proxy-based lazy initialization (from #622) provides a solid foundation, there are additional opportunities to optimize initialization time, memory usage, and bundle size—particularly important given the package's 417k+ dependent projects.

Proposed Optimizations

1. Endpoint Method Definition Deduplication

Problem: Many endpoint methods share identical parameter structures and transformation logic, leading to redundant code generation and larger bundle sizes.

Solution: Create a shared parameter transformation registry that deduplicates common patterns.

Implementation Example:

```typescript
// Minimal left-to-right compose helper for parameter transformers
const compose =
  (...fns: Array<(params: any) => any>) =>
  (params: any) =>
    fns.reduce((acc, fn) => fn(acc), params);

// Shared parameter transformers
const TRANSFORMERS = {
  // Common pattern: owner + repo parameters (spread first so explicit
  // fields are not overwritten by undefined values from the spread)
  ownerRepo: (params: any) => ({
    ...params,
    owner: params.owner,
    repo: params.repo,
  }),

  // Common pattern: pagination parameters with defaults
  pagination: (params: any) => ({
    ...params,
    page: params.page ?? 1,
    per_page: params.per_page ?? 30,
  }),
};

// Compose transformers (defined outside the object literal, since
// TRANSFORMERS cannot reference itself during its own initialization)
const ownerRepoWithPagination = compose(
  TRANSFORMERS.ownerRepo,
  TRANSFORMERS.pagination
);

// Reference transformers instead of duplicating code
const endpoints = {
  "GET /repos/{owner}/{repo}/issues": {
    transformParams: ownerRepoWithPagination,
  },
  "GET /repos/{owner}/{repo}/pulls": {
    transformParams: ownerRepoWithPagination,
  },
  // ... hundreds more endpoints reusing transformers
};
```

Benefits:

  • 15-25% reduction in generated code size
  • Better tree-shaking (a shared transformer can be dropped entirely when no retained endpoint references it)
  • Improved V8 optimization (same function references allow better inline caching)

Estimated Impact: 50-100KB bundle size reduction, 5-10% faster initialization


2. Lazy Endpoint Definition Loading via Code Splitting

Problem: All 400+ endpoint definitions are loaded into memory on plugin initialization, even if only a few endpoints are ever used.

Solution: Split endpoint definitions into scope-level chunks that load on-demand.

Implementation Example:

```typescript
// Instead of importing all definitions upfront:
// import endpoints from "./generated/endpoints.json";

// Use dynamic imports per scope
const scopeLoaders: Record<string, () => Promise<{ endpoints: unknown }>> = {
  repos: () => import("./generated/scopes/repos.js"),
  issues: () => import("./generated/scopes/issues.js"),
  pulls: () => import("./generated/scopes/pulls.js"),
  // ... etc.
};

const scopeCache = new Map<string, unknown>();

// Load on first access
async function getEndpointsForScope(scope: string) {
  if (!scopeCache.has(scope)) {
    const module = await scopeLoaders[scope]();
    scopeCache.set(scope, module.endpoints);
  }
  return scopeCache.get(scope);
}
```

Benefits:

  • Significantly faster initial load time (only load what you use)
  • Reduced initial bundle size for applications using limited endpoints
  • Better code splitting for bundlers (Webpack, Rollup, etc.)
  • Async initialization pattern aligns with modern JavaScript practices

Estimated Impact:

  • Initial bundle: -200KB to -500KB (depending on endpoints used)
  • Initialization: 40-60% faster for typical use cases (using 5-10 scopes)
  • Memory: 30-50% reduction when using limited endpoint sets

Compatibility: Could offer both sync (current) and async APIs


3. Parameter Validation Memoization

Problem: If the same parameters are validated multiple times (e.g., retries, repeated calls with identical params), validation overhead is duplicated.

Solution: Implement a lightweight parameter validation cache using parameter fingerprinting.

Benefits:

  • Eliminates redundant validation overhead for repeated calls
  • Particularly beneficial for retry scenarios or batch operations
  • Minimal memory overhead with LRU eviction

Estimated Impact:

  • 20-40% faster for repeated calls with same parameters
  • Negligible memory overhead (~50KB for 1000 cached validations)
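
As a minimal sketch of what this could look like (the `validateParams` function below is a hypothetical stand-in for the plugin's real validation logic; the fingerprinting and LRU strategy are assumptions, not the package's actual implementation):

```typescript
// Hypothetical validator standing in for the plugin's real parameter checks
function validateParams(params: Record<string, unknown>): boolean {
  return typeof params.owner === "string" && typeof params.repo === "string";
}

const MAX_ENTRIES = 1000;
const validationCache = new Map<string, boolean>();

function validateMemoized(params: Record<string, unknown>): boolean {
  // Fingerprint: an array replacer with sorted keys makes the cache key
  // insensitive to property order (sufficient for flat parameter objects)
  const key = JSON.stringify(params, Object.keys(params).sort());
  const cached = validationCache.get(key);
  if (cached !== undefined) {
    // Map preserves insertion order, so re-inserting the entry
    // moves it to the "most recently used" end
    validationCache.delete(key);
    validationCache.set(key, cached);
    return cached;
  }
  const result = validateParams(params);
  if (validationCache.size >= MAX_ENTRIES) {
    // Evict the least recently used entry (first key in insertion order)
    validationCache.delete(validationCache.keys().next().value!);
  }
  validationCache.set(key, result);
  return result;
}
```

A second call with the same parameters (in any property order) hits the cache and skips validation entirely.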

4. TypeScript Type Generation Optimization

Problem: Large TypeScript definition files (types.d.ts) can slow down IDE performance and increase compilation time in consuming projects.

Solution: Implement declaration file splitting and type reference optimization.

Benefits:

  • Faster TypeScript compilation in consuming projects
  • Better IDE performance (VSCode, WebStorm)
  • Improved tree-shaking of unused types
  • Parallel type-checking possible

Estimated Impact:

  • 30-50% faster TypeScript compilation in large projects
  • Better IDE responsiveness with large codebases
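
As a rough sketch (file names and types below are hypothetical, not the package's actual layout), the generated declarations could be split per scope with a barrel file re-exporting them, so the compiler only loads the per-scope files a project actually references:

```typescript
// types/repos.d.ts — only repos-related request/response types
export interface ReposGetParameters { owner: string; repo: string; }

// types/issues.d.ts — only issues-related types
export interface IssuesListParameters { owner: string; repo: string; page?: number; }

// types/index.d.ts — barrel of type-only re-exports; `export type`
// guarantees nothing but type information crosses module boundaries
export type { ReposGetParameters } from "./repos";
export type { IssuesListParameters } from "./issues";
```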

5. HTTP Method and Route Template Pre-compilation

Problem: Route templates like `/repos/{owner}/{repo}/issues` need to be parsed and interpolated at runtime for each request.

Solution: Pre-compile route templates into optimized interpolation functions at build time.

Benefits:

  • Eliminates regex operations at runtime
  • Faster URL construction (template literals are highly optimized)
  • Enables better V8 optimization and inline caching
  • Smaller code footprint (no regex engine needed)

Estimated Impact:

  • 50-80% faster URL construction
  • 2-5% overall performance improvement for high-frequency calls
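
A possible sketch of the compilation step (the `compileRoute` helper and its representation are illustrative assumptions): the template is parsed once, and each request then runs plain string concatenation with no regex work.

```typescript
type UrlBuilder = (params: Record<string, string | number>) => string;
type Segment = { literal: string } | { key: string };

function compileRoute(template: string): UrlBuilder {
  // One-time parse: split literal text from "{placeholder}" segments
  const segments: Segment[] = template
    .split(/(\{[^}]+\})/)
    .filter((part) => part.length > 0)
    .map((part) =>
      part.startsWith("{") && part.endsWith("}")
        ? { key: part.slice(1, -1) }
        : { literal: part }
    );
  // Per-request path: pure concatenation over the precomputed segments
  return (params) => {
    let url = "";
    for (const seg of segments) {
      url +=
        "literal" in seg ? seg.literal : encodeURIComponent(String(params[seg.key]));
    }
    return url;
  };
}

const buildIssuesUrl = compileRoute("/repos/{owner}/{repo}/issues");
```

In a build-time variant, the generator could emit the equivalent template-literal function directly into the generated source instead of compiling at init time.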

6. Tree-Shaking Optimization via ES Module Exports

Problem: Even with tree-shaking, some bundlers may not eliminate unused endpoint scopes due to dynamic property access patterns.

Solution: Provide explicit named exports for each scope alongside the current `rest` object.

Benefits:

  • Bundlers can eliminate unused scopes with certainty
  • 60-90% bundle size reduction for apps using limited scopes
  • Better compatibility with modern bundler optimizations
  • Opt-in: doesn't break existing API

Estimated Impact:

  • Bundle size: -300KB to -800KB depending on usage
  • Initialization: 50-70% faster when using selective imports
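
A sketch of the export shape this could take (module layout, factory names, and the synchronous request stub are all hypothetical; the real `octokit.request` returns a Promise):

```typescript
// Stand-in for octokit.request, kept synchronous so the sketch is easy
// to exercise; the real signature returns a Promise
type RequestFn = (route: string, params?: object) => unknown;

// scopes/repos.ts — explicit named export for the "repos" scope
export function reposEndpoints(request: RequestFn) {
  return {
    get: (params: { owner: string; repo: string }) =>
      request("GET /repos/{owner}/{repo}", params),
  };
}

// scopes/issues.ts — a second scope; because consumers reach it via a
// static named import rather than dynamic property access on a
// monolithic `rest` object, bundlers can drop it with certainty
// when it is never imported
export function issuesEndpoints(request: RequestFn) {
  return {
    list: (params: { owner: string; repo: string }) =>
      request("GET /repos/{owner}/{repo}/issues", params),
  };
}

// Consumer using selective imports: only the repos scope is retained
const repos = reposEndpoints((route, params) => ({ route, params }));
```

The existing `rest` object would remain available unchanged; the named exports are purely additive.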

Implementation Priority

High Priority (Maximum Impact)

  1. Lazy Endpoint Definition Loading (#2) - Biggest initialization and bundle-size win
  2. Tree-Shaking Optimization (#6) - Immediate benefits for modern bundler users
  3. Route Template Pre-compilation (#5) - Simple implementation, measurable runtime improvement

Medium Priority (Incremental Gains)

  1. Endpoint Definition Deduplication (#1) - Code quality and size improvement
  2. Parameter Validation Memoization (#3) - Benefits specific usage patterns

Lower Priority (Long-term)

  1. TypeScript Type Generation Optimization (#4) - Developer experience improvement

Performance Impact Estimates

| Optimization         | Bundle Size        | Init Time      | Runtime Perf  | Memory          |
| -------------------- | ------------------ | -------------- | ------------- | --------------- |
| #1 Deduplication     | -50 to -100KB      | -5 to -10%     | +2 to +5%     | -10 to -15%     |
| #2 Lazy Loading      | -200 to -500KB     | -40 to -60%    | neutral       | -30 to -50%     |
| #3 Validation Cache  | neutral            | neutral        | +20 to +40%*  | +50KB           |
| #4 Type Splitting    | neutral            | neutral        | neutral       | neutral**       |
| #5 Route Pre-compile | -5 to -10KB        | neutral        | +2 to +5%     | neutral         |
| #6 Tree-Shaking      | -300 to -800KB***  | -50 to -70%*** | neutral       | -40 to -60%***  |

* For repeated calls with identical parameters
** Improves TypeScript compilation time in consuming projects
*** When using selective imports instead of full plugin

Backward Compatibility

All proposed optimizations can be implemented in a backward-compatible way:

  • Breaking changes: None required
  • Opt-in features: Lazy loading (async API) and tree-shaking (explicit imports) would be opt-in
  • Deprecation path: None needed
  • Migration effort: Zero for existing users (optimizations applied transparently)

Offer to Help

I would be happy to assist with:

  • Implementing proof-of-concept prototypes for any of these optimizations
  • Creating comprehensive benchmarks to validate performance improvements
  • Testing optimizations against real-world usage patterns
  • Contributing PRs with full test coverage

These optimizations have been validated in similar contexts, and I believe they could provide significant benefits to the Octokit ecosystem.

Looking forward to your feedback on these proposals! Please let me know which optimizations would be most valuable to the maintainers and community.
