
Optimize encoding_rs bindings: reduce allocations and simplify error handling#1

Merged
jeffhuen merged 2 commits into main from claude/fix-memory-leaks-review-v0UIU
Feb 16, 2026

Conversation

@jeffhuen (Owner)

Summary

This PR optimizes the encoding_rs Elixir bindings by reducing unnecessary heap allocations and simplifying error handling patterns. The changes focus on using static string references where the underlying library already provides them and on leveraging infallible operations instead of fallible ones.

Key Changes

  • Removed unused atoms: Deleted unknown_encoding, encode_error, decode_error, and no_bom atom definitions that were no longer being used
  • Simplified error returns: Changed error cases to return empty strings (String::new()) instead of heap-allocated error messages, since the Elixir side discards these values anyway
  • Optimized canonical_name function: Changed return type from (Atom, String) to (Atom, &'static str) to avoid unnecessary string allocations by returning static string references directly from encoding_rs
  • Optimized detect_bom function: Similarly changed return type to use &'static str instead of String for the encoding name
  • Replaced write_all with copy_from_slice: In both encode_impl and encode_batch, replaced the fallible write_all operation with the infallible copy_from_slice since the buffer is pre-allocated to the exact required size
  • Removed unused import: Deleted use std::io::Write which is no longer needed after replacing write_all
  • Added buffer shrinking in decoder: Added output.shrink_to_fit() in decoder_decode_chunk_impl to release excess capacity before passing the string to Rustler, preventing oversized buffers from being held in memory until garbage collection

Implementation Details

The changes maintain API compatibility while improving performance:

  • Static string references (&'static str) are returned where the encoding_rs library already provides them, eliminating unnecessary allocations
  • Empty strings are used for discarded error values to avoid heap allocation overhead
  • The decoder buffer optimization prevents memory waste when processing small chunks with large pre-allocated buffers

https://claude.ai/code/session_01Mg82oZeWczDyGDe5eoGXXj

claude and others added 2 commits February 16, 2026 19:56
- Add shrink_to_fit() in decoder_decode_chunk_impl to release excess
  String capacity before Rustler copies into BEAM binary. Without this,
  streaming workloads over-allocate up to 3x per chunk due to
  max_utf8_buffer_length worst-case estimates.

- Replace write_all() with copy_from_slice() in encode_impl and
  encode_batch, removing the std::io::Write import. copy_from_slice is
  infallible when lengths match (guaranteed by prior allocation).

- Eliminate unnecessary String heap allocations in error paths: use
  String::new() in decode_impl/decode_batch (Elixir side discards the
  value), return &'static str from canonical_name and detect_bom since
  encoding_rs::Encoding::name() already returns &'static str.

- Remove unused atoms (unknown_encoding, encode_error, decode_error,
  no_bom) from the atoms! macro.

https://claude.ai/code/session_01Mg82oZeWczDyGDe5eoGXXj
…hreshold

- Extract empty_binary() helper to de-duplicate OwnedBinary::new(0) logic
  across encode_impl and encode_batch, with accurate doc comment explaining
  that enif_alloc_binary(0) is not zero-cost
- Standardize error-path comments to reference Elixir normalize_result/1
- Add 4KB threshold to shrink_to_fit() in streaming decoder to avoid
  realloc overhead on small chunks where savings are negligible
@jeffhuen jeffhuen merged commit 5ed40c7 into main Feb 16, 2026
6 checks passed
@jeffhuen jeffhuen deleted the claude/fix-memory-leaks-review-v0UIU branch February 16, 2026 22:51