GC performance issues for MMTk Ruby #25

Open
@wks

Description

Here we enumerate performance issues of MMTk Ruby.

  • Updating the fstring table.
    • This hash table is simply too large. We need a way to update it efficiently, possibly using multiple threads.
  • Load balance
    • Adjust work packet size in mmtk-core
      • Ideally, work packets should be sized so that each takes only a few milliseconds to execute.
    • Scanning large arrays
      • Some arrays are too large (e.g. 500,000 elements or more). We may split the scanning of such arrays into multiple work packets.
      • We can use 'edge enqueuing' (more precisely, slot enqueuing) so that we enqueue individual slots instead of whole objects; better still, we could enqueue slices of slots. See 'Add visit_slice to EdgeVisitor' (mmtk-core#986).
  • Overall object scanning speed
    • Specialise object scanning in Rust to avoid crossing the Rust-to-C boundary
      • If it is too difficult to handle all types in Rust, only optimise the most prolific types (Objects, Strings, Arrays, MatchData, ...)
        • If still too difficult, only optimise for the most common case (e.g. Strings that are not shared)
    • (Update) We should probably only use fast paths for the most common cases of the prolific types. Even T_OBJECT may sometimes not be prolific enough to justify a fast path (depending on the workload, of course).
  • obj_free candidates
    • Hash
    • Object
    • Data (Can we just allocate the data in the MMTk heap and pin it?)
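The large-array point above can be sketched concretely: instead of scanning a 500,000-element array in one work packet, split its slot range into fixed-size chunks that independent GC workers can process. This is a minimal illustration, not the actual mmtk-core API; `PACKET_SLOTS`, `split_into_packets`, and the `(start, end)` tuple representation are all assumed names for this sketch.

```rust
// Sketch: splitting the slot range of a large array into bounded "work
// packets". Names and the target packet size are illustrative assumptions.
const PACKET_SLOTS: usize = 4096; // assumed per-packet slot budget

/// Split the slot range [0, len) into packets of at most PACKET_SLOTS slots
/// each, so multiple GC workers can scan one large array in parallel.
fn split_into_packets(len: usize) -> Vec<(usize, usize)> {
    let mut packets = Vec::new();
    let mut start = 0;
    while start < len {
        let end = (start + PACKET_SLOTS).min(len);
        packets.push((start, end));
        start = end;
    }
    packets
}

fn main() {
    // A 500,000-element array becomes many small packets instead of one
    // monolithic scanning job.
    let packets = split_into_packets(500_000);
    assert_eq!(packets.len(), 123); // ceil(500000 / 4096)
    assert_eq!(packets.first(), Some(&(0, 4096)));
    assert_eq!(packets.last(), Some(&(499_712, 500_000)));
    println!("{} packets", packets.len());
}
```

Sizing packets by slot count is a proxy for the "a few milliseconds per packet" goal: if a slot takes roughly constant time to trace, bounding slots per packet bounds packet execution time.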
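The type-specialisation idea above could look roughly like the following: scan the most prolific types entirely in Rust and return `false` to fall back to the generic (C-side) scanner for everything else. The enum variants and layouts here are stand-ins for illustration, not actual CRuby object internals.

```rust
// Sketch of type-specialised scanning fast paths, assuming simplified object
// layouts. Returning false means "take the slow path across the C boundary".
enum RObject {
    TObject { ivars: Vec<usize> },          // instance variable slots
    TString { shared_root: Option<usize> }, // non-shared strings hold no refs
    TArray { elems: Vec<usize> },
    Other, // everything else: use the generic scanner
}

fn scan_fast_path(obj: &RObject, enqueue: &mut dyn FnMut(usize)) -> bool {
    match obj {
        RObject::TObject { ivars } => {
            for &slot in ivars { enqueue(slot); }
            true
        }
        // Most common String case: not shared, nothing to trace.
        RObject::TString { shared_root: None } => true,
        RObject::TString { shared_root: Some(root) } => {
            enqueue(*root);
            true
        }
        RObject::TArray { elems } => {
            for &slot in elems { enqueue(slot); }
            true
        }
        RObject::Other => false, // caller crosses into C for the general case
    }
}

fn main() {
    let mut queued = Vec::new();
    let arr = RObject::TArray { elems: vec![10, 20, 30] };
    assert!(scan_fast_path(&arr, &mut |s| queued.push(s)));
    assert_eq!(queued, vec![10, 20, 30]);
    // Non-shared strings are handled without enqueuing anything.
    assert!(scan_fast_path(&RObject::TString { shared_root: None }, &mut |_| {}));
    // Rare types decline the fast path.
    assert!(!scan_fast_path(&RObject::Other, &mut |_| {}));
    println!("fast path queued {} slots", queued.len());
}
```

This structure matches the "(Update)" note: the `TString { shared_root: None }` arm is the "most common case only" fast path, and anything unusual simply opts out rather than being special-cased.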
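For the fstring-table point, a multi-threaded update could be sketched as below: partition the table's entries among worker threads, each rewriting its share of references to their post-GC addresses. The flat `Vec<usize>` table and the `forward` function are simulations for this sketch; the real fstring table is a hash table and forwarding would query MMTk.

```rust
use std::thread;

// Sketch: parallel update of a large table after a copying GC. `forward` is a
// stand-in for looking up an object's forwarded (new) address.
fn forward(old: usize) -> usize {
    old + 0x10 // simulated: every object moved by a fixed offset
}

/// Split the table into roughly equal chunks and update each chunk on its own
/// thread. Scoped threads let us mutate disjoint slices of one table safely.
fn update_table_parallel(table: &mut [usize], workers: usize) {
    let chunk = (table.len() + workers - 1) / workers;
    thread::scope(|s| {
        for part in table.chunks_mut(chunk.max(1)) {
            s.spawn(move || {
                for slot in part {
                    *slot = forward(*slot); // rewrite entry to the new address
                }
            });
        }
    });
}

fn main() {
    let mut table: Vec<usize> = (0..100_000).map(|i| i * 0x100).collect();
    update_table_parallel(&mut table, 4);
    assert!(table.iter().enumerate().all(|(i, &v)| v == i * 0x100 + 0x10));
    println!("updated {} entries", table.len());
}
```

For the real hash table there is a complication this sketch ignores: if keys are hashed by address, moving objects invalidates bucket positions, so the table may need rehashing rather than in-place slot rewriting.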
