Here we enumerate performance issues of MMTk Ruby.
- Updating the fstring table
  - This hash table is simply too big. We need a way to update it efficiently, possibly using multiple threads (see the sketch after this item).
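A minimal sketch of the chunking idea, assuming the fstring table can be exposed to the binding as a flat slice of entries. The names `FstringEntry` and `get_forwarded_address` are hypothetical stand-ins, not existing CRuby or mmtk-core APIs, and the real implementation would more likely hand each chunk to a GC work packet rather than spawn plain threads:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical view of one fstring-table entry: a word holding an object
/// reference that may need updating after a copying GC.
struct FstringEntry {
    object: AtomicUsize,
}

/// Stand-in for "return the forwarded address of an object if it moved";
/// in the binding this would query MMTk's forwarding state.
fn get_forwarded_address(_old: usize) -> Option<usize> {
    None
}

/// Update every entry, splitting the table into fixed-size chunks so the
/// work can be spread over several threads. Each chunk could equally be
/// packaged as one mmtk-core work packet instead of a plain thread.
fn update_fstring_table_parallel(entries: &[FstringEntry], chunk_size: usize) {
    std::thread::scope(|scope| {
        for chunk in entries.chunks(chunk_size) {
            scope.spawn(move || {
                for entry in chunk {
                    let old = entry.object.load(Ordering::Relaxed);
                    if let Some(new) = get_forwarded_address(old) {
                        entry.object.store(new, Ordering::Relaxed);
                    }
                }
            });
        }
    });
}

fn main() {
    let table: Vec<FstringEntry> = (0..1_000_000usize)
        .map(|i| FstringEntry { object: AtomicUsize::new(i) })
        .collect();
    update_fstring_table_parallel(&table, 64 * 1024);
}
```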
- Load balance
  - Adjust work packet size in mmtk-core
    - Ideally, work packets should be sized so that each one takes, e.g., a few milliseconds to execute.
  - Scanning large arrays
    - Some arrays are too large (e.g. 500000 elements or longer). We may split the scanning into multiple work packets (see the sketch below).
    - We can use "edge enqueuing" (more precisely, slot enqueuing) so that we enqueue individual slots instead of whole objects. Better if we can enqueue slices of slots. See "Add `visit_slice` to `EdgeVisitor`" (mmtk-core#986).
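A minimal sketch of splitting one huge array into several slot-enqueuing units. `Slot` and `ScanSlotsChunk` are illustrative stand-ins, not actual mmtk-core work packet types; the point is only that no single packet should dominate a GC worker's time:

```rust
/// Illustrative stand-in for a slot (the address of a field holding an
/// object reference), as used by slot ("edge") enqueuing.
type Slot = usize;

/// A unit of scanning work covering a contiguous slice of an array's slots.
/// In the binding this would be a real mmtk-core work packet.
struct ScanSlotsChunk {
    slots: Vec<Slot>,
}

impl ScanSlotsChunk {
    fn do_work(&self) {
        for _slot in &self.slots {
            // Load the reference from the slot, trace it, and write back
            // the forwarded reference if the object moved.
        }
    }
}

/// Split a large array's slots into chunks of at most `chunk_size` slots so
/// that no single packet takes more than a few milliseconds to execute.
fn split_large_array(slots: &[Slot], chunk_size: usize) -> Vec<ScanSlotsChunk> {
    slots
        .chunks(chunk_size)
        .map(|c| ScanSlotsChunk { slots: c.to_vec() })
        .collect()
}

fn main() {
    // A 500000-element array becomes 123 packets of at most 4096 slots.
    let slots: Vec<Slot> = vec![0; 500_000];
    let packets = split_large_array(&slots, 4096);
    assert_eq!(packets.len(), (500_000 + 4095) / 4096);
    for p in &packets {
        p.do_work();
    }
}
```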
- Overall object scanning speed
  - Specialise object scanning in Rust to avoid crossing the Rust-to-C boundary.
    - If it's too difficult to handle all types in Rust, only optimise the most prevalent types (Objects, Strings, Arrays, MatchData, ...).
      - If that is still too difficult, only optimise the most common case (e.g. Strings that are not shared).
    - (Update): We should probably only use fast paths for the most common cases of the prevalent types. Even `T_OBJECT` may sometimes not be prevalent enough to justify a fast path (depending on the workload, of course). A sketch of such a fast path is given below.
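A minimal sketch of the fast-path idea, assuming the binding can cheaply read an object's type and a "shared" flag from Rust. The type tags, field layout, and the `scan_object_via_c` fallback are hypothetical, simplified stand-ins, not the actual CRuby object layout or the binding's current API:

```rust
/// Hypothetical, simplified type tag read from the object header.
enum RubyType {
    Object, // T_OBJECT: ivars stored as a flat run of references
    String, // T_STRING: only references another object when shared
    Other,  // everything else
}

/// Hypothetical view of an object, reduced to what scanning needs.
struct ObjectView {
    ty: RubyType,
    shared: bool,       // for strings: does it point at a shared buffer object?
    fields: Vec<usize>, // stand-in for reference-holding slots
}

/// Fallback that crosses into C and calls the VM's generic scanning code.
/// Stubbed out here; in the binding this would be an FFI call.
fn scan_object_via_c(_obj: &ObjectView, _visit: &mut dyn FnMut(usize)) {}

/// Scan an object, staying in Rust for the most prevalent, simple cases and
/// only paying the Rust-to-C crossing for everything else.
fn scan_object(obj: &ObjectView, visit: &mut dyn FnMut(usize)) {
    match obj.ty {
        // Fast path: plain T_OBJECT ivars are just a run of slots.
        RubyType::Object => {
            for &slot in &obj.fields {
                visit(slot);
            }
        }
        // Fast path: a non-shared string has no outgoing references
        // (class/shape references omitted in this simplified model).
        RubyType::String if !obj.shared => {}
        // Slow path: let the existing C code handle every other shape.
        _ => scan_object_via_c(obj, visit),
    }
}

fn main() {
    let s = ObjectView { ty: RubyType::String, shared: false, fields: vec![] };
    let mut visited = Vec::new();
    scan_object(&s, &mut |slot| visited.push(slot));
    assert!(visited.is_empty());
}
```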
- `obj_free` candidates
  - Hash
  - Object
  - Data (can we just allocate the data in the MMTk heap and pin it? See the sketch below.)
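A minimal sketch of the idea behind the Data item, assuming the binding gains a helper that allocates the C-level payload in the MMTk heap and pins it. The `gc_alloc_pinned` helper is a hypothetical stand-in, backed here by a leaked Box purely so the sketch runs on its own:

```rust
/// Hypothetical stand-in for "allocate `size` bytes in the MMTk heap and pin
/// them". Here it simply leaks a boxed buffer; the real binding would
/// allocate in a non-moving (or pinned) MMTk space.
fn gc_alloc_pinned(size: usize) -> *mut u8 {
    Box::leak(vec![0u8; size].into_boxed_slice()).as_mut_ptr()
}

/// A T_DATA-like object whose C-level payload lives in the GC heap rather
/// than in malloc'd memory. Pinning keeps raw C pointers into the payload
/// valid, and since the payload is itself a heap object, the collector
/// reclaims it when it becomes unreachable, so no obj_free call is needed
/// just to free the memory (payloads needing other cleanup still would).
struct DataObject {
    payload: *mut u8,
    payload_size: usize,
}

impl DataObject {
    fn new(payload_size: usize) -> Self {
        DataObject {
            payload: gc_alloc_pinned(payload_size),
            payload_size,
        }
    }
}

fn main() {
    let d = DataObject::new(256);
    assert!(!d.payload.is_null());
    assert_eq!(d.payload_size, 256);
}
```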