Optimize Capstone disassembly performance across the stack
Thread-safe per-handle O(1) lookup tables in Capstone X86:
- Move lookup tables from static globals to cs_struct fields
- Build tables in X86_global_init(), free in cs_close()
- Add find_insn_h() and X86_insn_reg_{intel,att}_h() per-handle variants
- Keep binary search fallback for decoder paths without handle access
- Replace vsnprintf number formatting with fast custom formatters in SStream
- Use memset for MCInst tied_op_idx initialization
ARM plugin (plugin_cs.c):
- Switch from allocating cs_disasm() to stack-based cs_disasm_iter()
- Replace r_str_newf mnemonic construction with direct malloc+memcpy
x86 plugin (plugin_cs.c):
- Replace r_str_newf + r_str_replace (2 allocs) with single malloc + in-place memmove
- Remove redundant per-instruction cs_option(CS_OPT_DETAIL) call
- Inline cs_len_prefix_opcode() to eliminate the call/branch overhead on the hot path
Core disassembly (disasm.c):
- Compute decode_mask once based on display settings (asm.emu, asm.cmt.esil)
- Skip ESIL/OPEX generation when not needed for display
- Use R_ARCH_OP_MASK_BASIC for color-only decode paths
Analysis (fcn.c):
- Remove R_ARCH_OP_MASK_ESIL from default analysis loop for non-ARM archs
- Generate ESIL only when the architecture needs it for pattern matching
https://claude.ai/code/session_01KDR9eBZ4vEAftFBQ2vuhmr