Description
With #147 resolved, I see a couple more opportunities to speed up the `slub-dump` command.
The main remaining bottleneck is the `walk_caches` routine. It walks the cache list forward and keeps going until it has traversed the whole list. Even though it only processes list entries that are present in `target_names`, the list walking itself takes time.
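Conceptually, the walk is the classic traversal of the kernel's circular `slab_caches` list. Here is a minimal sketch of the current behavior, assuming hypothetical `self.slab_caches` and `parse_one_cache` names (simplified; not gef's actual code):

```python
def walk_caches(self, target_names):
    # Minimal sketch of the current forward walk. self.slab_caches is
    # assumed to hold the address of the kernel's list head;
    # parse_one_cache stands in for the real per-cache parsing.
    head = self.slab_caches
    parsed_caches = []
    current = self.get_next_kmem_cache(head, point_to_base=False)
    # follow ->next until the walk wraps back around to the list head
    while current + self.kmem_cache_offset_list != head:
        # every iteration costs remote memory reads over KGDB, even for
        # caches that are not in target_names
        if self.get_name(current) in target_names:
            parsed_caches.append(self.parse_one_cache(current))
        current = self.get_next_kmem_cache(current)
    return parsed_caches
```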
I see the following changes that could be made:
- Make `walk_caches` stop walking the list once it finds all `target_names` caches (can there be more than 1?).
- Make `walk_caches` walk the list backwards. The reason for this is that `kmalloc` caches are at the end of the list (they are the first caches to be created, and new caches get added to the front of the list). And I think it's fair to assume that `kmalloc` caches are the most frequent ones to be dumped, so optimizing for them seems to make sense (see the sketch after this list).
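A sketch combining both proposals, assuming a hypothetical `get_prev_kmem_cache` mirror of the existing `get_next_kmem_cache` (and the same `self.slab_caches` / `parse_one_cache` placeholders as above):

```python
def get_prev_kmem_cache(self, addr, point_to_base=True):
    # Hypothetical mirror of get_next_kmem_cache that follows ->prev.
    # struct list_head is { *next, *prev }, so on a 64-bit target the
    # ->prev pointer sits 8 bytes after ->next.
    if point_to_base:
        addr += self.kmem_cache_offset_list
    return read_int_from_memory(addr + 8) - self.kmem_cache_offset_list

def walk_caches_backward(self, target_names):
    # Hypothetical sketch: start from the tail of the list (where the
    # kmalloc-* caches sit) and stop as soon as every requested cache
    # has been found, instead of always visiting the whole list.
    head = self.slab_caches
    remaining = set(target_names)
    parsed_caches = []
    current = self.get_prev_kmem_cache(head, point_to_base=False)
    while remaining and current + self.kmem_cache_offset_list != head:
        name = self.get_name(current)
        if name in remaining:
            parsed_caches.append(self.parse_one_cache(current))
            remaining.discard(name)  # loop exits once this set is empty
        current = self.get_prev_kmem_cache(current)
    return parsed_caches
```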
With the dirty patch below, running `slub-dump kmalloc-1k -vv --cpu 0 -n` over KGDB completes 3 times faster (but note that this patch breaks `--list`).
```diff
diff --git a/gef.py b/gef.py
index 811c9062..91875a2f 100644
--- a/gef.py
+++ b/gef.py
@@ -74947,7 +74947,8 @@ class SlubDumpCommand(GenericCommand, BufferingOutput):
def get_next_kmem_cache(self, addr, point_to_base=True):
if point_to_base:
addr += self.kmem_cache_offset_list
- return read_int_from_memory(addr) - self.kmem_cache_offset_list
+ # XXX: speed up by walking backwards
+ return read_int_from_memory(addr + 8) - self.kmem_cache_offset_list
def get_name(self, addr):
name_addr = read_int_from_memory(addr + self.kmem_cache_offset_name)
@@ -75258,6 +75259,9 @@ class SlubDumpCommand(GenericCommand, BufferingOutput):
# goto next
current_kmem_cache = kmem_cache["next"]
self.quiet_info("! slub-dump: walk_caches: loop 1: 4")
+ # XXX: dirty hack to bail out if target is found
+ if len(target_names) == 1 and kmem_cache["name"] in target_names:
+ break
if self.args.list:
return parsed_caches # fast return
```
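For reference, the `read_int_from_memory(addr + 8)` in the first hunk works because `struct list_head` is `{ *next, *prev }`: on a 64-bit target `prev` sits 8 bytes past `next` (it would be 4 on 32-bit). The second hunk only bails out for a single target; here is a hedged sketch of a helper that would cover any number of `target_names` (hypothetical, not part of the patch):

```python
def found_all_targets(self, remaining_names, kmem_cache):
    # Hypothetical generalization of the dirty hack: remove each target
    # name as its cache gets parsed, and report completion once the set
    # is empty. Never true under --list, which needs the full walk.
    remaining_names.discard(kmem_cache["name"])
    return not remaining_names and not self.args.list
```

The loop in `walk_caches` would then `break` when this returns true, instead of special-casing `len(target_names) == 1`.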