Assess creating an iterator-style API to support very large directories #16

@joachimmetz

Assess an iterator-style API for very large directories.
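As a starting point, a minimal sketch of what an iterator-style directory entry API could look like. All names here (directory_entry_t, directory_iterator_get_next_entry, etc.) are hypothetical, not part of the current libfsntfs API; the point is that the caller pulls entries one at a time, so the library never has to materialize all index values in a single in-memory array.

```c
#include <stddef.h>

/* Hypothetical stand-in for a directory entry; in libfsntfs this would
 * wrap a single $I30 index value. */
typedef struct directory_entry
{
	const char *name;
} directory_entry_t;

/* Hypothetical iterator: keeps only a cursor instead of an array of all
 * entries, so memory use stays constant in the number of entries. */
typedef struct directory_iterator
{
	const directory_entry_t *entries; /* stands in for the on-disk index nodes */
	size_t number_of_entries;
	size_t current_index;
} directory_iterator_t;

int directory_iterator_initialize(
     directory_iterator_t *iterator,
     const directory_entry_t *entries,
     size_t number_of_entries )
{
	if( iterator == NULL )
	{
		return( -1 );
	}
	iterator->entries           = entries;
	iterator->number_of_entries = number_of_entries;
	iterator->current_index     = 0;

	return( 1 );
}

/* Returns 1 and sets *entry while entries remain, 0 when exhausted,
 * -1 on error - mirroring the libyal return value convention. */
int directory_iterator_get_next_entry(
     directory_iterator_t *iterator,
     const directory_entry_t **entry )
{
	if( ( iterator == NULL )
	 || ( entry == NULL ) )
	{
		return( -1 );
	}
	if( iterator->current_index >= iterator->number_of_entries )
	{
		return( 0 );
	}
	*entry = &( iterator->entries[ iterator->current_index ] );

	iterator->current_index += 1;

	return( 1 );
}
```

In a real implementation the iterator would walk the B-tree index nodes on demand rather than a backing array, but the calling convention would look the same.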

Test image with 10,000 files

time fsntfsinfo -H ntfs_10000_files.raw
real	0m0.201s
user	0m0.147s
sys	0m0.053s

128 MiB test image with 100,000 files

time fsntfsinfo -H ntfs_100000_files.raw
real	0m2.033s
user	0m1.426s
sys	0m0.589s

2 GiB test image with 1,000,000 files

time fsntfsinfo -H ntfs_1000000_files.raw
real	0m13.955s
user	0m11.975s
sys	0m1.944s

Same image on a different system

time fsntfsinfo -H ntfs_1000000_files.raw
real	0m5.561s
user	0m4.343s
sys	0m1.210s

32 GiB test image with 10,000,000 files
(I'm wondering how realistic this scenario is, since the script to generate the test image has been running for about 5 days.)

time fsntfsinfo -H ntfs_10000000_files
real	1m4.064s
user	0m48.789s
sys	0m13.869s

With too many files, the following errors occur:

libcdata_internal_array_resize: invalid entries size value exceeds maximum.
libcdata_array_append_entry: unable to resize array.
libfdata_list_append_element: unable to append mapped range to array.
libfsntfs_directory_entries_tree_insert_index_value: unable to append index value to entries list.
libfsntfs_directory_entries_tree_read_from_index_node: unable to insert index value into directory entries tree.
...
libfsntfs_directory_entries_tree_read_from_i30_index: unable to read directory entries tree from root node.
...
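The chain above bottoms out in libcdata refusing to grow its entries array past a fixed maximum. A minimal sketch of that failure mode, under assumptions: TEST_MAXIMUM_NUMBER_OF_ENTRIES and the doubling strategy are illustrative only, not the real libcdata limit or growth policy. Once a reallocation would cross the cap, every further append fails, which is what surfaces as the libfsntfs_directory_entries_tree_* errors on a 10,000,000-file directory.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative cap only; the real libcdata maximum is derived differently. */
#define TEST_MAXIMUM_NUMBER_OF_ENTRIES 1000000

typedef struct test_array
{
	intptr_t **entries;
	int number_of_allocated_entries;
	int number_of_entries;
} test_array_t;

/* Mimics the failure path in libcdata_internal_array_resize: growing the
 * backing allocation is refused once the requested number of entries
 * exceeds a fixed maximum, so every append past the cap fails - the
 * analogue of "invalid entries size value exceeds maximum". */
int test_array_append_entry(
     test_array_t *array,
     intptr_t *entry )
{
	intptr_t **reallocation = NULL;
	int new_size            = 0;

	if( array == NULL )
	{
		return( -1 );
	}
	if( array->number_of_entries >= array->number_of_allocated_entries )
	{
		new_size = ( array->number_of_allocated_entries == 0 )
		         ? 16
		         : ( array->number_of_allocated_entries * 2 );

		if( new_size > TEST_MAXIMUM_NUMBER_OF_ENTRIES )
		{
			/* Resize refused: requested size exceeds the maximum. */
			return( -1 );
		}
		reallocation = realloc(
		                array->entries,
		                sizeof( intptr_t * ) * (size_t) new_size );

		if( reallocation == NULL )
		{
			return( -1 );
		}
		array->entries                     = reallocation;
		array->number_of_allocated_entries = new_size;
	}
	array->entries[ array->number_of_entries++ ] = entry;

	return( 1 );
}
```

An iterator-style API would sidestep this class of failure by never building the full entries array in the first place.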
