Conversation

@cgkantidis
Contributor

Instead, compute the list of longest files only in case of an error, which is when the list is needed.

This reduces both runtime and memory for non-erroneous runs, which should be the majority of cases.

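For illustration, here is a minimal sketch of the lazy approach, assuming a C++ codebase; `FileLoc`, `longestFiles`, and the call site are hypothetical names, not identifiers from this pull request. The idea is to skip maintaining a sorted top-10 list while scanning and instead rank the files once, only when an error report is actually produced:

```cpp
// Hypothetical sketch; not the actual code from this pull request.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct FileLoc {
    std::string path;
    std::size_t lines;
};

// Rank the n longest files on demand. std::partial_sort orders only the
// first n elements, so this costs O(total * log n) once per error report,
// instead of keeping a sorted top-n list up to date on every file visited.
std::vector<FileLoc> longestFiles(std::vector<FileLoc> files, std::size_t n) {
    n = std::min(n, files.size());
    std::partial_sort(
        files.begin(), files.begin() + static_cast<std::ptrdiff_t>(n),
        files.end(),
        [](const FileLoc& a, const FileLoc& b) { return a.lines > b.lines; });
    files.resize(n);
    return files;
}

// Only build the report when a check actually fails:
//
//   if (tooManyLines) {
//       report(longestFiles(std::move(allFiles), 10));
//   }
```

Non-erroneous runs never touch the sort, which is the intended saving; as the benchmarking below notes, with n = 10 the incremental approach was already cheap, so the difference is hard to measure.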
@cgkantidis
Contributor Author

Benchmarking on Quake2's codebase shows no speedup or memory reduction, so I tried LLVM's codebase, and there is still no significant change.

I guess the vector of LOC/filename pairs is capped at 10 entries, which is small enough that the incremental sorting doesn't add much runtime overall.

Still, I think this is a good refactoring.

@dlidstrom
Owner

Agreed 👍🏻

@dlidstrom dlidstrom merged commit b5d01b5 into dlidstrom:main Mar 23, 2025
6 of 7 checks passed
@cgkantidis cgkantidis deleted the long_files branch May 18, 2025 11:05