
Conversation

@effigies
Contributor

This optimization allows raw datasets with extensive nested derivatives to be validated efficiently. It does not change the default behavior or performance characteristics.

This proactively prunes the filetree during construction, which avoids many unnecessary stat calls and the construction of many unnecessary objects. The primary downside is that it prevents recursive validation and excludes pruned files from the dataset size estimate.

Closes #115.
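
For illustration, here is a minimal sketch of the pruning idea in TypeScript. The `FileTree` shape, the `buildFileTree` name, and the choice of pruned directory names are assumptions for this sketch, not the validator's actual API:

```typescript
// Minimal sketch: prune nested datasets while building the filetree, so
// their contents are never stat-ed or turned into objects at all.
// Names here are illustrative, not the validator's real implementation.
import { readdirSync } from 'node:fs'
import { join } from 'node:path'

interface FileTree {
  path: string
  files: string[]
  directories: FileTree[]
}

// Directories assumed to hold nested datasets; skipping them during
// construction keeps the tree small for raw datasets with large
// derivatives, at the cost of recursive validation and size estimates.
const PRUNE_DIRS = new Set(['derivatives'])

function buildFileTree(root: string, prune = true): FileTree {
  const tree: FileTree = { path: root, files: [], directories: [] }
  for (const entry of readdirSync(root, { withFileTypes: true })) {
    if (entry.isDirectory()) {
      // Prune before recursing, so the subtree is never walked.
      if (prune && PRUNE_DIRS.has(entry.name)) continue
      tree.directories.push(buildFileTree(join(root, entry.name), prune))
    } else {
      tree.files.push(entry.name)
    }
  }
  return tree
}

// Usage: buildFileTree('/data/my_dataset') walks the raw dataset but never
// descends into derivatives/.
```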

@effigies
Contributor Author

I'm not sure if there's any value in performing a similar optimization for the web interface.

@effigies force-pushed the fix/prune-subdatasets branch from dc535ee to d5cb385 on December 1, 2024 at 14:36
@rwblair merged commit a411f4c into bids-standard:main on Dec 9, 2024
17 checks passed
@effigies deleted the fix/prune-subdatasets branch on January 9, 2025 at 19:44


Linked issue: Fatal JavaScript out of memory: Ineffective mark-compacts near heap limit (#115)