feat: Usage of H5py Virtual Datasets for concat_on_disk #2032
selmanozleyen wants to merge 25 commits into scverse:main from concat_on_disk
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2032      +/-   ##
==========================================
- Coverage   86.64%   84.71%    -1.93%
==========================================
  Files          46       46
  Lines        7218     7276      +58
==========================================
- Hits         6254     6164      -90
- Misses        964     1112     +148
ilan-gold left a comment
- I think this should be put behind an argument so it is opt-in
- A single test should be enough to ensure results match
- When the dataset is read back in and a backing file has been deleted, does `h5py` raise an error? Or will it error weirdly somehow within `anndata` once you try to access the data?
These are done
According to the requirements documentation: https://support.hdfgroup.org/releases/hdf5/documentation/rfc/HDF5-VDS-requirements-use-cases-2014-12-10.pdf

From my experience, it tries hard to use a cache or something similar: when I delete the original files it doesn't throw an error, and the result file looks as if nothing changed. This behaviour isn't discussed much in https://docs.h5py.org
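To make the behaviour under discussion concrete, here is a minimal sketch (file and dataset names are illustrative, not from the PR) that builds a virtual dataset from two source files and then deletes one of them. Per the h5py documentation, regions whose source is unavailable are read as the `fillvalue` rather than raising, which may explain the "no error, file looks unchanged" observation above; the thread reports the behaviour can still vary in practice.

```python
import os
import tempfile

import h5py
import numpy as np

tmp = tempfile.mkdtemp()
paths = [os.path.join(tmp, f"part{i}.h5") for i in range(2)]
for i, path in enumerate(paths):
    with h5py.File(path, "w") as f:
        f.create_dataset("X", data=np.full(5, i, dtype="i4"))

# Each source file occupies one row of the virtual layout.
layout = h5py.VirtualLayout(shape=(2, 5), dtype="i4")
for i, path in enumerate(paths):
    layout[i] = h5py.VirtualSource(path, "X", shape=(5,))

vds_path = os.path.join(tmp, "vds.h5")
with h5py.File(vds_path, "w", libver="latest") as f:
    f.create_virtual_dataset("X", layout, fillvalue=-1)

with h5py.File(vds_path, "r") as f:
    print(f["X"][:].tolist())  # both rows resolve through the source files

# Remove one backing file: in our understanding of the docs, reads of
# that region are silently filled with fillvalue (-1 here) rather than
# raising, though the thread reports this can be unpredictable.
os.remove(paths[1])
with h5py.File(vds_path, "r") as f:
    print(f["X"][1].tolist())
```

Note that opening the VDS file itself never touches the sources, so even the error-raising cases would only surface on data access.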
@selmanozleyen Do you want my review on this again or are the comments still somewhat unaddressed?
I found the behavior undefined/unpredictable when the source files are deleted. I couldn't find a way to overcome that because it's not stated clearly in the docs: sometimes it errors and sometimes it doesn't, as in the CI tests. If we are fine with just documenting this and merging, I will remove the fail assertions and then it will be ready to merge.
ilan-gold left a comment
I found the behavior undefined/unpredictable when the source files are deleted. I couldn't find a way to overcome that because it's not stated clearly in the docs.
Can we ask the developers of h5py?
If we are fine with just documenting this and merging I will remove the fail assertions then it will be ready to merge
I would like to at least open an issue with the h5py people and give it a day or two (I have some changes requested here anyway)
tests/test_concatenate_disk.py
(Outdated)
    def test_anndatas_virtual_concat_missing_file(
We shouldn't have multiple different functions that do almost identical things. Please refactor.
        init_elem,  # TODO: user should be able to specify dataset kwargs
        dataset_kwargs=dict(indptr_dtype=indptr_dtype),
    )
I see this as the TODO but I'm not sure of its relevance to
Added a TODO for being able to specify compression args, since when I used the default approach the output file size grew too much compared to the size of the inputs.
or, in general, why the resultant dataset is 12GB. Is this `obs` and `var` or?
I am not sure, but should I do an analysis on this? In general, passing dataset kwargs sounds like a good idea to me given that this function is already for advanced users and larger scales.
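As a sketch of what forwarding dataset kwargs could look like: the writer name `write_elem_with_kwargs` and the `dataset_kwargs` parameter below are hypothetical (not anndata's current API); the idea is simply that the caller's HDF5 storage options (compression, chunking, dtypes) are passed straight through to `h5py.Group.create_dataset`.

```python
import os
import tempfile

import h5py
import numpy as np


def write_elem_with_kwargs(group, name, data, dataset_kwargs=None):
    # Hypothetical passthrough: dataset_kwargs is forwarded verbatim to
    # h5py's create_dataset (compression, compression_opts, chunks, ...).
    group.create_dataset(name, data=data, **(dataset_kwargs or {}))


out_path = os.path.join(tempfile.mkdtemp(), "out.h5")
with h5py.File(out_path, "w") as f:
    write_elem_with_kwargs(
        f,
        "X",
        np.zeros((100, 100)),
        dataset_kwargs=dict(compression="gzip", compression_opts=4),
    )

with h5py.File(out_path, "r") as f:
    print(f["X"].compression, f["X"].compression_opts)  # gzip 4
```

This would let advanced users trade write speed for output size, which is the concern raised about the 12GB result.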
fix error on other tests linting
force-pushed 729c0f1 to aa468c8
concat_on_disk
Co-authored-by: Ilan Gold <ilanbassgold@gmail.com>
concat_on_disk
for more information, see https://pre-commit.ci
When there is no need for reindexing and the backend is `hdf5`, we can just use virtual datasets instead. This requires no in-memory copies; the result simply links back to the original file locations. I was able to concat the tahoe datasets (314GB in total) in a few minutes, and the result was a 12GB `.h5ad` file.

Other notes:
- The indptr's were 780Mb in total for all the tahoe files, so I just concatenate them in memory instead.

TODOs:
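The indptr note above can be sketched as follows (shapes and names are illustrative): when stacking CSR matrices vertically, `data` and `indices` can be linked virtually as-is, but each matrix's `indptr` must be shifted by the running nonzero count and joined in memory, which is cheap relative to the data itself.

```python
import numpy as np


def concat_indptr(indptrs):
    """Join CSR indptr arrays for a vertical stack of matrices.

    Each subsequent indptr drops its leading 0 and is offset by the
    total nnz accumulated so far.
    """
    out = [np.asarray(indptrs[0], dtype=np.int64)]
    offset = out[0][-1]
    for p in indptrs[1:]:
        p = np.asarray(p, dtype=np.int64)
        out.append(p[1:] + offset)  # shift row boundaries by nnz so far
        offset += p[-1]
    return np.concatenate(out)


a = np.array([0, 2, 5])  # 2 rows, 5 nonzeros
b = np.array([0, 1, 4])  # 2 rows, 4 nonzeros
print(concat_indptr([a, b]).tolist())  # [0, 2, 5, 6, 9]
```

For the tahoe files this in-memory join touches only the ~780Mb of indptr data mentioned above, not the 314GB of values.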