Unit tests for data_load.py #548

Open · acordonez wants to merge 30 commits into main

Conversation

@acordonez acordonez (Contributor) commented Apr 29, 2025

Summary of changes and related issue

Add more unit tests for climakitae.core.data_load to bring coverage up to 83%. Add type hints to data_load.py.
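
For illustration only, here is a minimal sketch of the style of change (typed helpers plus parametrized pytest tests); the helper name and mapping below are hypothetical stand-ins, not the actual contents of climakitae.core.data_load:

```python
# Hypothetical example only: `_scenario_to_experiment_id` and its mapping are
# stand-ins for the kind of small, typed helper these tests cover, not the
# real climakitae.core.data_load API.
import pytest


def _scenario_to_experiment_id(scenario: str) -> str:
    """Map a human-readable scenario label to a catalog experiment id."""
    mapping: dict[str, str] = {
        "Historical Climate": "historical",
        "SSP 3-7.0": "ssp370",
    }
    if scenario not in mapping:
        raise ValueError(f"Unknown scenario: {scenario!r}")
    return mapping[scenario]


@pytest.mark.parametrize(
    "scenario,expected",
    [("Historical Climate", "historical"), ("SSP 3-7.0", "ssp370")],
)
def test_scenario_to_experiment_id(scenario: str, expected: str) -> None:
    assert _scenario_to_experiment_id(scenario) == expected


def test_scenario_to_experiment_id_rejects_unknown() -> None:
    with pytest.raises(ValueError):
        _scenario_to_experiment_id("not a scenario")
```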

Relevant motivation and context

We want to increase test coverage to at least 80%.

No tests were written for read_catalog_from_csv(), which is unused and will be removed.

How to test

```sh
pip install pytest-cov
pytest tests/data_load/ --cov=climakitae.core.data_load --cov-report term-missing
```

These tests take ~7 minutes to complete for me locally, since many of them load and check real datasets.
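
Not part of this PR, but if the ~7 minute runtime becomes a nuisance, the dataset-loading tests could be tagged with a custom pytest marker and deselected during quick local runs. A minimal sketch, assuming a `slow` marker that the repo does not currently define:

```python
# Sketch only: the "slow" marker is an assumption, not something the repo
# defines today. It would need to be registered first, e.g. in pyproject.toml:
#
#   [tool.pytest.ini_options]
#   markers = ["slow: tests that load real datasets"]
#
# Quick local runs could then deselect them with:  pytest -m "not slow"
import pytest


@pytest.mark.slow
def test_load_real_dataset_round_trip() -> None:
    """Placeholder for a hypothetical test that pulls a real dataset."""
    ...  # real dataset-loading assertions would go here
```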

I ran the interactive_data_access_and_viz and historical_climate_data_comparisons notebooks to check that changes to data_load.py have not affected any workflows.

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • None of the above

To-Do

  • Unit tests
    • Existing unit tests are passing
    • If relevant, new unit tests are written (80% unit test coverage required)
  • Documentation
  • Naming conventions followed
    • Helper functions hidden with _ before the name
  • Any notebooks known to utilize the affected functions are still working
  • Black formatting has been utilized
  • Tagged/notified 2 reviewers for this PR

@neilSchroeder neilSchroeder (Contributor) commented Apr 30, 2025

The failing test is almost certainly my fault. I can go in and fix it.

edit: apparently just re-running it worked. yay 🥳

@acordonez acordonez (Contributor, Author)


@neilSchroeder Thanks!

@elehmer elehmer (Contributor) commented May 1, 2025

Yeah, sometimes the test environment just doesn't build right. Sigh

@neilSchroeder neilSchroeder (Contributor)

@elehmer can you take a look at this code and approve?

@elehmer elehmer (Contributor) commented May 2, 2025

Will try and get to it this afternoon

@elehmer elehmer (Contributor) left a comment

Looks really good. If we merge the all_touched PR first, we will need to add a test for that option. Thanks!

@acordonez acordonez (Contributor, Author)

@elehmer Can do
