Labels
enhancement (New feature or request)
Description
Problem
It is hard to standardize best practices for the large number of Jupyter repos.
Proposed Solution
Add a maintainability audit checklist to maintainer-tools
- Point to example implementation PRs and current config files as appropriate
- Advise an annual audit to refresh the list against current best practices - upgrade version tags, etc.
Ideas to include:
- Use Jupyter Releaser
- Triage Label enforcement
- Use "Base Setup" GitHub Action
- Cron job for daily builds
- Pre-commit setup
- Add minimal pre-commit file without auto-formatters
- Enable and run the auto-formatters as a separate commit
- Merge the PR
- Add a new PR that adds the .git-blame-ignore-revs file
- Add a new PR for flake8 and/or eslint
- Run autoformatters on backport branches to make backporting easier
- ReadTheDocs setup with PR builds
- PyData theme with MyST
- Example binder and PR commenter using a workflow or jupyterlab-probot config
- 4 kinds of docs (tutorials, how-to guides, reference, explanation)
- Coverage shown on PRs
- Coverage thresholds enforced in pytest
- Tests for downstream libraries
- CI job against minimum dependency versions
- CI job against prerelease dependency versions
- Test with warnings as errors
- Test with warnings as errors in docs
- No upper version constraints on dependencies
- Include tests in sdist but not wheel - tests should be at the top level of the repo and excluded when using `find_packages`
- Run the test suite on the conda-forge recipe
- Run `pre-commit autoupdate` or use pre-commit.ci
- Add a `.github/dependabot.yml` file with weekly updates
- Consider adding mypy - copy pip config, add a `py.typed` file
- Use `dev` requirements for anything that isn't strictly required to run tests, e.g. `pre-commit`, and recommend using `pip install -e ".[dev,test]"`
Perhaps tie this to an annual update of supported Python versions.
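The "cron job for daily builds" item could be a scheduled GitHub Actions workflow along these lines (the file name, action versions, and schedule are illustrative, not prescribed):

```yaml
# .github/workflows/daily.yml - sketch of a daily scheduled CI run
name: Daily Tests
on:
  schedule:
    - cron: "0 8 * * *" # every day at 08:00 UTC
  workflow_dispatch: # also allow manual runs
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.x"
      - run: pip install -e ".[test]"
      - run: pytest
```

A scheduled run catches breakage from new dependency releases even when the repo itself is quiet.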
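The "minimal pre-commit file without auto-formatters" step could start from something like this (the `rev` tag is illustrative; `pre-commit autoupdate` pins the current one):

```yaml
# .pre-commit-config.yaml - minimal sketch, no auto-formatters yet
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0 # illustrative tag; refresh with `pre-commit autoupdate`
    hooks:
      - id: check-yaml
      - id: end-of-file-fixer
      - id: trailing-whitespace
```

Auto-formatters (black, prettier, etc.) would then be enabled and run in the separate follow-up commit described above.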
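"Coverage thresholds enforced in pytest" can be done with pytest-cov's fail-under option; a sketch, where `mypackage` and the 80% figure are placeholders to adjust per repo:

```toml
# pyproject.toml - fail the test run if coverage drops below a threshold
# (assumes pytest-cov is installed; package name and 80% are illustrative)
[tool.pytest.ini_options]
addopts = "--cov=mypackage --cov-fail-under=80"
```

Putting it in `addopts` means plain `pytest` enforces the threshold locally and in CI alike.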
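The dependabot item maps to a small config file; a sketch covering both GitHub Actions and pip dependencies:

```yaml
# .github/dependabot.yml - weekly update checks for actions and pip deps
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```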
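For the `.git-blame-ignore-revs` PR, the wiring is roughly the following; `<format-commit-sha>` is a placeholder for the real auto-format revision, not a value from this issue:

```shell
# Record the auto-format commit so `git blame` skips it.
# <format-commit-sha> is a placeholder for the actual revision hash.
echo "<format-commit-sha>" >> .git-blame-ignore-revs

# Point git at the file (GitHub's blame view reads it automatically):
git config blame.ignoreRevsFile .git-blame-ignore-revs
```

Contributors still need the `git config` line once per clone, which is worth noting in the PR description.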