Complete Pydantic V1 to V2 migration for Python 3.14 compatibility #1
Conversation
Walkthrough

Migrates the codebase from Pydantic v1 to v2 across the compatibility layer, models, validators, serialization, and settings; updates CI workflows and dependency constraints for Python 3.14; and applies corresponding API/behavior adaptations in components, layers, utilities, dialogs, and tests.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
📜 Recent review details

Configuration used: Organization UI Review profile: CHILL · Plan: Pro

📒 Files selected for processing (8)
🔇 Additional comments (20)
Actionable comments posted: 5
Note
Due to the large number of review comments, only comments of Critical and Major severity were prioritized as inline comments.
🟡 Minor comments (25)
.github/workflows/circleci.yml-1-4 (1)

1-4: Fix the line-ending issue flagged by YAMLlint.

The static analysis tool reports a line-ending error at line 1: "wrong new line character: expected \n". Ensure the file uses Unix-style line endings (LF) throughout, not Windows-style (CRLF). This can typically be fixed by configuring your Git client or editor to use LF for this file.
.github/workflows/test_vendored.yml-1-1 (1)

1-1: Fix YAML file encoding: wrong newline character.

YAMLlint reports an encoding issue at the start of the file. Ensure the file uses Unix line endings (LF, `\n`) instead of Windows (CRLF) or other formats.

.github/workflows/test_vendored.yml-7-7 (1)
7-7: Remove extra whitespace after colon in cron schedule.

YAMLlint flags excessive whitespace after the colon in the cron line. YAML style convention calls for a single space.

🔎 Proposed fix

```diff
-    - cron:  '0 2 * * 1'  # Mondays, 2AM
+    - cron: '0 2 * * 1'  # Mondays, 2AM
```

examples/clipping_planes_interactive_.py-135-143 (1)
135-143: Remove duplicate assignment on line 138.

Line 138 duplicates the `labels_layer.mouse_pan = False` assignment from line 137. This is likely a copy-paste error. Verify the intended target layer for line 138 or remove the duplicate if both assignments are unnecessary.

🔎 Likely fix: remove the duplicate

```diff
 # Disable interactivity during plane drag
 volume_layer.mouse_pan = False
 labels_layer.mouse_pan = False
-labels_layer.mouse_pan = False
 points_layer.mouse_pan = False
```

.pre-commit-config.yaml-1-1 (1)
1-1: Change line endings from CRLF to LF.

The file uses Windows-style line endings (CRLF, `\r\n`) throughout. Pre-commit configuration should use Unix-style line endings (LF, `\n`).

.github/workflows/test_translations.yml-6-6 (1)
6-6: Fix YAMLlint formatting issue: excessive whitespace after colon.

Line 6 has two spaces after the colon, which violates YAMLlint's colons rule. Correct to a single space for consistency.

```diff
-    - cron:  '0 1 * * *'
+    - cron: '0 1 * * *'
```
.github/workflows/test_translations.yml-1-1 (1)

1-1: Fix CRLF line endings in the workflow file.

The file uses CRLF (`\r\n`) line endings instead of LF (`\n`). Convert the entire file to use consistent LF line endings.

examples/dev/slicing/random_shapes.py-61-62 (1)
61-62: Consider using the `tempfile` module for secure temporary file handling.

Using `/tmp/` directly is insecure for production code. While acceptable for development examples, consider using Python's `tempfile` module for improved security and platform compatibility.

🔎 Suggested improvement

```diff
+import tempfile
 test_filename = '/tmp/napari_example_shapes.npz'
```

Then update line 61:

```diff
-test_filename = '/tmp/napari_example_shapes.npz'
+test_filename = os.path.join(tempfile.gettempdir(), 'napari_example_shapes.npz')
```

Committable suggestion skipped: line range outside the PR's diff.
examples/dev/gui_notifications_threaded.py-12-12 (1)
12-12: Add explicit `stacklevel` to `warnings.warn()`.

When calling `warnings.warn()` from a helper function, always include `stacklevel=2` (or higher) so the warning points to the caller, not the helper. This improves debugging.

🔎 Proposed fix

```diff
-    warnings.warn('Warning in another thread')
+    warnings.warn('Warning in another thread', stacklevel=2)
```

examples/dev/gui_notifications.py-15-15 (1)
15-15: Add explicit `stacklevel` to `warnings.warn()`.

When calling `warnings.warn()` from a helper function, always include `stacklevel=2` (or higher) so the warning points to the caller, not the helper.

🔎 Proposed fix

```diff
 def warn_():
-    warnings.warn('warning!')
+    warnings.warn('warning!', stacklevel=2)
```

.github/labeler.yml-15-22 (1)
15-22: Duplicate glob patterns in the `qt` label configuration.

Lines 18-19 and 20-21 contain duplicate entries:

- `napari/_qt/**/*.py` appears twice
- `napari/qt/**/*.py` appears twice

This doesn't break functionality but adds unnecessary noise.

🔎 Proposed fix

```diff
 qt:
   - changed-files:
       - any-glob-to-any-file:
           - 'napari/_qt/**/*.py'
-          - 'napari/_qt/**/*.py'
           - 'napari/qt/**/*.py'
-          - 'napari/qt/**/*.py'
```
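As a side note, auditing such pattern lists for duplicates is a one-liner in Python; the glob strings below are copied from the diff above, and `dict.fromkeys` preserves order while dropping repeats:

```python
# order-preserving de-duplication of the label's glob patterns
globs = [
    'napari/_qt/**/*.py',
    'napari/_qt/**/*.py',
    'napari/qt/**/*.py',
    'napari/qt/**/*.py',
]
unique = list(dict.fromkeys(globs))
print(unique)  # ['napari/_qt/**/*.py', 'napari/qt/**/*.py']
```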
1-8: Fix newline character format.Static analysis indicates the file uses incorrect newline characters (likely CRLF instead of LF). Ensure the file uses Unix-style line endings (
\n) for consistency.MANIFEST.in-4-7 (1)
4-7: Duplicate include directive.

Line 7 (`recursive-include src/napari *.pyi`) is an exact duplicate of line 4. This redundancy should be removed.

🔎 Suggested fix

```diff
 recursive-include src/napari *.pyi
 recursive-include src/napari _tests/*.py
 recursive-include src/napari_builtins _tests/*.py
-recursive-include src/napari *.pyi
 recursive-include src/napari *.png *.svg *.qss *.gif *.ico *.icns
```

examples/dev/poly-hole.py-18-23 (1)
18-23: The `opts` parameter is accepted but silently ignored.

The fallback `triangulate` function accepts `opts` to match the external library's signature, but it's never used. When called with `opts='p'` (line 80), this option is silently discarded, which may cause behavioral differences compared to the `triangle` library.

Consider either:

- Documenting that options are ignored in the vispy fallback, or
- Raising a warning when unsupported options are passed.

🔎 Suggested improvement

```diff
 def triangulate(params, opts=None):
+    if opts is not None:
+        import warnings
+        warnings.warn("triangulate options are ignored when using vispy fallback", stacklevel=2)
     vertices_raw = params['vertices']
     edges = params['segments']
     tri = Triangulation(vertices_raw, edges)
     tri.triangulate()
     return {'vertices': tri.pts, 'triangles': tri.tris}
```

.github/workflows/test_comprehensive.yml-1-136 (1)
1-136: Fix newline character issue.

YAMLlint reports incorrect newline characters. Ensure the file uses LF (`\n`) line endings.

.github/workflows/benchmarks_report.yml-1-99 (1)

1-99: Fix newline character issue.

YAMLlint reports that the file uses incorrect newline characters. Ensure the file uses LF (`\n`) line endings rather than CRLF (`\r\n`).

This can typically be fixed by configuring your editor or running:

```shell
dos2unix .github/workflows/benchmarks_report.yml
```

.github/workflows/deploy_docs.yml-1-48 (1)

1-48: Fix newline character issue.

YAMLlint reports incorrect newline characters. Ensure the file uses LF (`\n`) line endings.

.github/workflows/reusable_run_tox_test.yml-163-163 (1)
163-163: Fix extra space before colon in `if` conditions.

The static analysis tool correctly identifies extra spaces before the colon on lines 163 and 174. This should be `if:` without the space for YAML consistency.

🔎 Proposed fix

```diff
-      if : ${{ env.SECOND != 'none' }}
+      if: ${{ env.SECOND != 'none' }}
```

```diff
-      if : ${{ env.THIRD != 'none' }}
+      if: ${{ env.THIRD != 'none' }}
```

Also applies to: 174-174
examples/features_table_widget.py-12-12 (1)

12-12: Typo: "viceversa" should be "vice versa".

🔎 Proposed fix

```diff
-on the layer and viceversa.
+on the layer and vice versa.
```
1-16: Incorrect line ending characters detected.Similar to the dependabot.yml file, this workflow uses CRLF line endings instead of LF. Consider converting to Unix-style line endings for consistency.
sed -i 's/\r$//' .github/workflows/pr_dependency.yml.github/dependabot.yml-1-16 (1)
1-16: Incorrect line ending characters detected.

The static analysis indicates this file uses CRLF (`\r\n`) line endings instead of the expected LF (`\n`). This can cause inconsistencies in cross-platform development and CI environments.

Please convert the file to use Unix-style line endings (LF). Most editors support this, or you can use:

```shell
sed -i 's/\r$//' .github/dependabot.yml
```

.github/workflows/edit_pr_description.yml-1-40 (1)
1-40: Line endings should use LF (`\n`) instead of CRLF.

Same issue as other workflow files: this file uses incorrect line ending characters. Ensure consistent LF line endings across all YAML workflow files.
.github/workflows/test_prereleases.yml-1-106 (1)
1-106: Line endings should use LF (`\n`) instead of CRLF.

The static analysis tool reports this file uses incorrect line ending characters. For cross-platform consistency in git repositories, YAML files should use Unix-style LF (`\n`) line endings rather than Windows-style CRLF (`\r\n`).

Consider adding or updating a `.gitattributes` file to enforce LF line endings for YAML files:

```
*.yml text eol=lf
*.yaml text eol=lf
```

.github/workflows/benchmarks.yml-1-216 (1)
1-216: Line endings should use LF (`\n`) instead of CRLF.

Consistent with other workflow files in this PR, this file uses incorrect line ending characters. Consider running a bulk fix across all YAML files in the repository.
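If a scripted bulk fix is preferred over `dos2unix`, the core transformation is a sketch like this (assumes UTF-8 text files; for a whole repo you would loop over something like `Path('.github').rglob('*.yml')`):

```python
def normalize_to_lf(text: str) -> str:
    # convert CRLF (and any stray lone CR) to LF
    return text.replace('\r\n', '\n').replace('\r', '\n')

# demonstration on an in-memory string rather than a real workflow file
sample = 'on:\r\n  push:\r\n'
print(repr(normalize_to_lf(sample)))  # 'on:\n  push:\n'
```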
.github/ISSUE_TEMPLATE/design_related.md-21-21 (1)

21-21: Minor typo: "etc" should be "etc." in American English.

The abbreviation "etc" requires a period.

🔎 Proposed fix

```diff
-- [ ] **Visual:** creating mockups, icons, etc
+- [ ] **Visual:** creating mockups, icons, etc.
```
🧹 Nitpick comments (22)
examples/dynamic-projections-dask.py (2)

42-42: Remove unused `noqa` directive.

The `noqa: E402` comment is flagged as unused because E402 (module-level import not at top) is not enabled in the Ruff configuration. Since the suppression has no effect, the comment can be safely removed.

🔎 Proposed fix

```diff
-from magicgui import magicgui  # noqa: E402
+from magicgui import magicgui
```
48-48: Prefer tuple unpacking over concatenation.

Use the unpacking operator `*` for a more Pythonic and concise syntax.

🔎 Proposed fix

```diff
-    window_shape = (size,) + (arr.shape[1:])
+    window_shape = (size, *arr.shape[1:])
```

examples/export_rois.py (1)
72-73: Optional: Normalize line continuation formatting for consistency.

The line continuation on lines 72-73 is syntactically valid but uses an unusual indentation pattern. Consider reformatting to align with the project's style conventions.

🔎 Suggested formatting adjustment

```diff
-screenshot_rois_scaled = viewer.export_rois(rois, scale=2
-)
+screenshot_rois_scaled = viewer.export_rois(rois, scale=2)
```

Alternatively, if the longer line is intentional for readability:

```diff
-screenshot_rois_scaled = viewer.export_rois(rois, scale=2
-)
+screenshot_rois_scaled = viewer.export_rois(
+    rois, scale=2
+)
```

examples/interactive_move_rectangle_3d.py (1)
1-73: Scope check: This file appears orthogonal to the Pydantic V1→V2 migration.

This example script contains no Pydantic models, validators, custom types, or deprecated method calls (`.dict()`, `.json()`, etc.). The PR objectives focus on internal utilities and layer managers in src/napari, not examples. If this file was reformatted as part of a broader style pass, that's reasonable, but verify that:

- No unintended functional changes were introduced (the AI summary claims formatting-only).
- napari's public API (Viewer, shapes_layer) hasn't changed in ways that would break this example (e.g., method signatures, property access).

The code logic itself looks correct: rectangle data initialization, normal vector computation, viewer setup, mouse drag callback, and main entry point all appear intact.

To confirm the example still works post-migration, please verify:

- `napari.Viewer(ndisplay=3)` still accepts the `ndisplay` parameter
- `viewer.add_shapes()`, `viewer.camera`, and `viewer.text_overlay` properties are still accessible and compatible
- The `@shapes_layer.mouse_drag_callbacks.append` decorator pattern still works
- Running this example on Python 3.12 (or 3.14 once npe2 is updated) does not raise errors

If you'd like, I can generate a shell script to verify that this example is syntactically correct and runs without import or attribute errors, though actual execution would require napari to be installed.
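A syntax-only check along those lines needs no napari install at all: byte-compiling the file catches syntax errors without executing it. A sketch (the file written here is a tiny stand-in, not the real example):

```python
import os
import py_compile
import tempfile

# write a tiny stand-in script, then byte-compile it;
# py_compile.compile raises PyCompileError on a syntax error
path = os.path.join(tempfile.gettempdir(), 'example_syntax_check.py')
with open(path, 'w') as f:
    f.write('x = 1\nprint(x)\n')

py_compile.compile(path, doraise=True)
print('syntax ok')
```

The same call works on any of the examples/ files when pointed at their real paths.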
examples/custom_mouse_functions.py (1)

24-25: Callback signatures match framework API; unused parameters are intentional.

The unused parameters (`viewer` on line 24, `layer` on lines 59 and 76) are part of napari's callback handler contract. Not all handlers use both parameters, which is a normal pattern in event-driven systems.

If you prefer stricter linting, you could adopt the `_parameter` convention for explicitly unused parameters (e.g., `_viewer`, `_layer`), but this is optional for example code.

Also applies to: 59-59, 76-76
.github/FUNDING.yml (1)

1-2: Consider scope alignment.

This file is a funding configuration unrelated to the stated PR objective of completing the Pydantic V1→V2 migration. If this change is incidental (e.g., from a cleanup commit), verify that line-ending normalization is intentional and coordinated across the repository.
examples/mgui_dask_delayed_.py (1)

25-38: Combine the two `if __name__ == '__main__':` blocks into one.

The code has two separate `if __name__ == '__main__':` blocks (lines 25-35 and 37-38). While technically valid in Python, this is non-idiomatic and reduces clarity. Best practice is to combine setup and execution code into a single block.

🔎 Proposed refactor to combine the blocks

```diff
 if __name__ == '__main__':
     from dask.distributed import Client

     client = Client()

     @magicgui(client={'bind': client})
     def widget(client, nz: int = 1000) -> Future[ImageData]:
         return client.submit(_slow_function, nz)

     viewer = napari.Viewer()
     viewer.window.add_dock_widget(widget, area='right')
-
-if __name__ == '__main__':
     napari.run()
```

examples/dev/slicing/random_points.py (1)
7-9: Module docstring placement is non-standard.

The docstring appears after the imports, but PEP 257 recommends placing module docstrings at the very top of the file, before any imports. This is a pre-existing style issue, not introduced by this change.

🔎 Suggested fix

```diff
+"""
+Stress the points layer by generating a large number of points.
+"""
+
 import argparse

 import numpy as np

 import napari

-"""
-Stress the points layer by generating a large number of points.
-"""
-
 parser = argparse.ArgumentParser()
```
43-47: Consider adding error handling for referrer chain traversal.

The nested `gc.get_referrers()` calls with hard-coded indexing (`[0]`, `[1]`) will raise `IndexError` if the expected referrers don't exist or if `li()` returns `None`. Since this is a dev utility, adding basic checks could make debugging more robust.

Optional: Add defensive checks

```diff
 print(gc.get_referrers(li()))
-print(gc.get_referrers(li())[1])
-print(gc.get_referrers(gc.get_referrers(gc.get_referrers(li())[0])))
-res = gc.get_referrers(gc.get_referrers(gc.get_referrers(li())[0])[0])
-print(res)
+referrers = gc.get_referrers(li())
+if len(referrers) > 1:
+    print(referrers[1])
+if referrers:
+    level2 = gc.get_referrers(referrers[0])
+    print(gc.get_referrers(level2))
+    if level2:
+        res = gc.get_referrers(level2[0])
+        print(res)
```

examples/dev/demo_shape_creation.py (2)
73-73: Fix spacing inconsistency in list.

Missing space after the comma before `22`.

🔎 Proposed fix

```diff
-    coords = coords[:, [0, 8, 16,22]]
+    coords = coords[:, [0, 8, 16, 22]]
```
93-96: Normalize list spacing for consistency.

The RGBA color arrays on lines 93 and 95 lack spaces after commas, inconsistent with other lists in the file (e.g., line 73).

🔎 Proposed fix

```diff
-    'face_color': 'class' if args.properties else [1,1,1,1],
+    'face_color': 'class' if args.properties else [1, 1, 1, 1],
     'face_color_cycle': color_cycle,
-    'edge_color': 'class' if args.properties else [1,1,1,1],
+    'edge_color': 'class' if args.properties else [1, 1, 1, 1],
```

examples/minimum_blending.py (1)
27-33: Inconsistent spacing around `=` in keyword arguments.

Lines 30-31 use spaces around `=` while line 32 has inconsistent spacing (no space before, space after). Per PEP 8, keyword arguments should not have spaces around `=`:

```
contrast_limits=[[1110, 23855], [1600, 50000]],
colormap=['I Purple', 'I Orange'],
blending=['translucent_no_depth', 'minimum']
```

Given this PR includes formatting normalization, consider standardizing the spacing for consistency.

🔎 Proposed fix for consistent keyword argument spacing

```diff
 viewer.add_image(data.cells3d(),
                  name=['membrane', 'nuclei'],
                  channel_axis=1,
-                 contrast_limits = [[1110, 23855], [1600, 50000]],
-                 colormap = ['I Purple', 'I Orange'],
-                 blending= ['translucent_no_depth', 'minimum']
+                 contrast_limits=[[1110, 23855], [1600, 50000]],
+                 colormap=['I Purple', 'I Orange'],
+                 blending=['translucent_no_depth', 'minimum']
                  )
```

examples/dev/overlays.py (1)
50-64: Event handler callbacks require the `event` parameter by framework contract; consider marking it as intentionally unused.

The Ruff lint flags (ARG002) on the `event` parameters are false positives in the context of event handler patterns. However, since the `event` parameter is genuinely unused in these methods, you can make the intent explicit by renaming to a leading underscore (e.g., `_event`), which signals to linters that the parameter is accepted by contract but intentionally unused.

🔎 Suggested convention change

```diff
-    def _on_color_change(self, event=None):
+    def _on_color_change(self, _event=None):
         self.node.color = self.overlay.color

-    def _on_position_change(self, event=None):
+    def _on_position_change(self, _event=None):
         # we can overload the position changing to account for the size, so that the dot
         # always sticks to the edge; there are `offset` attributes specifically for this
         self.x_offset = self.y_offset = self.overlay.size / 2
         super()._on_position_change()

-    def _on_size_change(self, event=None):
+    def _on_size_change(self, _event=None):
         self.node.radius = self.overlay.size / 2
         self.x_size = self.overlay.size
         self.y_size = self.overlay.size
         # trigger position update since the radius changed
         self._on_position_change()
```

examples/dev/slicing/janelia_s3_n5_multiscale.py (1)
1-14: Consider moving module docstring before imports per PEP 257.

The module docstring (lines 6-13) is currently placed after the imports. Per PEP 257, module docstrings should appear at the very beginning of the file, before any import statements. While this doesn't affect functionality, it would improve consistency with Python conventions.

🔎 Suggested structure

```diff
+"""
+The sample data here is Interphase HeLa Cell [https://openorganelle.janelia.org/datasets/jrc_hela-3],
+from HHMI's OpenOrganelle [https://openorganelle.janelia.org].
+
+The data are hosted by Open Data on AWS on S3.
+
+This tests access to multi-scale remote data.
+"""
+
 import dask.array as da
 import zarr

 import napari

-"""
-The sample data here is Interphase HeLa Cell [https://openorganelle.janelia.org/datasets/jrc_hela-3],
-from HHMI's OpenOrganelle [https://openorganelle.janelia.org].
-
-The data are hosted by Open Data on AWS on S3.
-
-This tests access to multi-scale remote data.
-"""
```

.github/CONTRIBUTING.md (1)
17-19: Minor: Trailing whitespace on line 19.

There appears to be trailing whitespace after "not directly." on line 19.

🔎 Proposed fix

```diff
 `reusable_run_tox_test.yml` uses our constraint files to install the compatible dependencies for each test environment which may differ
-by OS and qt versions. It is called from `test_pull_request.yml` and `test_comprehensive.yml`, not directly. 
+by OS and qt versions. It is called from `test_pull_request.yml` and `test_comprehensive.yml`, not directly.
```

examples/add_surface_2D.py (1)
20-20: Inconsistent spacing around commas inside the tuple.

The spaces before commas (`data ,` and `faces ,`) are unconventional per PEP 8 style. Consider removing the extra spaces for consistency.

🔎 Suggested fix

```diff
-layer = viewer.add_surface((data , faces , values))
+layer = viewer.add_surface((data, faces, values))
```

dockerfile (1)
1-6: Consider parameterizing platform flag in FROM statement.

The `FROM --platform=linux/amd64` uses a constant value. For better portability and flexibility, consider using a build argument (e.g., `--build-arg TARGETPLATFORM`) or omitting the flag to allow automatic platform detection, unless `amd64` is a hard requirement.

examples/3D_paths.py (2)
18-20: Inconsistent indentation reduces readability.

The arguments are over-indented relative to the function call. Consider aligning with PEP 8 continuation style.

🔎 Suggested formatting

```diff
-blobs = data.binary_blobs(
-    length=128, blob_size_fraction=0.05, n_dim=3, volume_fraction=0.05
-    )
+blobs = data.binary_blobs(
+    length=128, blob_size_fraction=0.05, n_dim=3, volume_fraction=0.05
+)
```
26-29: Inconsistent indentation in nested array definition.

The inner array elements have varying indentation levels, which reduces readability. Consider consistent formatting.

🔎 Suggested formatting

```diff
-path = np.array([np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [20, 5, 15],
-                 [56, 70, 21], [127, 127, 127]]),
-                 np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [0, 5, 15],
-                 [0, 70, 21], [0, 127, 127]])])
+path = np.array([
+    np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [20, 5, 15],
+              [56, 70, 21], [127, 127, 127]]),
+    np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [0, 5, 15],
+              [0, 70, 21], [0, 127, 127]])
+])
```

examples/action_manager.py (2)
106-109: Minor formatting inconsistency in dictionary.

The dictionary values have extra leading whitespace, creating inconsistent indentation compared to typical Python style.

🔎 Suggested fix

```diff
 settings = {
-    'napari:activate_points_add_mode' : '=',
+    'napari:activate_points_add_mode': '=',
     'napari:activate_points_select_mode': '-',
 }
```
112-114: Minor: Inconsistent spacing in loop body.

There's an extra space before the `=` in the assignment.

🔎 Suggested fix

```diff
 for action, key in settings.items():
-    _old_shortcut  = action_manager.unbind_shortcut(action)
+    _old_shortcut = action_manager.unbind_shortcut(action)
     action_manager.bind_shortcut(action, key)
```

examples/live_tiffs_.py (1)
111-114: Consider clarifying the for-else construct.

The `else` clause on a `for` loop executes when the loop completes without a `break`. Here it yields `None` after processing all files, which is intentional for signaling the thread worker. However, this Python idiom is often unfamiliar to readers.

A more explicit alternative:

```diff
 # yield every file to process as a dask.delayed function object.
 for p in sorted(files_to_process, key=alphanumeric_key):
     yield delayed(imread)(os.path.join(path, p))
-else:
-    yield
+
+# Yield None if no files were processed this iteration
+if not files_to_process:
+    yield None
```

This is optional; the current code is correct, just less common.
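For readers unfamiliar with the idiom, here is a self-contained illustration of for-else semantics; the names are illustrative and not taken from the example:

```python
def contains(items, target):
    for x in items:
        if x == target:
            break
    else:
        # the else branch runs only when the loop finished without `break`
        return False
    return True

print(contains([1, 2, 3], 2))  # True  (loop broke early)
print(contains([1, 2, 3], 9))  # False (loop exhausted, else ran)
```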
examples/dev/controllable_overlay.py
Outdated
```python
# mypy: disable-error-code="attr-defined"
import warnings
from collections.abc import Generator
from copy import deepcopy

import numpy as np
from magicgui import magicgui

import napari
from napari._vispy.mouse_event import NapariMouseEvent
from napari._vispy.overlays.interaction_box import VispySelectionBoxOverlay
from napari._vispy.utils.visual import overlay_to_visual
from napari.components.overlays import SelectionBoxOverlay
from napari.components.overlays.interaction_box import InteractionBoxHandle
from napari.layers import Image
from napari.layers.utils.interaction_box import (
    generate_interaction_box_vertices,
    get_nearby_handle,
)


# define a model for the selection box overlay;
# we subclass from SelectionBoxOverlay to get the
# default behavior of the selection box;
class SelectionBoxNoRotation(SelectionBoxOverlay):
    """Selection box overlay with no rotation handle."""


# we also need to define an equivalent vispy overlay;
# again, we subclass from VispySelectionBoxOverlay
class VispySelectionBoxNoRotation(VispySelectionBoxOverlay):
    """Vispy selection box overlay with no rotation handle."""

    # the _on_bounds_change method is the same as in the
    # original VispySelectionBoxOverlay, but we set
    # rotation to False to not draw the rotation handle
    def _on_bounds_change(self) -> None:
        if self.layer._slice_input.ndisplay == 2:
            top_left, bot_right = self.overlay.bounds
            self.node.set_data(
                # invert axes for vispy
                top_left[::-1],
                bot_right[::-1],
                handles=self.overlay.handles,
                selected=self.overlay.selected_handle,
                # by setting rotation to False,
                # the circle handle will not be drawn
                rotation=False,
            )


# before we can use the new overlay, we have to update
# the overlay_to_visual mapping to include our new overlay;
# this is necessary so that the correct vispy overlay
# is used when the overlay is created
overlay_to_visual[SelectionBoxNoRotation] = VispySelectionBoxNoRotation

viewer = napari.Viewer()

# we add an image layer with random data;
data = np.random.randint(0, 255, size=(1024, 512), dtype=np.uint8)
image = viewer.add_image(
    data,
    name='image',
)

# just for type checking
assert isinstance(image, Image)

# we recover the bounds of the image layer;
# this method will ensure that the overlay is drawn
# correctly in the viewer
# TODO: this half pixel offset should be done on the visual side actually
layer_bounds = ((0, 0), data.shape)
image._overlays['selection_no_rotation'] = SelectionBoxNoRotation(
    bounds=layer_bounds, handles=True
)

# with some adjustments, the selection box overlay
# can be interacted with via mouse events;
# we need to first setup the mouse event handlers
# to allow for the interaction with the overlay;


# this callback will handle the mouse events of
# dragging and dropping the selection box handles;
# it will check if the mouse is in range of one of the
# overlay handles; if it is, we will set the selected handle
# to the handle that is closest to the mouse position;
# then we will set the bounds of the overlay to the
# new position of the mouse;
def resize_selection_box(
    layer: Image, event: 'NapariMouseEvent'
) -> 'Generator[None, None, None]':
    """Resize the selection box based on mouse movement.

    Parameters
    ----------
    layer : DetectorLayer
        The layer to resize the selection box for.
    event : NapariMouseEvent
        The event triggered by mouse movement.

    Yields
    ------
    None
        This is a generator function that handles mouse dragging.
    """
    if len(event.dims_displayed) != 2:
        return

    # Get the selected handle
    selected_handle = layer._overlays['selection_no_rotation'].selected_handle
    if selected_handle is None or selected_handle in [
        InteractionBoxHandle.INSIDE,
        InteractionBoxHandle.ROTATION,
    ]:
        # If no handle is selected or the selected handle
        # is INSIDE or ROTATION, do nothing
        return

    top_left, bot_right = (
        list(x)
        for x in deepcopy(layer._overlays['selection_no_rotation'].bounds)
    )

    layer_bounds = image._display_bounding_box_augmented([0, 1])

    # to prevent the event from being passed down to the
    # pan-zoom event handler, set the event as handled;
    event.handled = True

    yield

    # Main event loop for handling drag events
    while event.type == 'mouse_move':
        mouse_pos = layer.world_to_data(event.position)[event.dims_displayed]
        clipped_y = np.clip(mouse_pos[0], *layer_bounds[0])
        clipped_x = np.clip(mouse_pos[1], *layer_bounds[1])

        # based on the new mouse position, we recalculate the bounds
        # of the overlay; we need to ensure that the new bounds are within
        # the bounds of the image
        match selected_handle:
            case InteractionBoxHandle.TOP_LEFT:
                top_left[0] = clipped_y
                top_left[1] = clipped_x
            case InteractionBoxHandle.TOP_CENTER:
                top_left[0] = clipped_y
            case InteractionBoxHandle.TOP_RIGHT:
                top_left[0] = clipped_y
                bot_right[1] = clipped_x
            case InteractionBoxHandle.CENTER_LEFT:
                top_left[1] = clipped_x
            case InteractionBoxHandle.CENTER_RIGHT:
                bot_right[1] = clipped_x
            case InteractionBoxHandle.BOTTOM_LEFT:
                bot_right[0] = clipped_y
                top_left[1] = clipped_x
            case InteractionBoxHandle.BOTTOM_CENTER:
                bot_right[0] = clipped_y
            case InteractionBoxHandle.BOTTOM_RIGHT:
                bot_right[0] = clipped_y
                bot_right[1] = clipped_x
            case _:
                pass

        # now we update the bounds of the overlay
        # to trigger the visual update;
        layer._overlays['selection_no_rotation'].bounds = deepcopy(
            (tuple(top_left), tuple(bot_right))
        )
        yield


# this callback will hightlight the overlay handles
# when the mouse hovers over them;
def highlight_roi_box_handles(layer: Image, event: NapariMouseEvent) -> None:
    """Highlight the hovered handle of a selection box.

    Parameters
    ----------
    layer : Image
        The layer to highlight the selection box for.
    event : NapariMouseEvent
        The event triggered by mouse movement.
    """
    # the event is not handled by the viewer
    # if the number of displayed dimensions is not 2
    # this is a requirement for the overlay to be displayed
    if len(event.dims_displayed) != 2:
        return

    # we work in data space so we're axis aligned which simplifies calculation
    # same as Layer.world_to_data
    world_to_data = (
        layer._transforms[1:].set_slice(layer._slice_input.displayed).inverse
    )

    # interaction box calculations all happen in vispy coordinates (zyx)
    pos = np.array(world_to_data(event.position))[event.dims_displayed][::-1]

    top_left, bot_right = layer._overlays['selection_no_rotation'].bounds
    handle_coords = generate_interaction_box_vertices(
        top_left[::-1], bot_right[::-1], handles=True
    )
    nearby_handle = get_nearby_handle(pos, handle_coords)

    # if the selected handle is INSIDE or ROTATION, we don't want to
    # highlight the handles, so we return without doing anything
    if nearby_handle in [
        InteractionBoxHandle.INSIDE,
        InteractionBoxHandle.ROTATION,
    ]:
        nearby_handle = None

    # set the selected vertex of the box to the nearby_handle (can also be INSIDE or None)
    layer._overlays['selection_no_rotation'].selected_handle = nearby_handle


# after defining the callbacks, we need to connect them to our layer;
# mouse_move_callbacks is a list of callbacks invoked when the mouse
# hovers over the layer;
# mouse_drag_callbacks is a list of callbacks invoked when the
# mouse is pressed, moved and released;
image.mouse_move_callbacks.append(highlight_roi_box_handles)
image.mouse_drag_callbacks.append(resize_selection_box)


# we use a simple magicgui widget to allow
# the toggling of the selection box overlay
# as demonstration
@magicgui(auto_call=True)
def toggle_overlay(
    viewer: napari.Viewer, toggle_selection_box: bool = False
) -> None:
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        viewer.layers['image']._overlays[
            'selection_no_rotation'
        ].visible = toggle_selection_box


# add the widget to the viewer
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether these example files import or use Pydantic models
echo "=== Checking for Pydantic imports in controllable_overlay.py ==="
grep -n "pydantic\|BaseModel\|validator\|field_validator" examples/dev/controllable_overlay.py || echo "No Pydantic imports found"
echo -e "\n=== Checking for Pydantic usage patterns ==="
grep -n "\.dict()\|\.model_dump()\|\.json()\|\.model_dump_json()\|__fields__\|model_fields" examples/dev/controllable_overlay.py || echo "No Pydantic method calls found"Repository: derekthirstrup/napari
Length of output: 232
This file contains no Pydantic imports, model definitions, or method calls (e.g., .dict(), .model_dump(), .json()). The PR objectives describe a comprehensive Pydantic V2 migration, yet this example file has no involvement in that work.
Clarify whether inclusion is intentional due to indirect Pydantic dependencies through napari's internal modules, or if the file was included by mistake. If there are transitive Pydantic dependencies, document that context.
err_btn = QtViewerPushButton(None, 'warning', 'new Error', make_error)
warn_btn = QtViewerPushButton(None, 'warning', 'new Warn', make_warning)
Fix mismatched arguments to QtViewerPushButton.
The arguments to QtViewerPushButton don't align with the constructor signature. Per the reference code, the signature is __init__(button_name: str, tooltip: str = '', slot=None, action: str = '', ...).
Line 23 passes None as button_name (should be a string), and the third argument 'new Error' is passed to slot (should be a callable, not a string). The function make_error is passed to action (should be a string, not a callable).
The argument order and types need correction.
🔎 Proposed fix
-err_btn = QtViewerPushButton(None, 'warning', 'new Error', make_error)
-warn_btn = QtViewerPushButton(None, 'warning', 'new Warn', make_warning)
+err_btn = QtViewerPushButton('error', 'new Error', make_error)
+warn_btn = QtViewerPushButton('warning', 'new Warn', make_warning)
🤖 Prompt for AI Agents
In examples/dev/gui_notifications_threaded.py around lines 23-24, the
QtViewerPushButton arguments are mismatched: pass a string as button_name and a
callable as slot, and ensure action is a string. Change the calls to use the
button name first (e.g., "new Error" / "new Warn"), tooltip second (e.g.,
"warning"), slot third as the callable (make_error / make_warning), and provide
a string action (e.g., "error" / "warn" or empty string) as the fourth argument.
examples/dev/gui_notifications.py
Outdated
import something_that_does_not_exist
return something_that_does_not_exist.fun(x, y)
Remove broken import that will crash the example.
Line 10 imports a module that does not exist, and line 11 attempts to call a method on it. This will raise an ImportError at runtime and prevent the example from running. If the intent is to demonstrate error handling in threads, use a real module or raise an error explicitly without the bogus import.
🔎 Proposed fix
 def raise_():
     x = 1
     y = 'a string'
-    import something_that_does_not_exist
-    return something_that_does_not_exist.fun(x, y)
+    raise RuntimeError('Error in another thread')
🤖 Prompt for AI Agents
In examples/dev/gui_notifications.py around lines 10-11, remove the broken
import of something_that_does_not_exist and the call to its .fun method; replace
with either a valid, real import and function call (e.g., use a standard library
module or a helper function defined in this repo) or explicitly raise an
exception to demonstrate thread error handling (e.g., raise RuntimeError("demo
error") inside the thread) so the example no longer fails with ImportError at
runtime.
examples/dev/gui_notifications.py
Outdated
|
|
viewer = napari.Viewer()
layer_buttons = viewer.window._qt_viewer.layerButtons
Avoid accessing private attributes.
Line 19 accesses viewer.window._qt_viewer.layerButtons. The _qt_viewer attribute (with underscore prefix) is a private implementation detail. Use the public API qt_viewer instead to ensure forward compatibility.
🔎 Proposed fix
-layer_buttons = viewer.window._qt_viewer.layerButtons
+layer_buttons = viewer.window.qt_viewer.layerButtons
🤖 Prompt for AI Agents
In examples/dev/gui_notifications.py around line 19, the code accesses the
private attribute viewer.window._qt_viewer.layerButtons; change this to use the
public API by replacing it with viewer.window.qt_viewer.layerButtons (or
viewer.qt_viewer.layerButtons if the public accessor is on viewer directly in
this context), ensuring you reference the public qt_viewer attribute instead of
the underscored _qt_viewer so the code uses the supported API and remains
forward-compatible.
# Description

Currently, adding a label triggers the benchmark workflow. It is then skipped conditionally, but still adds an entry to the list of actions in the GitHub interface (screenshot of the Actions list attached). This PR moves the label trigger to a separate workflow that will contain all skipped
# References and relevant issues

closes napari#8521

# Description

It looks like a bug was introduced in napari#7150. There was a wrong assumption that for `ScalarField` the `data.ndim` is equal to `layer.ndim`. This PR goes back to using `layer.ndim` to properly expand data dimensionality.
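The dimensionality expansion at issue can be sketched with plain NumPy. The helper below is illustrative only (its name and exact behavior are assumptions, not napari's actual implementation), but it shows why `layer.ndim` rather than `data.ndim` must drive the expansion:

```python
import numpy as np

def expand_to_layer_ndim(data: np.ndarray, layer_ndim: int) -> np.ndarray:
    """Prepend singleton axes so the data matches the layer's dimensionality.

    For a ScalarField, data.ndim and layer.ndim can differ, so using
    data.ndim as the reference under-expands the array.
    """
    missing = layer_ndim - data.ndim
    if missing <= 0:
        return data
    return data.reshape((1,) * missing + data.shape)

# a 2D slice displayed in a 3D layer gains one leading singleton axis
expanded = expand_to_layer_ndim(np.zeros((4, 5)), layer_ndim=3)
```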
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/build_docs.yml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/build_docs.yml
[error] 1-1: wrong new line character: expected \n
(new-lines)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: ubuntu-22.04 / ubuntu-22.04 py 3.10 headless cov
- GitHub Check: windows-2022 / windows-2022 py 3.12 pyqt5 cov
- GitHub Check: ubuntu-latest / ubuntu-latest py 3.13 pyqt6 cov
- GitHub Check: ubuntu-latest / ubuntu-latest py 3.12 pyqt5 cov
- GitHub Check: ubuntu-latest / ubuntu-latest py 3.11 pyqt5 cov
- GitHub Check: windows-2022 / windows-2022 py 3.13 pyqt5 cov
- GitHub Check: ubuntu-latest / ubuntu-latest py 3.11 pyside6 cov
- GitHub Check: ubuntu-latest / ubuntu-latest py 3.10 pyqt5 cov
- GitHub Check: ubuntu-22.04 / ubuntu-22.04 py 3.10 pyqt5 min_req cov
- GitHub Check: windows-2022 / windows-2022 py 3.11 pyqt5 cov
- GitHub Check: windows-2022 / windows-2022 py 3.10 pyqt5 cov
- GitHub Check: ubuntu-latest / ubuntu-latest py 3.13 pyqt5 cov
- GitHub Check: test examples / ubuntu-latest py 3.12 py312-linux-pyqt6-examples-cov cov
- GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
- GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
- GitHub Check: pip install / ubuntu-latest 3.12 pip install
- GitHub Check: build_docs / Build & Upload Docs Artifact
- GitHub Check: build1 (Docker, napari-xpra, napari/napari-xpra)
- GitHub Check: build1 (Docker, napari, napari/napari)
🔇 Additional comments (1)
.github/workflows/build_docs.yml (1)
29-31: Reasonable temporary workaround for blocking Pydantic V2 incompatibility.

The change to the `slimfast` target is a pragmatic solution to skip npe2 docs prep, which fails due to Pydantic V2 schema incompatibility in npe2's `_docs/render.py`. The `slimfast` target uses `prep-stubs` instead of `prep-docs`, avoiding the problematic step. The comments and PR reference provide clear context for this temporary measure until npe2 migrates to Pydantic V2.
.github/workflows/build_docs.yml
Outdated
name: Build PR Docs

on:
  push:
    branches:
      - docs
    tags:
      - 'v*'
  workflow_dispatch:
  workflow_call:
  pull_request:
    paths:
      - '.github/workflows/build_docs.yml'
      - 'resources/constraints/*docs*'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build_docs:
    uses: napari/shared-workflows/.github/workflows/build_docs.yml@main
    secrets: inherit
    with:
      main-repo: ${{ github.repository }}
      main-ref: ${{ github.sha }}
      docs-repo: napari/docs
      docs-ref: main
      # Use slimfast to skip npe2 docs prep until npe2 has Pydantic V2 compatibility
      # See: https://github.com/napari/npe2/pull/419
      make_target: "slimfast"
Fix line ending format.
YAMLlint reports incorrect line endings in this file. Ensure the file uses LF (\n) line endings instead of CRLF (\r\n) for consistency with Unix conventions and to avoid potential issues with git and CI tools.
🔎 Fix line endings
Convert the file to use LF line endings. In most editors:
- VS Code: Click the line ending indicator in the status bar and select "LF"
- vim/neovim: `:set ff=unix` then `:w`
- Command line: `dos2unix .github/workflows/build_docs.yml`
Or configure git to handle line endings automatically via .gitattributes:
*.yml text eol=lf
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: wrong new line character: expected \n
(new-lines)
🤖 Prompt for AI Agents
In .github/workflows/build_docs.yml around lines 1 to 31, the file currently
uses CRLF line endings; change the file to use LF line endings (\n) throughout
(e.g., via your editor’s line-ending setting or a conversion tool) and commit
the change, and add/update a .gitattributes entry to enforce LF for YAML files
(so future commits preserve Unix-style line endings).
# References and relevant issues

closes napari#8506

# Description

When calculating miter vectors, initialize the output array for division with zeros to avoid random values. This resolves the problem pointed out by the warning introduced in NumPy 2.4.
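The zero-initialization fix follows a standard NumPy pattern: when `np.divide` is given a `where=` mask, entries where the mask is False are left untouched in the output buffer, so that buffer must not be uninitialized memory. A hedged sketch (array names are illustrative, not napari's actual code):

```python
import numpy as np

lengths = np.array([2.0, 0.0, 4.0])
vectors = np.array([[2.0, 0.0], [0.0, 0.0], [0.0, 4.0]])

# With out=np.empty_like(...), rows where the mask is False would keep
# whatever garbage happened to be in the buffer; zeros make them well defined.
normalized = np.divide(
    vectors,
    lengths[:, None],
    out=np.zeros_like(vectors),
    where=lengths[:, None] != 0,
)
```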
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/build_docs.yml (1)
1-1: Line ending issue already flagged. The CRLF line ending issue has already been identified in a previous review comment.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/build_docs.yml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/build_docs.yml
[error] 1-1: wrong new line character: expected \n
(new-lines)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
- GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
- GitHub Check: build_docs / Build & Upload Docs Artifact
- GitHub Check: build1 (Docker, napari-xpra, napari/napari-xpra)
.github/workflows/build_docs.yml
Outdated
      # Use slimfast to skip npe2 docs prep until npe2 has Pydantic V2 compatibility
      # Note: docs build may still fail due to Pydantic V2's model_json_schema
      # docstrings having formatting issues that Sphinx treats as errors.
      # This requires sphinx configuration changes in napari/docs repo.
      # See: https://github.com/napari/npe2/pull/419
      make_target: "slimfast"
🧩 Analysis chain
🏁 Script executed:
# Check if the file exists and read the lines around 29-34
head -40 .github/workflows/build_docs.yml | tail -20Repository: derekthirstrup/napari
Length of output: 817
🏁 Script executed:
# Search for "slimfast" in the repository to understand what this target does
rg "slimfast" --type yaml --type makefile -B 2 -A 2Repository: derekthirstrup/napari
Length of output: 97
🌐 Web query:
napari npe2 pull request 419 Pydantic V2 compatibility
💡 Result:
Do you want a link, a summary of the changes in PR napari#419, or the current merge/status (and whether it adds Pydantic v2 compatibility)?
🏁 Script executed:
# Search for slimfast without file type restrictions
rg "slimfast" -B 2 -A 2Repository: derekthirstrup/napari
Length of output: 47
🌐 Web query:
github.com/napari/npe2/pull/419 Pydantic V2
💡 Result:
Do you want a summary of that specific GitHub pull request (napari/npe2 PR napari#419), the changes needed for Pydantic v2 in npe2, or a link to the PR page so you can view it?
🌐 Web query:
site:github.com/napari/npe2/pull/419
💡 Result:
I couldn't find a GitHub Pull Request at site:github.com/napari/npe2/pull/419. The npe2 repository is here if you want me to search it directly or check a different PR number: https://github.com/napari/npe2. [1]
Would you like me to (a) search the npe2 repo for PRs mentioning a specific topic, (b) list recent npe2 PRs, or (c) open a different PR number/URL?
🌐 Web query:
npe2 Pydantic V2 compatibility status 2024 2025
💡 Result:
Short summary (2024–2025):
- Napari moved to Pydantic v2 as a minimum dependency starting with the 0.6.x series — napari requires Pydantic >= 2.2 and dropped Pydantic v1 support in that transition. [1][2]
- During 0.6.x napari auto-converts older npe1 plugins to npe2, but some plugins that still depend on Pydantic v1 may break; napari documents this and recommends plugin authors migrate to npe2/Pydantic v2. [2][3]
- Pydantic v2 itself continued receiving updates through 2024–2025 (performance/bugfix releases); some v2 versions had compatibility issues with newer Python (e.g., pre-2.8.0 v2 had pydantic-core/Python 3.13 incompatibilities), so upgrading to recent Pydantic v2.x is recommended. [4][5]
If you want, I can:
- list specific napari plugin compatibility notes or plugins known to be broken, or
- check whether a particular plugin or environment (Python version) will work with Pydantic v2.
🏁 Script executed:
# Look for documentation about the shared-workflows repository
rg "shared-workflows" -A 5 -B 5Repository: derekthirstrup/napari
Length of output: 47
🌐 Web query:
napari shared-workflows build_docs slimfast make target
💡 Result:
Short answer: there is no make target named build_docs — use the docs-related targets instead (most relevant: slimfast or docs). CI uses the slimfast docs-only build by default; to request a different docs target on a PR comment write:
@napari-bot make
where can be: html, html-noplot, docs, slimfast, slimgallery (and -live variants for local/pixi). Locally you can run the same targets with make or pixi (recommended: pixi run slimfast). [1][2]
Sources:
- napari docs contributing (lists make targets, slimfast default, and @napari-bot trigger). [1]
- napari/docs README (make/pixi usage and slimfast/html targets). [2]
🌐 Web query:
github.com/napari/shared-workflows build_docs.yml make_target
💡 Result:
Do you want me to (A) locate and show the build_docs.yml file in napari/shared-workflows, or (B) explain what the make_target input/variable does and where it's used in the napari docs CI? Which one?
Reasonable workaround for Pydantic V2 migration.
The change to use the "slimfast" make target is a pragmatic temporary solution to avoid npe2 Pydantic V2 compatibility issues during the migration. The "slimfast" target is a documented, standard make target in napari used for docs-only builds. The comments provide clear context and rationale.
Note: The referenced PR link (napari/npe2#419) could not be verified as accessible. If this PR exists in a different location or under a different number, please update the comment reference for future clarity.
🤖 Prompt for AI Agents
.github/workflows/build_docs.yml lines 29-34: the workflow comment references a
PR link that could be incorrect or unreachable; verify whether
https://github.com/napari/npe2/pull/419 is the correct source for the Pydantic
V2/sphinx discussion and update the comment to point to the correct PR/issue or
remove the link if none exists; if you confirm the link, add a brief note with
the PR title or change the URL to the canonical issue/PR in the napari/npe2 repo
so future readers can locate the discussion.
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.circleci/config.ymlpyproject.tomlsrc/napari/_qt/_qapp_model/injection/_qprocessors.py
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.circleci/config.yml
[error] 1-1: wrong new line character: expected \n
(new-lines)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
- GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
- GitHub Check: Import lint
- GitHub Check: build_docs / Build & Upload Docs Artifact
- GitHub Check: build1 (Docker, napari-xpra, napari/napari-xpra)
- GitHub Check: build1 (Docker, napari, napari/napari)
🔇 Additional comments (2)
.circleci/config.yml (2)
1-9: LGTM! The header comments and configuration structure are clear and well-documented.
48-49: Clarify the discrepancy between CircleCI's 'slim' and build_docs.yml's 'slimfast' target.

The switch to a reduced docs build target is appropriate as a temporary workaround during the Pydantic V2 migration. However, note that .github/workflows/build_docs.yml specifies `make_target: "slimfast"` while this CircleCI config uses `make slim`. Verify these targets are equivalent or update one for consistency. The build_docs.yml comments already document that docs may still encounter Pydantic V2 formatting issues requiring sphinx configuration changes in the napari/docs repository (see napari/npe2#419).
.circleci/config.yml
Outdated
build-docs:
  jobs:
    - build-docs
# As much as possible, this file should be kept in sync with:
Fix line endings to use LF instead of CRLF.
The static analysis tool reports that this file uses incorrect line endings (CRLF instead of LF). This should be corrected for consistency with Unix/Linux conventions and better Git handling.
You can fix this by running:
#!/bin/bash
# Convert CRLF to LF line endings
dos2unix .circleci/config.yml
# Or using sed:
# sed -i 's/\r$//' .circleci/config.yml
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: wrong new line character: expected \n
(new-lines)
🤖 Prompt for AI Agents
In .circleci/config.yml line 1: the file uses CRLF line endings; convert
them to LF and commit the change. Fix by running a line-ending conversion (e.g.,
dos2unix or sed to remove trailing CR characters), verify the file now has LF
endings, set repository/git config (core.autocrlf=false or appropriate
.gitattributes) to prevent reintroduction, then add, commit, and push the
updated .circleci/config.yml.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/workflows/benchmarks.yml (2)
109-114: Critical: Pull request context unavailable in workflow_call.

When `GITHUB_EVENT_NAME == workflow_call`, the expressions `github.event.pull_request.*` will be undefined because `workflow_call` events don't automatically inherit the calling workflow's event context. The `benchmark_labels_trigger.yml` workflow doesn't pass PR context as inputs, so these variables will be empty:

- `github.event.pull_request.number` (line 110)
- `github.event.pull_request.base.sha` (line 111)
- `github.event.pull_request.base.label` (line 113)
- `HEAD_LABEL` from line 97 (`github.event.pull_request.head.label`)

This will cause benchmark execution to fail or produce incorrect baseline/contender refs.
🔎 Proposed fix: Define and use workflow inputs
Add input definitions to the `workflow_call` trigger:

 on:
   workflow_call:
+    inputs:
+      pr_number:
+        type: string
+        required: false
+      base_sha:
+        type: string
+        required: false
+      head_sha:
+        type: string
+        required: false
+      base_label:
+        type: string
+        required: false
+      head_label:
+        type: string
+        required: false

Then update the conditional logic to use inputs:

-          if [[ $GITHUB_EVENT_NAME == workflow_call ]]; then
-            EVENT_NAME="PR #${{ github.event.pull_request.number }}"
-            BASE_REF=${{ github.event.pull_request.base.sha }}
+          if [[ $GITHUB_EVENT_NAME == workflow_call ]] && [[ -n "${{ inputs.pr_number }}" ]]; then
+            EVENT_NAME="PR #${{ inputs.pr_number }}"
+            BASE_REF=${{ inputs.base_sha }}
             CONTENDER_REF=${GITHUB_SHA}
-            echo "Baseline: ${BASE_REF} (${{ github.event.pull_request.base.label }})"
-            echo "Contender: ${CONTENDER_REF} ($HEAD_LABEL)"
+            echo "Baseline: ${BASE_REF} (${{ inputs.base_label }})"
+            echo "Contender: ${CONTENDER_REF} (${{ inputs.head_label }})"

Also update `HEAD_LABEL` at line 97:

-  HEAD_LABEL: ${{ github.event.pull_request.head.label }}
+  HEAD_LABEL: ${{ inputs.head_label || github.event.pull_request.head.label }}
179-191: Critical: Same pull request context issue for PR comment generation.

The conditional at line 179 checks for `workflow_call` but then attempts to access `github.event.pull_request.number` at line 181, which will be undefined for the same reason explained in the previous comment.

Update this section to use workflow inputs as well:

-          if [[ $GITHUB_EVENT_NAME == workflow_call ]]; then
+          if [[ $GITHUB_EVENT_NAME == workflow_call ]] && [[ -n "${{ inputs.pr_number }}" ]]; then
-            echo "${{ github.event.pull_request.number }}" > .asv/results/pr_number
+            echo "${{ inputs.pr_number }}" > .asv/results/pr_number
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
.github/workflows/benchmark_labels_trigger.yml.github/workflows/benchmarks.yml.github/workflows/benchmarks_report.ymlsrc/napari/_pydantic_compat.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
- GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
🔇 Additional comments (2)
.github/workflows/benchmarks_report.yml (1)
14-14: LGTM! Correctly extends trigger coverage. The addition of "Benchmark Trigger by Label" to the workflow_run trigger list ensures that benchmark reports are generated regardless of which workflow invoked the benchmarks.
.github/workflows/benchmarks.yml (1)
26-27: LGTM! Good additions for benchmark robustness.

The `OMP_NUM_THREADS` and `ASV_OPTIONS` environment variables enhance benchmark stability by disabling OpenMP parallelism and configuring asv behavior (split reports, show errors, 1.5x performance threshold, 900s timeout).
jobs:
  trigger-benchmarks:
    if: github.event.label.name == 'run-benchmarks'
    uses: ./.github/workflows/benchmarks.yml
    secrets: inherit
Critical: PR context not available in called workflow.
When this workflow invokes benchmarks.yml via workflow_call, the called workflow won't have access to github.event.pull_request.* context. However, benchmarks.yml lines 109-114 and 179-191 expect this context when GITHUB_EVENT_NAME == workflow_call:
# benchmarks.yml line 110-114
EVENT_NAME="PR #${{ github.event.pull_request.number }}"
BASE_REF=${{ github.event.pull_request.base.sha }}
CONTENDER_REF=${GITHUB_SHA}These values will be empty/undefined, causing the benchmark run to fail.
🔎 Proposed fix: Pass PR context as workflow inputs
Update benchmark_labels_trigger.yml to pass PR context:
jobs:
trigger-benchmarks:
if: github.event.label.name == 'run-benchmarks'
uses: ./.github/workflows/benchmarks.yml
+ with:
+ pr_number: ${{ github.event.pull_request.number }}
+ base_sha: ${{ github.event.pull_request.base.sha }}
+ head_sha: ${{ github.event.pull_request.head.sha }}
+ base_label: ${{ github.event.pull_request.base.label }}
+ head_label: ${{ github.event.pull_request.head.label }}
secrets: inheritThen update benchmarks.yml to accept and use these inputs:
on:
workflow_call:
+ inputs:
+ pr_number:
+ type: string
+ base_sha:
+ type: string
+ head_sha:
+ type: string
+ base_label:
+ type: string
+ head_label:
+ type: stringAnd update the conditional logic in benchmarks.yml to use these inputs instead of github.event.pull_request.*.
🤖 Prompt for AI Agents
In .github/workflows/benchmark_labels_trigger.yml around lines 11-15 the called
workflow benchmarks.yml will not have github.event.pull_request.* available when
invoked via workflow_call; update this file to forward PR context as explicit
inputs (PR number, base SHA, contender/head SHA, and any event name) to the
called workflow, and then update benchmarks.yml to declare and consume those
inputs (using them when GITHUB_EVENT_NAME == workflow_call) with fallbacks to
github.event.pull_request.* when invoked from a PR event; ensure conditional
logic that builds EVENT_NAME, BASE_REF and CONTENDER_REF uses the passed inputs
instead of directly reading github.event.pull_request.*
This commit migrates napari from Pydantic V1 to V2 native API, enabling Python 3.14 support.

## Key Changes

### Validator Migration (V1 → V2 patterns)

1. **`@validator` → `@field_validator`**: All field validators updated with `mode='before'` for pre-processing behavior
2. **`@root_validator` → `@model_validator`**: Root validators converted with proper `mode='before'` or `mode='after'` depending on when validation should occur
3. **`PydanticCustomError`**: Custom validation errors now use V2's error format
4. **`__get_pydantic_core_schema__`**: Custom types now implement this classmethod instead of `__get_validators__`

### Bug Fixes for V2 Compatibility

1. **Union Type Equality (`src/napari/utils/misc.py`)**:
   - `pick_equality_operator()` now handles Union types like `Array | None`
   - Uses `get_origin()` and `get_args()` to unwrap and check type arguments
2. **ColorValue JSON Serialization (`src/napari/utils/color.py`)**:
   - Added `serialization` parameter to `__get_pydantic_core_schema__`
3. **ColorManager Validation (`src/napari/layers/utils/color_manager.py`)**:
   - Fixed validation to work with V2's validation model
4. **TextManager Translation Field (`src/napari/layers/utils/text_manager.py`)**:
   - Changed to use `Field(default_factory=...)` for array defaults

### Deprecated Method Replacements

- `.dict()` → `.model_dump()`
- `.json()` → `.model_dump_json()`
- `.copy()` → `.model_copy()`
- `__fields__` → `model_fields`
- `update_forward_refs()` → `model_rebuild()`

### Dependencies

- Added `pydantic-settings>=2.0.0` (BaseSettings moved to separate package)
- Added `pydantic-extra-types>=2.0.0` (Color type moved to separate package)
- npe2: Conditional dependency for Python 3.14 compatibility
- triangle: Using git version for Python 3.14 compatibility

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
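The validator and serialization changes listed above follow the standard Pydantic V2 migration patterns; a minimal before/after sketch (the model and field names are illustrative, not napari's actual classes):

```python
from pydantic import BaseModel, field_validator, model_validator

class PointSize(BaseModel):
    # Pydantic V1: @validator('size', pre=True)
    # Pydantic V2: @field_validator with mode='before'
    size: float

    @field_validator('size', mode='before')
    @classmethod
    def _coerce_size(cls, v):
        # mode='before' runs on the raw input, so string input still works
        return float(v)

    # Pydantic V1: @root_validator
    # Pydantic V2: @model_validator; mode='after' sees the built model
    @model_validator(mode='after')
    def _check_positive(self):
        if self.size <= 0:
            raise ValueError('size must be positive')
        return self

p = PointSize(size='3.5')
print(p.model_dump())       # V1 equivalent: p.dict()
print(p.model_dump_json())  # V1 equivalent: p.json()
```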
Force-pushed 4321361 to fdfeca9 (Compare)
Actionable comments posted: 4
♻️ Duplicate comments (1)
pyproject.toml (1)
211-211: Same concern about git-based triangle in docs dependencies.

The git-based `triangle` dependency is consistently applied across optional, docs, and dependency-groups sections. The same considerations from the optional dependencies apply here: verify compatibility across all Python versions 3.10-3.14 and consider whether this should be conditional.

Also applies to: 303-303
🧹 Nitpick comments (4)
pyproject.toml (1)
132-132: Consider making the triangle dependency conditional for Python 3.14 only.

The PyPI version of triangle lacks wheels for Python 3.14 (only supports 3.10–3.13), necessitating the git source. However, the dependency is currently unconditional, forcing all Python versions to install from source. Since your project already uses conditional syntax elsewhere (e.g., `npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'`), apply the same pattern here:

"triangle @ git+https://github.com/drufat/triangle.git; python_version >= '3.14'"

This allows Python 3.10–3.13 users to benefit from pre-built wheels, while ensuring 3.14 users get the compatible source version.
src/napari/settings/_fields.py (1)
220-235: Version uses `any_schema()`, which may be too permissive.

Using `core_schema.any_schema()` allows any input type to pass through to the validator. This works because `Version.validate` calls `_from_obj`, which handles multiple input types (str, bytes, dict, tuple, list, Version). However, consider whether `str_schema()` with a union or a more specific schema would provide better error messages for invalid inputs.

src/napari/components/viewer_model.py (1)

360-387: Serialization methods with backward compatibility aliases.

The implementation provides:

- `model_dump_json()` and `model_dump()` as the V2 primary methods
- `json()` and `dict()` as deprecated aliases for backward compatibility

Consider adding deprecation warnings to the compatibility aliases to guide users toward the V2 API.
🔎 Proposed enhancement with deprecation warnings
```diff
     # Compatibility aliases for Pydantic V1 API
     def json(self, **kwargs):
         """Deprecated: Use model_dump_json() instead."""
+        import warnings
+        warnings.warn(
+            "json() is deprecated, use model_dump_json() instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         return self.model_dump_json(**kwargs)

     def dict(self, **kwargs):
         """Deprecated: Use model_dump() instead."""
+        import warnings
+        warnings.warn(
+            "dict() is deprecated, use model_dump() instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         return self.model_dump(**kwargs)
```

src/napari/_pydantic_compat.py (1)
70-71: Deprecated aliases should include deprecation warnings.

The `validator` and `root_validator` aliases are marked as deprecated in comments but don't emit warnings. Adding runtime warnings would help users migrate.

🔎 Proposed enhancement with deprecation warnings

```diff
-# These are deprecated and will be removed. Use the V2 equivalents.
-validator = field_validator  # deprecated: use @field_validator
-root_validator = model_validator  # deprecated: use @model_validator
+import warnings
+import functools
+
+def _deprecated_alias(new_func, old_name, new_name):
+    @functools.wraps(new_func)
+    def wrapper(*args, **kwargs):
+        warnings.warn(
+            f"{old_name} is deprecated, use {new_name} instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        return new_func(*args, **kwargs)
+    return wrapper
+
+# These are deprecated and will be removed. Use the V2 equivalents.
+validator = _deprecated_alias(field_validator, 'validator', 'field_validator')
+root_validator = _deprecated_alias(model_validator, 'root_validator', 'model_validator')
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
.circleci/config.ymlpyproject.tomlsrc/napari/_pydantic_compat.pysrc/napari/_qt/dialogs/preferences_dialog.pysrc/napari/components/dims.pysrc/napari/components/viewer_model.pysrc/napari/layers/utils/color_manager.pysrc/napari/layers/utils/text_manager.pysrc/napari/settings/_application.pysrc/napari/settings/_base.pysrc/napari/settings/_fields.pysrc/napari/utils/color.pysrc/napari/utils/colormaps/colormap_utils.pysrc/napari/utils/misc.pysrc/napari/utils/theme.py
🧰 Additional context used
🧬 Code graph analysis (8)
src/napari/utils/colormaps/colormap_utils.py (2)
- src/napari/components/viewer_model.py (1): `model_dump` (370-378)
- src/napari/settings/_base.py (1): `model_dump` (324-358)

src/napari/layers/utils/text_manager.py (2)
- src/napari/components/viewer_model.py (1): `model_dump` (370-378)
- src/napari/settings/_base.py (1): `model_dump` (324-358)

src/napari/_qt/dialogs/preferences_dialog.py (3)
- src/napari/components/viewer_model.py (1): `model_dump` (370-378)
- src/napari/settings/_base.py (1): `model_dump` (324-358)
- src/napari/_vendor/qt_json_builder/qt_jsonschema_form/form.py (1): `WidgetBuilder` (19-139)

src/napari/components/viewer_model.py (2)
- src/napari/components/_viewer_mouse_bindings.py (2): `dims_scroll` (8-22), `double_click_to_zoom` (25-44)
- src/napari/settings/_base.py (1): `model_dump` (324-358)

src/napari/settings/_fields.py (1)
- src/napari/utils/theme.py (1): `available_themes` (307-315)

src/napari/layers/utils/color_manager.py (1)
- src/napari/_pydantic_compat.py (2): `_validate` (253-267), `_validate` (308-320)

src/napari/utils/theme.py (2)
- src/napari/components/viewer_model.py (1): `model_dump` (370-378)
- src/napari/settings/_base.py (1): `model_dump` (324-358)

src/napari/settings/_base.py (3)
- src/napari/utils/events/evented_model.py (1): `EventedModel` (165-449)
- src/napari/_pydantic_compat.py (1): `SettingsError` (155-156)
- src/napari/utils/misc.py (1): `deep_update` (653-661)
🪛 GitHub Actions: PR Test
pyproject.toml
[error] 1-1: Build failed: invalid pyproject.toml config: project.dependencies[13] must be pep508. Offending value: 'npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'' . Command: python -m build --outdir dist/.
🪛 Ruff (0.14.10)
src/napari/_pydantic_compat.py
255-255: Avoid specifying long messages outside the exception class
(TRY003)
258-258: Avoid specifying long messages outside the exception class
(TRY003)
260-260: Avoid specifying long messages outside the exception class
(TRY003)
262-262: Avoid specifying long messages outside the exception class
(TRY003)
264-264: Avoid specifying long messages outside the exception class
(TRY003)
266-266: Avoid specifying long messages outside the exception class
(TRY003)
310-310: Avoid specifying long messages outside the exception class
(TRY003)
313-313: Avoid specifying long messages outside the exception class
(TRY003)
315-315: Avoid specifying long messages outside the exception class
(TRY003)
317-317: Avoid specifying long messages outside the exception class
(TRY003)
319-319: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (42)
src/napari/utils/misc.py (2)
465-466: LGTM: Local imports for Union type support.

The local imports are appropriate for avoiding module-level overhead, especially since these helpers are only needed for the Union type handling path.

479-489: Good solution for unwrapping Optional types to find the correct operator.

The Union handling correctly addresses the ambiguous-array truth-value errors by unwrapping `Array | None` patterns and recursively finding the appropriate equality operator for the underlying type. The logic properly handles both `typing.Union` and `types.UnionType` (the Python 3.10+ `|` syntax) and correctly skips `NoneType`.

The implementation returns the first non-default operator found when processing Union type arguments. While theoretically a Union could contain multiple non-None array types with different operators, this pattern does not appear in the codebase; actual unions are limited to cases like `Union[Array, None]` or simple scalar unions (e.g., `Union[str, Path]`, `Union[int, float]`). The recursive approach is well-suited to the intended use case of unwrapping Optional types in Pydantic field annotations.
.circleci/config.yml (1)

48-49: Reasonable workaround for npe2 incompatibility.

The switch to the `slim` build target to skip npe2 docs rendering is a pragmatic temporary solution until npe2 completes its Pydantic V2 migration. The comment clearly explains the reason.

pyproject.toml (2)
27-27: LGTM!

The Python 3.14 classifier correctly declares support for the new Python version, aligning with the PR's migration objectives.

65-66: Both dependencies are correctly added and actively used in the codebase.

Verification confirms:

- `pydantic-settings>=2.0.0`: used for importing `BaseSettings` and `SettingsConfigDict` in the Pydantic V2 migration (src/napari/_pydantic_compat.py:45)
- `pydantic-extra-types>=2.0.0`: used for the `Color` type in src/napari/utils/theme.py, which defines multiple color-typed fields for the theme configuration

Both minimum versions are compatible with `pydantic>=2.2.0`.

src/napari/utils/colormaps/colormap_utils.py (1)
246-251: LGTM!

The migration from `.dict()` to `.model_dump()` is correct for Pydantic V2 compatibility. The function logic remains unchanged, and this aligns with the consistent usage of `model_dump()` across the codebase (e.g., in viewer_model.py and _base.py).

src/napari/settings/_application.py (2)
9-9: LGTM!

Import update correctly includes `ConfigDict` and `field_validator` from the Pydantic compatibility layer.

304-315: LGTM!

The validator migration to Pydantic V2 is correct:

- `@field_validator('window_state')` replaces `@validator('window_state', allow_reuse=True)`
- the `@classmethod` decorator is properly added, as required by Pydantic V2
- `model_config = ConfigDict(use_enum_values=False)` replaces the inner `Config` class pattern

The validator logic itself remains unchanged.
src/napari/utils/color.py (2)
27-39: LGTM!

The Pydantic V2 core schema implementation for `ColorValue` is correct. The serialization transformer properly converts numpy arrays to lists for JSON compatibility, which addresses the JSON serialization support mentioned in the PR objectives.

102-110: Consider adding serialization for consistency with `ColorValue`.

`ColorValue` includes a serialization transformer for JSON compatibility, but `ColorArray` does not. If `ColorArray` fields are serialized (e.g., via `model_dump(mode='json')`), numpy arrays may not serialize correctly.

Was this omission intentional, or should serialization be added for consistency?
🔎 Suggested serialization addition
```diff
     @classmethod
     def __get_pydantic_core_schema__(
         cls,
         source_type: Any,
         handler: GetCoreSchemaHandler,
     ) -> CoreSchema:
         return core_schema.no_info_before_validator_function(
             cls.validate,
             core_schema.any_schema(),
+            serialization=core_schema.plain_serializer_function_ser_schema(
+                lambda x: x.tolist() if isinstance(x, np.ndarray) else list(x),
+                info_arg=False,
+            ),
         )
```
90-90: LGTM!The migration from
PositiveFloattoAnnotated[float, Field(gt=0)]is the correct Pydantic V2 pattern and maintains the same validation semantics.
95-95: LGTM!Using
Field(default_factory=lambda: np.array(0.0))is the correct approach for mutable defaults in Pydantic V2. This fixes the TextManager translation default as mentioned in the PR objectives.
331-331: LGTM!Migration from
.dict()to.model_dump()is correct for Pydantic V2 compatibility.
367-367: LGTM!Migration from
.copy()to.model_copy()is correct for Pydantic V2 compatibility.
378-380: LGTM!The validator migration is correct:
@field_validator('blending', mode='before')replaces the previous@validatorpattern@classmethodis properly added as required by Pydantic V2The blending validation logic remains unchanged.
src/napari/utils/theme.py (5)
86-97: LGTM!

The
`syntax_style` validator migration is correct, with proper `@field_validator` and `@classmethod` decorators.
99-108: LGTM!

The
`font_size` validator migration is correct.
110-118: LGTM!

The
`to_rgb_dict` method correctly uses `super().model_dump()` instead of the deprecated `dict()` method, and the docstring is appropriately updated.
269-269: LGTM!

Migration from
`.copy()` to `.model_copy()` is correct for Pydantic V2.
417-436: Good compatibility handling for npe2.

The
`_model_dump` helper function correctly handles the case where npe2 models may still be using Pydantic V1 (with `.dict()`) while napari has migrated to V2 (with `.model_dump()`). This is important since the PR notes that npe2 still depends on Pydantic V1.

src/napari/_qt/dialogs/preferences_dialog.py (4)
17-17: LGTM!

Import updated to use `FieldInfo` instead of `ModelField`, which is the correct Pydantic V2 type for field metadata.

96-111: LGTM!

The dialog rebuilding logic is correctly updated for Pydantic V2:

- Uses `model_dump(exclude={'schema_version'})` instead of custom dict handling
- Iterates over `model_fields.items()` instead of `__fields__.values()`
- Properly extracts the annotation from `field_info.annotation`
- Correctly checks if the annotation is a BaseModel subclass

115-136: LGTM!

The `_add_page` method signature and implementation are correctly updated to work with the new Pydantic V2 field introspection pattern.

176-232: LGTM!

The `_get_page_dict` method is correctly updated:

- Uses `model_json_schema()` instead of `schema()` for JSON schema generation
- Properly iterates over `model_fields` for nested model introspection
- Uses `model_dump(mode='json')` to ensure enums serialize as their string values
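The introspection flow these comments describe can be sketched with a toy settings model (hypothetical field names, not napari's actual settings; requires pydantic v2):

```python
from pydantic import BaseModel


class Appearance(BaseModel):
    theme: str = 'dark'


class Settings(BaseModel):
    schema_version: str = '0.5.0'
    appearance: Appearance = Appearance()


# V1: Settings.__fields__.values()  ->  V2: Settings.model_fields.items()
nested_sections = [
    name
    for name, field_info in Settings.model_fields.items()
    if isinstance(field_info.annotation, type)
    and issubclass(field_info.annotation, BaseModel)
]

# V1: .dict()  ->  V2: .model_dump(); exclude works the same way
data = Settings().model_dump(exclude={'schema_version'})

# V1: .schema()  ->  V2: .model_json_schema()
schema = Settings.model_json_schema()
```

Each nested `BaseModel` field found this way becomes one preferences page, and the JSON schema drives the widget builder.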
53-62: LGTM!The Pydantic V2 core schema implementation for
ColorPropertiesis correct, usingno_info_before_validator_functionto wrap the_validatemethod.
65-87: LGTM!The validation method is correctly updated:
- Renamed from
validate_typeto_validate(private convention)- Uses
PydanticCustomErrorinstead ofValueErrorfor Pydantic V2 compatibility- Error messages are clear and descriptive
165-186: LGTM!The field validators are correctly migrated with
@field_validatorand@classmethoddecorators. The validation logic remains unchanged.
188-215: LGTM!The model validator migration is well-implemented:
@model_validator(mode='after')correctly replaces@root_validator- Returns
Selfas required by Pydantic V2's after-mode validators- Uses
object.__setattr__to bypassvalidate_assignmentand avoid infinite recursion- The color validation logic is preserved through the v2 helper functions
src/napari/settings/_fields.py (2)
6-8: Pydantic V2 imports look correct.The imports for
GetCoreSchemaHandler,GetJsonSchemaHandler,JsonSchemaValue,CoreSchema, andcore_schemaare the proper V2 APIs for custom type definitions.
24-42: Theme schema implementation is correct for Pydantic V2.The use of
no_info_before_validator_functionto run validation before the string schema, combined with the JSON schema hook that dynamically populates the enum with available themes, follows the recommended V2 pattern for custom types.src/napari/components/dims.py (2)
111-122: Field validator migration is correct.The conversion from V1
@validatorto V2@field_validatorwithmode='before'and@classmethoddecorator is properly done. The tuple conversion for multiple fields is preserved.
155-249: Model validator migration usingobject.__setattr__is appropriate.The migration from
root_validatortomodel_validator(mode='after')correctly:
- Returns
Selftype and returnsselfat the end- Uses
object.__setattr__to bypassvalidate_assignmentand avoid infinite recursion- Operates on instance attributes directly rather than a values dict
This is a valid and recommended pattern for V2 when you need to modify multiple fields atomically without re-triggering validation.
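A reduced sketch of this pattern (illustrative fields, not napari's actual `Dims` model; requires pydantic v2):

```python
from pydantic import BaseModel, ConfigDict, model_validator


class RangeModel(BaseModel):
    model_config = ConfigDict(validate_assignment=True)

    ndim: int = 2
    ranges: tuple = ()

    @model_validator(mode='after')
    def _pad_ranges(self) -> 'RangeModel':
        # A plain `self.ranges = ...` would go through pydantic's
        # __setattr__, re-validating under validate_assignment=True and
        # re-entering this validator; object.__setattr__ bypasses that
        # machinery and writes the attribute directly.
        if len(self.ranges) != self.ndim:
            object.__setattr__(
                self, 'ranges', tuple((0, 1) for _ in range(self.ndim))
            )
        return self


r = RangeModel(ndim=3)  # after-validator pads ranges to match ndim
```

Returning `self` at the end is required in V2; mode='after' validators operate on the constructed instance rather than a values dict.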
src/napari/components/viewer_model.py (4)
112-113: Good workaround for the `dict` method name conflict.

Using `builtins.dict` aliased to `Dict` prevents the type hint from conflicting with the `dict()` method defined on the class. This is a clean solution.
186-195: Correct migration from `allow_mutation=False` to `frozen=True`.

The `frozen=True` parameter in Pydantic V2 correctly replaces the V1 `allow_mutation=False` behavior, preventing these fields from being reassigned after initialization.
345-358: Theme validator correctly migrated to `field_validator`.

The `@field_validator('theme')` with `@classmethod` follows the V2 pattern. The validation logic remains unchanged and properly raises `ValueError` for invalid themes.
292-295: Type ignore comments are appropriate for mixin-provided attributes.

The `# type: ignore[attr-defined]` comments are necessary because the mouse callback lists (mouse_wheel_callbacks, etc.) are provided by the `MousemapProvider` mixin, and the type checker cannot see them on `self` during static analysis.
132-149:display_errorsfunction handles both V1 and V2 error formats well.The implementation gracefully handles both dict-style (V1) and object-style error formats by checking for the
getmethod, ensuring compatibility during the migration period.
214-267: ConstrainedInt stub provides V2 compatibility for V1-style constrained types.The implementation of
__get_pydantic_core_schema__and validation logic is correct. The static analysis hints about long error messages (TRY003) are acceptable for this compatibility stub since these are internal validation messages.src/napari/settings/_base.py (4)
40-88: Custom_exclude_defaults_eventedhandles Pydantic V2 limitation correctly.This function addresses a real issue where Pydantic V2's
exclude_defaultscompares private attributes (__pydantic_private__), which always differ between instances ofEventedModel. The manual comparison using only publicmodel_fieldsis the correct approach.
180-210: Initialization flow correctly merges settings from multiple sources.The precedence order (environment > file > explicit values) is correctly implemented. Using
180-210: Initialization flow correctly merges settings from multiple sources.

The precedence order (environment > file > explicit values) is correctly implemented. Using `copy.deepcopy` before mutation prevents issues when validation creates temporary instances.

One minor observation: the `copy_module` import could be moved to the top of the file for consistency.
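The layered merge described here (later sources overriding earlier ones, dict by dict) can be sketched with a minimal recursive merge. This is a hypothetical stand-in for napari's `deep_update` helper, not its actual implementation:

```python
def deep_update(base: dict, updates: dict) -> dict:
    """Recursively merge `updates` into a copy of `base`.

    Nested dicts are merged key by key; any other value in `updates`
    replaces the corresponding value in `base`.
    """
    merged = dict(base)
    for key, value in updates.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_update(merged[key], value)
        else:
            merged[key] = value
    return merged


defaults = {'appearance': {'theme': 'dark', 'font_size': 12}}
file_settings = {'appearance': {'font_size': 14}}
env_settings = {'appearance': {'theme': 'light'}}

# apply sources in increasing precedence: defaults < file < environment
resolved = deep_update(deep_update(defaults, file_settings), env_settings)
```

Because only the keys present in a higher-precedence source are overridden, a file that sets just `font_size` does not clobber the default `theme`, and the environment can still override `theme` on top.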
523-533: Validation in `config_file_settings_source` creates a temporary instance with `config_path=None`.

This correctly avoids re-reading the config file during validation. The strict mode check properly re-raises validation errors when enabled.
597-617: `_restore_config_data` now guards against missing keys.

The added checks on lines 606-610 properly handle cases where keys might be excluded (e.g., by `exclude_defaults`) and restore from defaults when appropriate. This is a good defensive improvement.
| "npe2>=0.7.7; python_version < '3.14'", | ||
| "npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'", |
Fix invalid PEP 508 syntax causing build failure.
The conditional npe2 dependency using a git URL with an environment marker is invalid PEP 508 syntax and causes the build to fail. Direct URL references cannot be combined with environment markers in the same dependency specification.
The pipeline error confirms this:
project.dependencies[13] must be pep508. Offending value: 'npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14''
Possible solutions:
1. Use dependency groups (if Python 3.14 support can be optional initially):
   - Move the git-based npe2 to a separate dependency group
   - Keep standard npe2 in core dependencies with a `python_version < '3.14'` marker
2. Remove the environment marker from the git URL and handle Python 3.14 compatibility through documentation or constraints files
3. Wait for npe2 to publish a Pydantic V2-compatible release to PyPI and use standard version markers
4. Use PEP 508 extras or conditional imports if the feature is optional
🔎 Example fix using dependency groups approach
dependencies = [
...
- "npe2>=0.7.7; python_version < '3.14'",
- "npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'",
+ "npe2>=0.7.7",
...
]

Then add a new dependency group:
[dependency-groups]
py314 = [
"npe2 @ git+https://github.com/brisvag/npe2.git@py314",
]

Or add to optional dependencies:
[project.optional-dependencies]
py314 = [
"npe2 @ git+https://github.com/brisvag/npe2.git@py314",
]

Note: This approach would require users on Python 3.14 to explicitly install the extra until npe2 releases a compatible version.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "npe2>=0.7.7; python_version < '3.14'", | |
| "npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'", | |
| "npe2>=0.7.7", |
🤖 Prompt for AI Agents
In pyproject.toml around lines 56 to 57, the dependency line 'npe2 @
git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'' uses an
invalid PEP 508 combination of a direct URL and an environment marker; fix by
removing the environment marker from the direct URL and moving the git-based
npe2 into a separate dependency group or optional dependency (e.g., add a new
[dependency-groups] or [project.optional-dependencies] entry for Python 3.14
that contains "npe2 @ git+https://github.com/brisvag/npe2.git@py314"), and keep
the existing "npe2>=0.7.7; python_version < '3.14'" in core dependencies.
| try: | ||
| from pydantic_extra_types.color import Color | ||
| except ImportError: | ||
| # Fallback if pydantic-extra-types not installed | ||
| Color = None # type: ignore[misc, assignment] |
Color type fallback to None may cause runtime errors.
If pydantic-extra-types is not installed and code attempts to use Color, it will be None, leading to confusing TypeError exceptions. Consider raising a more informative error or using a stub class.
🔎 Proposed improvement for better error handling
# Color type moved to pydantic-extra-types
try:
from pydantic_extra_types.color import Color
except ImportError:
# Fallback if pydantic-extra-types not installed
- Color = None # type: ignore[misc, assignment]
+ class Color: # type: ignore[no-redef]
+ """Placeholder for Color when pydantic-extra-types is not installed."""
+ def __init__(self, *args, **kwargs):
+ raise ImportError(
+ "Color requires pydantic-extra-types. "
+ "Install it with: pip install pydantic-extra-types"
+ )🤖 Prompt for AI Agents
In src/napari/_pydantic_compat.py around lines 48-52, the fallback sets Color =
None which causes unclear TypeError later; replace the None fallback with a
lightweight stub class named Color that raises a clear ImportError (or
RuntimeError) on instantiation with a message instructing to install
pydantic-extra-types (and optionally how to pin compatible versions), and add a
typing-ignore comment for linters if needed so the module still imports cleanly
but any runtime attempt to use Color fails with the informative error instead of
a confusing NoneType error.
| def _validate_cycle_mode_v2(model: ColorManager) -> tuple[np.ndarray, dict]: | ||
| """V2-compatible version of _validate_cycle_mode that works with model instances.""" | ||
| # Store original values before they get modified | ||
| original_values = { | ||
| 'color_mode': model.color_mode, | ||
| 'color_properties': model.color_properties, | ||
| 'categorical_colormap': model.categorical_colormap, | ||
| 'colors': model.colors, | ||
| 'current_color': model.current_color, | ||
| 'continuous_colormap': model.continuous_colormap, | ||
| 'contrast_limits': model.contrast_limits, | ||
| } | ||
| # Create a copy for mutation | ||
| values = dict(original_values) | ||
| colors, updated_values = _validate_cycle_mode(values) | ||
| # Return only the fields that changed | ||
| updated = {} | ||
| for key in [ | ||
| 'color_properties', | ||
| 'categorical_colormap', | ||
| 'colors', | ||
| 'current_color', | ||
| ]: | ||
| if key in updated_values and updated_values[ | ||
| key | ||
| ] is not original_values.get(key): | ||
| updated[key] = updated_values[key] | ||
| return colors, updated | ||
|
|
||
|
|
||
| def _validate_colormap_mode_v2(model: ColorManager) -> tuple[np.ndarray, dict]: | ||
| """V2-compatible version of _validate_colormap_mode that works with model instances.""" | ||
| # Store original values before they get modified | ||
| original_values = { | ||
| 'color_mode': model.color_mode, | ||
| 'color_properties': model.color_properties, | ||
| 'categorical_colormap': model.categorical_colormap, | ||
| 'colors': model.colors, | ||
| 'current_color': model.current_color, | ||
| 'continuous_colormap': model.continuous_colormap, | ||
| 'contrast_limits': model.contrast_limits, | ||
| } | ||
| # Create a copy for mutation | ||
| values = dict(original_values) | ||
| colors, updated_values = _validate_colormap_mode(values) | ||
| # Return only the fields that changed | ||
| updated = {} | ||
| for key in [ | ||
| 'color_properties', | ||
| 'colors', | ||
| 'current_color', | ||
| 'contrast_limits', | ||
| ]: | ||
| if key in updated_values and updated_values[ | ||
| key | ||
| ] is not original_values.get(key): | ||
| updated[key] = updated_values[key] | ||
| return colors, updated |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for _validate_cycle_mode and _validate_colormap_mode implementations
rg -n -B 2 -A 30 'def _validate_cycle_mode\(' src/napari/layers/utils/color_manager.py
echo "---"
rg -n -B 2 -A 30 'def _validate_colormap_mode\(' src/napari/layers/utils/color_manager.pyRepository: derekthirstrup/napari
Length of output: 68
🏁 Script executed:
#!/bin/bash
# Search for the validation functions more broadly
rg '_validate_cycle_mode|_validate_colormap_mode' src/napari/layers/utils/color_manager.py -n -B 2 -A 20Repository: derekthirstrup/napari
Length of output: 4891
🏁 Script executed:
#!/bin/bash
# Find and examine the actual implementations in color_manager_utils
find . -name "color_manager_utils.py" -type fRepository: derekthirstrup/napari
Length of output: 113
🏁 Script executed:
#!/bin/bash
# Read the color_manager_utils.py file to find the implementations
wc -l ./src/napari/layers/utils/color_manager_utils.pyRepository: derekthirstrup/napari
Length of output: 117
🏁 Script executed:
#!/bin/bash
# Read the entire color_manager_utils.py file
cat -n ./src/napari/layers/utils/color_manager_utils.pyRepository: derekthirstrup/napari
Length of output: 5762
🏁 Script executed:
#!/bin/bash
# Let me verify how colors are handled - check if colors is actually part of the returned values dict
# Look at the return statements more carefully
rg -n 'return colors, values' ./src/napari/layers/utils/color_manager_utils.py
rg -n 'return colors, updated' ./src/napari/layers/utils/color_manager.pyRepository: derekthirstrup/napari
Length of output: 186
🏁 Script executed:
#!/bin/bash
# Check the ColorProperties class definition to understand if it's mutable
rg -n -B 5 -A 15 'class ColorProperties' src/napari/layers/utils/color_manager.pyRepository: derekthirstrup/napari
Length of output: 659
🏁 Script executed:
#!/bin/bash
# Let me check if any numpy arrays or objects are mutated in place within the validation functions
# Look for assignments that modify the array itself rather than creating new ones
rg -n '\[.*\]\s*=' ./src/napari/layers/utils/color_manager_utils.py
rg -n '\.append|\.extend|\.insert|\.remove' ./src/napari/layers/utils/color_manager_utils.pyRepository: derekthirstrup/napari
Length of output: 377
🏁 Script executed:
#!/bin/bash
# Let me trace through to understand the flow better
# Check if ColorProperties or other objects are being modified in place
# Look at how color_properties is used in the validation functions
rg -n 'color_properties' ./src/napari/layers/utils/color_manager_utils.py -A 2Repository: derekthirstrup/napari
Length of output: 1229
🏁 Script executed:
#!/bin/bash
# Double-check: does _validate_cycle_mode or _validate_colormap_mode ever set values['colors']?
rg "values\['colors'\]" ./src/napari/layers/utils/color_manager_utils.py
# Also check if they modify any passed-in objects in place
rg "\.append|\.pop|\.clear|__setitem__" ./src/napari/layers/utils/color_manager_utils.pyRepository: derekthirstrup/napari
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Let me verify the actual contents more carefully
# Check what keys are actually modified in the values dict
cat -n ./src/napari/layers/utils/color_manager_utils.py | grep -A 30 "def _validate_cycle_mode"Repository: derekthirstrup/napari
Length of output: 1228
🏁 Script executed:
#!/bin/bash
# Check the actual flow - does cmap.map() return a new array or modify in place?
# Let me also check if there are any imports that might give hints
rg -n "from.*colormap|import.*colormap" ./src/napari/layers/utils/color_manager_utils.py -A 2Repository: derekthirstrup/napari
Length of output: 159
🏁 Script executed:
#!/bin/bash
# Let me verify how the comparison actually works by examining a specific case
# Check if 'colors' is ever actually set in the values dict by the validation functions
cat -n ./src/napari/layers/utils/color_manager_utils.py | tail -30
echo "---"
# Verify the flow: colors is returned separately, never in values dict
rg "values\[" ./src/napari/layers/utils/color_manager_utils.py | grep -v "values\['color_properties'\]" | grep -v "values\['continuous_colormap'\]" | grep -v "values\['categorical_colormap'\]" | grep -v "values\['contrast_limits'\]" | grep -v "values\['current_color'\]"Repository: derekthirstrup/napari
Length of output: 1240
🏁 Script executed:
#!/bin/bash
# Check all keys that are explicitly set in the validation functions
rg "values\[" ./src/napari/layers/utils/color_manager_utils.py | grep -v "values\['" | head -20
echo "---"
rg "values\['" ./src/napari/layers/utils/color_manager_utils.pyRepository: derekthirstrup/napari
Length of output: 748
🏁 Script executed:
#!/bin/bash
# Let me verify the actual behavior by checking what gets returned and set
# The key insight: colors is returned separately, not in the values dict
# Let's verify the flow one more time to be absolutely sure
python3 << 'EOF'
# Simulate what happens in the code
original_values = {
'color_properties': 'obj1',
'colors': 'obj2',
'categorical_colormap': 'obj3',
'current_color': 'obj4',
}
# This simulates what _validate_cycle_mode does - it takes a copy
values = dict(original_values)
# In _validate_cycle_mode, it would do:
values['current_color'] = 'new_color'
values['categorical_colormap'] = 'obj3' # reassigned but same object
# Now check the comparison logic from _validate_cycle_mode_v2
updated = {}
for key in ['color_properties', 'categorical_colormap', 'colors', 'current_color']:
if key in values and values[key] is not original_values.get(key):
updated[key] = values[key]
print(f"{key}: {values[key]} is not {original_values.get(key)} -> included")
else:
print(f"{key}: No change detected")
print(f"\nUpdated dict: {updated}")
EOFRepository: derekthirstrup/napari
Length of output: 264
🏁 Script executed:
#!/bin/bash
# This is critical - let me verify if 'colors' is ever actually set in the values dict
# by either _validate_cycle_mode or _validate_colormap_mode
echo "=== Checking _validate_colormap_mode ==="
cat -n ./src/napari/layers/utils/color_manager_utils.py | sed -n '76,118p'Repository: derekthirstrup/napari
Length of output: 1773
The identity comparison approach is actually sound, but the change detection checks for 'colors' which the validation functions never set in the values dict.
The validation functions return colors as a separate value from the modified values dict. They only modify specific keys within values (like current_color, contrast_limits, categorical_colormap), but never colors. Since colors is returned separately and handled directly in the calling code, the redundant check in the loop is ineffectual but not a functional problem.
The identity comparison itself will work correctly because the validation functions don't mutate objects in place—they create new numpy arrays and reassign dict keys with either new values or the same object references.
🤖 Prompt for AI Agents
In src/napari/layers/utils/color_manager.py around lines 597 to 654, the
change-detection loops include the key 'colors' but the validation helpers never
set 'colors' in the returned updated_values (they return colors separately),
making that check redundant; remove 'colors' from both loops (the
_validate_cycle_mode_v2 and _validate_colormap_mode_v2 change-detection lists)
so the code only compares keys the validators actually mutate, and optionally
add a brief comment noting that 'colors' is returned separately by the
validators.
```python
def _load_env_settings(self) -> tuple[dict[str, Any], dict[str, Any]]:
    """Load settings from environment variables.

    Supports both flat (NAPARI_FIELD) and nested (NAPARI_SECTION_FIELD) paths,
    as well as custom env names defined in field json_schema_extra.

    Returns
    -------
    tuple[dict, dict]
        A tuple of (parsed_values, raw_values). parsed_values are for model
        initialization, raw_values are for caching (to identify env-provided settings).
    """
    import json

    parsed: dict[str, Any] = {}
    raw: dict[str, Any] = {}
    env_prefix = getattr(type(self), '_env_prefix', 'NAPARI_').upper()

    env_vars: Mapping[str, str | None] = {
        k.upper(): v for k, v in os.environ.items()
    }

    # Check for direct field mappings (flat access)
    # Use type(self).model_fields to avoid deprecation warning in V2.11+
    for field_name, field_info in type(self).model_fields.items():
        env_name = f'{env_prefix}{field_name.upper()}'
        if env_name in env_vars:
            val = env_vars[env_name]
            if val is not None:
                raw[field_name] = val
                # Try to parse as JSON for complex types
                try:
                    parsed[field_name] = json.loads(val)
                except (json.JSONDecodeError, TypeError):
                    parsed[field_name] = val

        # Check for nested field access (e.g., NAPARI_APPEARANCE_THEME)
        nested_prefix = f'{env_prefix}{field_name.upper()}_'
        for env_name, env_val in env_vars.items():
            if env_name.startswith(nested_prefix) and env_val is not None:
                nested_path = env_name[len(nested_prefix) :].lower()
                if field_name not in parsed:
                    parsed[field_name] = {}
                    raw[field_name] = {}
                elif not isinstance(parsed[field_name], dict):
                    continue  # Already set to non-dict value
                raw[field_name][nested_path] = env_val
                # Try to parse nested value as JSON
                try:
                    parsed[field_name][nested_path] = json.loads(env_val)
                except (json.JSONDecodeError, TypeError):
                    parsed[field_name][nested_path] = env_val

        # Check for custom env names in nested model fields (json_schema_extra)
        annotation = field_info.annotation
        if annotation is not None:
            try:
                if hasattr(annotation, 'model_fields'):
                    for (
                        nested_name,
                        nested_info,
                    ) in annotation.model_fields.items():
                        extra = nested_info.json_schema_extra or {}
                        if isinstance(extra, dict) and 'env' in extra:
                            custom_env = extra['env'].upper()
                            if custom_env in env_vars:
                                val = env_vars[custom_env]
                                if val is None:
                                    continue
                                if field_name not in parsed:
                                    parsed[field_name] = {}
                                    raw[field_name] = {}
                                elif not isinstance(
                                    parsed[field_name], dict
                                ):
                                    continue
                                raw[field_name][nested_name] = val
                                # Try to parse as JSON, but handle booleans specially
                                if val.lower() in ('true', '1', 'yes'):
                                    parsed[field_name][nested_name] = True
                                elif val.lower() in ('false', '0', 'no'):
                                    parsed[field_name][nested_name] = False
                                else:
                                    try:
                                        parsed[field_name][nested_name] = (
                                            json.loads(val)
                                        )
                                    except (
                                        json.JSONDecodeError,
                                        TypeError,
                                    ):
                                        parsed[field_name][nested_name] = val
            except (TypeError, AttributeError):
                pass

    return parsed, raw
```
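The flat lookup described above can be sketched in isolation (a simplified stand-in with an assumed helper name, not the method itself):

```python
import json
import os

def load_flat(field_name: str, prefix: str = 'NAPARI_'):
    # Flat lookup: NAPARI_<FIELD>, JSON-decoded when possible,
    # falling back to the raw string.
    val = os.environ.get(f'{prefix}{field_name.upper()}')
    if val is None:
        return None
    try:
        return json.loads(val)
    except (json.JSONDecodeError, TypeError):
        return val

os.environ['NAPARI_SCHEMA_VERSION'] = '"0.5.0"'  # valid JSON string
os.environ['NAPARI_THEME'] = 'dark'              # not JSON -> kept raw
assert load_flat('schema_version') == '0.5.0'
assert load_flat('theme') == 'dark'
```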
Environment variable loading is comprehensive but complex.
The `_load_env_settings` method handles:

- Flat env vars (`NAPARI_FIELD`)
- Nested env vars (`NAPARI_SECTION_FIELD`)
- Custom env names via `json_schema_extra`
- JSON parsing with boolean handling
However, there's an inconsistency: boolean parsing (true, false, etc.) is only applied for custom env names (lines 290-294), not for standard nested fields (lines 260-263).
🔎 Proposed fix for consistent boolean handling
```diff
                # Try to parse nested value as JSON
-               try:
-                   parsed[field_name][nested_path] = json.loads(env_val)
-               except (json.JSONDecodeError, TypeError):
-                   parsed[field_name][nested_path] = env_val
+               if env_val.lower() in ('true', '1', 'yes'):
+                   parsed[field_name][nested_path] = True
+               elif env_val.lower() in ('false', '0', 'no'):
+                   parsed[field_name][nested_path] = False
+               else:
+                   try:
+                       parsed[field_name][nested_path] = json.loads(env_val)
+                   except (json.JSONDecodeError, TypeError):
+                       parsed[field_name][nested_path] = env_val
```

🤖 Prompt for AI Agents
In src/napari/settings/_base.py around lines 212 to 309, the method
_load_env_settings parses JSON for nested env vars but does not normalize
boolean-like strings for standard nested fields (the custom-env branch does),
causing inconsistent boolean handling; update the nested parsing block (the loop
iterating env_vars and handling nested_prefix around lines ~250-270) to treat
'true','1','yes' as True and 'false','0','no' as False before attempting
json.loads, mirroring the logic used for custom env names, and ensure
raw[field_name][nested_path] continues to store the original string.
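The inconsistency matters because plain JSON decoding accepts only some boolean spellings; a standalone illustration (a sketch of the custom-env branch's logic, not the napari code itself):

```python
import json

def parse_flag(val: str):
    # Boolean-aware parsing, mirroring the custom-env branch.
    if val.lower() in ('true', '1', 'yes'):
        return True
    if val.lower() in ('false', '0', 'no'):
        return False
    try:
        return json.loads(val)
    except (json.JSONDecodeError, TypeError):
        return val

assert json.loads('true') is True   # JSON handles this spelling...
assert parse_flag('yes') is True    # ...but not 'yes'
assert json.loads('1') == 1         # plain JSON yields an int here
assert parse_flag('1') is True      # boolean-aware branch yields a bool
```

So env vars like `NAPARI_SECTION_FLAG=yes` or `=1` decode differently depending on which branch handles them, which is exactly the inconsistency the proposed fix removes.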
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @derekthirstrup.

* #1 (comment)

The following files were modified:

* `src/napari/_pydantic_compat.py`
* `src/napari/_qt/dialogs/preferences_dialog.py`
* `src/napari/_vispy/layers/scalar_field.py`
* `src/napari/components/dims.py`
* `src/napari/components/viewer_model.py`
* `src/napari/layers/utils/color_manager.py`
* `src/napari/layers/utils/text_manager.py`
* `src/napari/settings/_application.py`
* `src/napari/settings/_base.py`
* `src/napari/settings/_fields.py`
* `src/napari/utils/color.py`
* `src/napari/utils/colormaps/colormap_utils.py`
* `src/napari/utils/misc.py`
* `src/napari/utils/theme.py`
- `_qprocessors.py`: Replace `.copy(update=...)` with `.model_copy(update=...)`
- `widgets.py`: Check both top-level and nested `json_schema_extra` for `file_extension`
- `test_points.py`, `test_shapes.py`: Accept `TypeError` for Pydantic V2 validation errors
- `_yaml.py`: Add type ignores for `model_dump` kwargs
- `colormap.py`: Use keyword args for `CyclicLabelColormap` construction
- `_set.py`: Implement `__get_pydantic_core_schema__` for `EventedSet`
- `info.py`: Add fallback for `model_dump`/`dict` compatibility with npe2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
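The `_set.py` change mentioned above can be illustrated with a minimal custom container. This is a hedged sketch of the `__get_pydantic_core_schema__` hook: the class name and schema choice are illustrative, not napari's actual `EventedSet` implementation.

```python
from pydantic import BaseModel, GetCoreSchemaHandler
from pydantic_core import core_schema


class EventedSetSketch(set):
    """A plain set subclass standing in for napari's EventedSet."""

    @classmethod
    def __get_pydantic_core_schema__(
        cls, source_type, handler: GetCoreSchemaHandler
    ):
        # Validate the input as a set, then wrap the result in this subclass.
        return core_schema.no_info_after_validator_function(
            cls, core_schema.set_schema()
        )


class Model(BaseModel):
    items: EventedSetSketch


m = Model(items=[1, 2, 2])
assert isinstance(m.items, EventedSetSketch)
assert m.items == {1, 2}
```

In V2, this hook replaces the V1 `__get_validators__`/`__modify_schema__` pair as the single entry point for teaching Pydantic about a custom type.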
Summary
Complete Pydantic V1 to V2 migration for napari, enabling Python 3.14 compatibility.
- Validator decorators (`@validator` → `@field_validator`, `@root_validator` → `@model_validator`)
- `pick_equality_operator()` for `Array | None` types
- `ColorValue` custom type
- Deprecated methods (`.dict()` → `.model_dump()`, etc.)

All 994 tests pass on Python 3.12.
Test plan
Key Changes
Bug Fixes for V2 Compatibility
- `src/napari/utils/misc.py`: `get_origin()`/`get_args()` to unwrap Union types
- `src/napari/utils/color.py`: `serialization` to `__get_pydantic_core_schema__`
- `src/napari/layers/utils/color_manager.py`
- `src/napari/layers/utils/text_manager.py`: default `0` not validated; `Field(default_factory=lambda: np.array(0.0))`
Deprecated Method Replacements
.dict()→.model_dump().json()→.model_dump_json().copy()→.model_copy()__fields__→model_fieldsupdate_forward_refs()→model_rebuild()Python 3.14 Status
Core napari Pydantic V2 code verified working on Python 3.14. Full test suite blocked by
npe2dependency which still uses Pydantic V1 (pydantic.v1). The npe2 package will need its own migration for complete Python 3.14 support.Summary by CodeRabbit
New Features
Chores
User-facing changes
Tests
✏️ Tip: You can customize this high-level summary in your review settings.