
Conversation


@derekthirstrup derekthirstrup commented Dec 30, 2025

Summary

Complete Pydantic V1 to V2 migration for napari, enabling Python 3.14 compatibility.

  • Migrated all validators from V1 to V2 patterns (@validator → @field_validator, @root_validator → @model_validator)
  • Fixed Union type handling in pick_equality_operator() for Array | None types
  • Added JSON serialization support for ColorValue custom type
  • Fixed ColorManager validation change detection bugs
  • Updated all deprecated method calls (.dict() → .model_dump(), etc.)

All 994 tests pass on Python 3.12.

Test plan

  • Run full test suite on Python 3.12 (994 passed, 2 xfailed pre-existing)
  • Run tests on Python 3.13
  • Run tests on Python 3.14 (blocked by npe2 Pydantic V1 dependency)

Key Changes

Bug Fixes for V2 Compatibility

| File | Issue | Fix |
| --- | --- | --- |
| src/napari/utils/misc.py | Union types caused "truth value of array is ambiguous" | Added get_origin()/get_args() to unwrap Union types |
| src/napari/utils/color.py | ColorValue couldn't serialize to JSON | Added serialization to __get_pydantic_core_schema__ |
| src/napari/layers/utils/color_manager.py | Validation didn't detect changes | Store original values before mutation |
| src/napari/layers/utils/text_manager.py | Translation default 0 not validated | Use Field(default_factory=lambda: np.array(0.0)) |
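
For context, a minimal sketch of the Union-unwrapping idea behind the misc.py fix (Python 3.10+; unwrap_optional is a hypothetical helper name, and the real pick_equality_operator differs in detail):

import types
from typing import Optional, Union, get_args, get_origin

import numpy as np

def unwrap_optional(annotation):
    # Treat typing.Union and the PEP 604 `X | Y` union alike:
    # drop NoneType and, if a single member remains, recurse into it.
    if get_origin(annotation) in (Union, types.UnionType):
        members = [a for a in get_args(annotation) if a is not type(None)]
        if len(members) == 1:
            return unwrap_optional(members[0])
    return annotation

# `Array | None` resolves to the array type, so array-aware equality
# (np.array_equal) can be chosen instead of plain `==`, avoiding the
# "truth value of an array is ambiguous" error.
assert unwrap_optional(np.ndarray | None) is np.ndarray
assert unwrap_optional(Optional[np.ndarray]) is np.ndarray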

Validator Migration Patterns

# V1 → V2 field validator
@validator('field', pre=True)  →  @field_validator('field', mode='before')

# V1 → V2 root validator  
@root_validator(pre=True)  →  @model_validator(mode='before')

# V1 → V2 custom type
__get_validators__  →  __get_pydantic_core_schema__

# V1 → V2 custom errors
ValueError('msg')  →  PydanticCustomError('value_error', 'msg: {val}', {'val': v})
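
As a concrete illustration, the V2 forms of these patterns on a toy model (not napari code) look like:

from pydantic import BaseModel, field_validator, model_validator

class Point(BaseModel):
    x: float
    y: float

    @field_validator('x', 'y', mode='before')
    @classmethod
    def _coerce_numeric(cls, v):
        # mode='before' runs prior to type coercion, like V1's pre=True
        return float(v)

    @model_validator(mode='before')
    @classmethod
    def _fill_defaults(cls, data):
        # receives the raw input, like a V1 root_validator(pre=True)
        if isinstance(data, dict):
            data.setdefault('y', 0.0)
        return data

print(Point(x='1.5'))  # x=1.5 y=0.0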

Deprecated Method Replacements

  • .dict() → .model_dump()
  • .json() → .model_dump_json()
  • .copy() → .model_copy()
  • __fields__ → model_fields
  • update_forward_refs() → model_rebuild()
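
On a toy model, the renames look like this (the same substitutions apply mechanically across napari's models):

from pydantic import BaseModel

class Settings(BaseModel):
    theme: str = 'dark'

s = Settings()
print(s.model_dump())        # was s.dict()
print(s.model_dump_json())   # was s.json()
s2 = s.model_copy(update={'theme': 'light'})  # was s.copy(update=...)
print(Settings.model_fields.keys())           # was Settings.__fields__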

Python 3.14 Status

Core napari Pydantic V2 code verified working on Python 3.14. Full test suite blocked by npe2 dependency which still uses Pydantic V1 (pydantic.v1). The npe2 package will need its own migration for complete Python 3.14 support.

Summary by CodeRabbit

  • New Features

    • Added official Python 3.14 support and compatibility adjustments.
  • Chores

    • Switched docs CI to a slimmer build target.
    • Added a label-triggered benchmark workflow and updated benchmark workflow invocation and environment settings.
  • User-facing changes

    • Improved settings, theme and color serialization/handling for newer validation library behavior; preferences dialog and viewer serialization updated transparently.
  • Tests

    • Relaxed some validation test expectations to accept updated error types.



coderabbitai bot commented Dec 30, 2025

Walkthrough

Migrates the codebase from Pydantic v1 to v2 across the compatibility layer, models, validators, serialization, and settings; updates CI/workflows and dependency constraints for Python 3.14; and applies corresponding API/behavior adaptations in components, layers, utilities, dialogs, and tests.

Changes

• CI/CD & Workflows (.circleci/config.yml, .github/workflows/benchmarks.yml, .github/workflows/benchmarks_report.yml, .github/workflows/benchmark_labels_trigger.yml): CircleCI docs build target changed from make html → make slim (skip npe2 docs). Benchmarks workflow trigger moved from pull_request → workflow_call; added env vars OMP_NUM_THREADS, ASV_OPTIONS; new label-trigger workflow invokes benchmarks on PR label run-benchmarks.

• Project Metadata & Dependencies (pyproject.toml): Added Python 3.14 classifier; conditional npe2 dependency for py3.14 via git URL; added pydantic-settings and pydantic-extra-types; replaced triangle with a git-based package in optional/docs groups for py3.14 compatibility.

• Pydantic Compatibility Layer (src/napari/_pydantic_compat.py): Rewritten from v1 shims to a v2-native surface: swaps v1 imports for v2 constructs (ConfigDict, FieldInfo, TypeAdapter, core_schema), adds BaseSettings via pydantic_settings, many compatibility aliases/stubs (root/field validator aliases, errors/types/main/color shims), and expands __all__.

• Settings & YAML (src/napari/settings/_application.py, src/napari/settings/_base.py, src/napari/settings/_fields.py, src/napari/settings/_yaml.py): Migrated validators to field_validator/v2 hooks; replaced the Config class with model_config: ConfigDict; introduced EventedSettings and EventedConfigFileSettings with PrivateAttr-based config/file/env handling and _load_env_settings; model_dump-based serialization and YAML dumper resolution via model_config; Theme/Language/Version use v2 core/json schema hooks.

• Component Models (src/napari/components/dims.py, src/napari/components/viewer_model.py): Validators converted to field_validator/model_validator; Dims validators now mutate in place and return Self; ViewerModel uses frozen=True fields, adds model_dump/model_dump_json with backward-compatible dict/json aliases; theme validator switched to field_validator.

• Layer Utilities & Managers (src/napari/layers/utils/color_manager.py, src/napari/layers/utils/text_manager.py): ColorProperties moved to __get_pydantic_core_schema__ with _validate; ColorManager validators → field_validator and model_validator with in-place updates via object.__setattr__; TextManager fields refined (size → Annotated float with Field(gt=0), translation default_factory); uses model_dump() and model_copy() for serialization/updates.

• Dialogs & Qt integration (src/napari/_qt/dialogs/preferences_dialog.py, src/napari/_qt/_qapp_model/injection/_qprocessors.py): Preferences dialog now introspects model_fields; _add_page/_get_page_dict signatures updated to (field_name, field_info, annotation); serialization uses model_dump(mode='json'). Layer source copying uses model_copy(update=...) instead of .copy(update=...).

• Colormaps & Color Utilities (src/napari/utils/color.py, src/napari/utils/colormaps/colormap.py, src/napari/utils/colormaps/colormap_utils.py): ColorValue/ColorArray validators replaced with __get_pydantic_core_schema__ (core_schema-based validators, serialization hooks); see the sketch after this list. Colormap validators migrated to field_validator/model_validator; many internal serializations switched from .dict() → .model_dump(); some type/signature/field changes (e.g., DirectLabelColormap.color_dict types). colormap_utils uses model_dump() when building the Vispy colormap.

• Theme & Misc Utilities (src/napari/utils/theme.py, src/napari/utils/misc.py, src/napari/utils/info.py): Theme validators → field_validator; use model_copy() and model_dump(); added a _model_dump compatibility helper when installing npe2 themes. pick_equality_operator gained Union handling. get_plugin_list now prefers model_dump() with fallbacks.

• Events & Containers (src/napari/utils/events/containers/_set.py): EventedSet validation switched from v1 per-element ModelField validation to a v2 __get_pydantic_core_schema__ with a simpler _validate that accepts sequence-like inputs and constructs the set; removed per-item ModelField-based validation.

• Tests & Vendor Widgets (src/napari/layers/points/_tests/test_points.py, src/napari/layers/shapes/_tests/test_shapes.py, src/napari/_vendor/qt_json_builder/qt_jsonschema_form/widgets.py): Tests adjusted to accept TypeError or ValidationError for invalid text inputs under v2. FilepathSchemaWidget.file_filter now checks top-level file_extension before the json_schema_extra fallback.
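
To make the custom-type changes concrete, here is a minimal sketch of a ColorValue-like type built on __get_pydantic_core_schema__ (simplified; napari's real implementation in src/napari/utils/color.py handles many more input formats):

import numpy as np
from pydantic import BaseModel, GetCoreSchemaHandler
from pydantic_core import core_schema

class ColorValue(np.ndarray):
    @classmethod
    def __get_pydantic_core_schema__(cls, source_type, handler: GetCoreSchemaHandler):
        return core_schema.no_info_plain_validator_function(
            cls._validate,
            # serialization hook so model_dump_json()/mode='json' works
            serialization=core_schema.plain_serializer_function_ser_schema(
                lambda v: v.tolist(), when_used='json'
            ),
        )

    @classmethod
    def _validate(cls, value):
        # coerce any array-like RGBA input into a float array view
        return np.asarray(value, dtype=float).view(cls)

class Style(BaseModel):
    color: ColorValue

print(Style(color=[1, 0, 0, 1]).model_dump_json())  # {"color":[1.0,0.0,0.0,1.0]}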

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped from v1 to v2 with a twitch and a cheer,

field_validator, model_dump—new grasses appear.
I nibbled through schemas, left shims on the lawn,
Kept old burrows warm so no code feels forlorn.
Hop, hop — compatibility lives on at the dawn. 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 47.75%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly and concisely summarizes the main objective of this pull request: migrating napari from Pydantic V1 to V2 for Python 3.14 compatibility.
✨ Finishing touches
  • 📝 Generate docstrings

📜 Recent review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fdfeca9 and ae07bda.

📒 Files selected for processing (8)
  • src/napari/_qt/_qapp_model/injection/_qprocessors.py
  • src/napari/_vendor/qt_json_builder/qt_jsonschema_form/widgets.py
  • src/napari/layers/points/_tests/test_points.py
  • src/napari/layers/shapes/_tests/test_shapes.py
  • src/napari/settings/_yaml.py
  • src/napari/utils/colormaps/colormap.py
  • src/napari/utils/events/containers/_set.py
  • src/napari/utils/info.py
🔇 Additional comments (20)
src/napari/layers/points/_tests/test_points.py (1)

999-1000: LGTM! Test correctly updated for Pydantic V2 validation behavior.

The test now accepts both TypeError and ValidationError to handle Pydantic V2's validation behavior, where type mismatches may raise TypeError instead of ValidationError. This is consistent with the broader Pydantic V1 → V2 migration.

src/napari/layers/shapes/_tests/test_shapes.py (1)

367-368: LGTM! Test correctly updated for Pydantic V2 validation behavior.

The test now accepts both TypeError and ValidationError to handle Pydantic V2's validation behavior, mirroring the identical change in test_points.py. This ensures consistent error handling across layer tests during the Pydantic V1 → V2 migration.

src/napari/_vendor/qt_json_builder/qt_jsonschema_form/widgets.py (1)

376-384: LGTM! Backward-compatible Pydantic V2 migration.

The implementation correctly handles both Pydantic V2 (top-level file_extension) and V1 (nested under json_schema_extra) schema formats with appropriate fallback to "All Files (*)". The V2 path is checked first, aligning with the migration direction, and the comments clearly document the distinction between formats.
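
The fallback logic being described has roughly this shape (a sketch only; the vendored widget's actual code may differ):

def file_filter(schema: dict) -> str:
    # Pydantic V2 emits custom keys at the top level of the JSON schema
    ext = schema.get('file_extension')
    if ext is None:
        # V1-era schemas nested them under json_schema_extra
        ext = schema.get('json_schema_extra', {}).get('file_extension')
    return ext or 'All Files (*)'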

src/napari/_qt/_qapp_model/injection/_qprocessors.py (1)

133-133: Correct migration to Pydantic V2's model_copy.

The change from .copy(update=...) to .model_copy(update=...) correctly aligns with Pydantic V2's API. The update parameter semantics are preserved, and verification confirms layer.source is a properly migrated Pydantic V2 model with no similar patterns missed elsewhere in the codebase.

src/napari/utils/events/containers/_set.py (4)

3-3: LGTM! Import cleanup aligns with V2 migration.

Removing Sequence is appropriate since the simplified V2 validation no longer requires it.


16-17: LGTM! Correct Pydantic V2 imports.

The imports of GetCoreSchemaHandler and CoreSchema are appropriate for implementing __get_pydantic_core_schema__ in the V2 migration.


172-186: LGTM! Correct Pydantic V2 schema implementation.

The migration from __get_validators__ to __get_pydantic_core_schema__ follows the correct V2 pattern. The serialization schema appropriately converts the set to a list for JSON/dict output.


189-203: Confirm typed EventedSet usage requires explicit testing.

The sequence_like helper is available in napari._pydantic_compat and correctly checks for sequence-like types (list, tuple, set, frozenset). However, the simplified validation logic removes per-element type enforcement that was previously in place.

Typed EventedSet usage exists in the codebase (EventedSet['progress'] in src/napari/utils/progress.py, EventedSet[str] in src/napari/plugins/_plugin_manager.py, and the Selection[_T] subclass), so this validation change has real impact: elements assigned to typed sets will no longer be validated at the Pydantic level. This is a breaking change that could allow invalid elements to be silently added without triggering validation errors. Test that existing typed EventedSet fields continue to work as expected or explicitly document that per-element validation is no longer enforced.
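
For reference, a minimal sketch of the simplified V2 hook this comment describes (illustrative only; see _set.py for the real code):

from pydantic import BaseModel
from pydantic_core import core_schema

class EventedSet(set):
    @classmethod
    def __get_pydantic_core_schema__(cls, source_type, handler):
        return core_schema.no_info_plain_validator_function(
            cls._validate,
            serialization=core_schema.plain_serializer_function_ser_schema(list),
        )

    @classmethod
    def _validate(cls, value):
        # builds the set from any iterable input; note there is no
        # per-element type check, which is exactly the gap flagged above
        return cls(value)

class Model(BaseModel):
    items: EventedSet

print(Model(items=[1, 'two', 2]).model_dump())  # mixed element types pass through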

src/napari/utils/info.py (1)

131-139: Well-designed compatibility shim for npe2 migration transition.

The progressive fallback pattern (model_dump → dict → sentinel message) appropriately handles npe2 versions across the Pydantic V1/V2 boundary. Both Pydantic versions support the include parameter with identical semantics when passed as a set, including nested field paths like 'package_metadata.version'. The feature detection via hasattr correctly adapts to available methods, and the fallback gracefully accommodates test doubles without serialization support.
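
The progressive fallback amounts to something like this (hypothetical helper name; napari's actual code in info.py differs in detail):

def _manifest_to_dict(manifest) -> dict:
    fields = {'name', 'package_metadata'}
    if hasattr(manifest, 'model_dump'):   # npe2 on Pydantic V2
        return manifest.model_dump(include=fields)
    if hasattr(manifest, 'dict'):         # npe2 still on Pydantic V1
        return manifest.dict(include=fields)
    # sentinel for test doubles without serialization support
    return {'name': getattr(manifest, 'name', '<unknown>')}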

src/napari/settings/_yaml.py (3)

66-68: LGTM: Documentation updated for Pydantic V2.

The docstring correctly reflects the V2 configuration pattern using model_config instead of the legacy Config class.


95-101: LGTM: Correct migration to model_config.

The change from self.__config__ to self.model_config.get('yaml_dumper', YamlDumper) correctly implements the Pydantic V2 configuration access pattern. The type ignore on the assignment (lines 98-100) is reasonable given the dynamic nature of the config lookup.


74-91: The # type: ignore[arg-type] comments are necessary and justified.

Pydantic's model_dump() expects concrete types (set[int | str] and dict[int | str, Any]), while this method intentionally accepts abstract types (AbstractSet[int | str] and Mapping[int | str, Any]) for a more flexible public API. The type ignores correctly bridge this intentional design trade-off. No changes needed.

src/napari/utils/colormaps/colormap.py (8)

16-22: LGTM: Pydantic V2 compatibility imports added.

The imports from napari._pydantic_compat provide the necessary V2 decorators and utilities for model configuration and validation.


105-158: LGTM: Correct migration to model_validator.

The validator has been properly migrated from V1's @validator to V2's @model_validator(mode='after'). Key changes:

  • Now runs after all fields are initialized, allowing direct access to self.colors and self.interpolation
  • Uses object.__setattr__ to bypass validation/frozen field constraints when setting default controls
  • Returns Self for proper initialization flow

This pattern is appropriate for V2 model validators that need to compute values based on multiple fields.
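
In outline, the pattern looks like this on a toy model (assumes Python 3.11+ for typing.Self; not the napari code itself):

import numpy as np
from typing import Self
from pydantic import BaseModel, model_validator

class ToyColormap(BaseModel):
    colors: list[list[float]]
    controls: list[float] = []

    @model_validator(mode='after')
    def _default_controls(self) -> Self:
        if not self.controls:
            # bypass validation/frozen constraints when filling defaults
            object.__setattr__(
                self, 'controls', list(np.linspace(0, 1, len(self.colors)))
            )
        return self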


216-219: LGTM: Correct workaround for cached_property handling.

The model_config with ignored_types=(cached_property,) is a documented workaround for Pydantic's deepcopy behavior with cached properties. The comment correctly references the upstream issue.
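
Schematically (a toy model mirroring the described configuration):

from functools import cached_property
from pydantic import BaseModel, ConfigDict

class Stats(BaseModel):
    model_config = ConfigDict(ignored_types=(cached_property,))

    values: list[float]

    @cached_property
    def total(self) -> float:
        # computed once, then cached on the instance; ignored_types keeps
        # Pydantic from treating the descriptor as a model field
        return sum(self.values)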


235-235: LGTM: Correct V2 serialization method.

Changed from .dict() to .model_dump() per Pydantic V2 API.


312-319: LGTM: Correct field_validator migration.

The validator has been properly migrated from V1's @validator to V2's @field_validator with the required @classmethod decorator. The validation logic remains unchanged.


442-485: LGTM: Correct field_validator migration with cross-field access.

The validator has been properly migrated to V2:

  • @field_validator('color_dict', mode='before') with @classmethod
  • Changed from values['background_value'] to info.data['background_value'] for accessing other field values during validation
  • The mode='before' ensures validation runs before type coercion

This is the correct pattern for V2 validators that need access to other fields.
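
A toy version of the cross-field pattern (illustrative, not napari's code):

from pydantic import BaseModel, ValidationInfo, field_validator

class ToyLabelColormap(BaseModel):
    background_value: int = 0
    color_dict: dict

    @field_validator('color_dict', mode='before')
    @classmethod
    def _check_color_dict(cls, v, info: ValidationInfo):
        # info.data holds already-validated fields (V1's `values` dict);
        # fields validate in declaration order, so background_value must
        # be declared before color_dict for this lookup to work
        if info.data.get('background_value') not in v:
            raise ValueError('color_dict must include the background value')
        return v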


952-960: LGTM: Explicit keyword arguments for construction.

Changing from positional to keyword arguments (colors=...) improves clarity and aligns with Pydantic V2 best practices for model instantiation.


425-425: No action needed — type annotation change is correct.

The type annotation change from defaultdict to dict is intentional and accurate. Since the field is initialized with Field(default_factory=dict) which creates a regular dict, and all usage relies only on standard dict methods (.get(), .items(), .values()), the broader dict type correctly describes what the field contains. While the validator (lines 483-484) can preserve a defaultdict if one is passed, this is optional, and code does not depend on defaultdict-specific behavior. The change improves type correctness without any breaking consequences.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🟡 Minor comments (25)
.github/workflows/circleci.yml-1-4 (1)

1-4: Fix the line-ending issue flagged by YAMLlint.

The static analysis tool reports a line-ending error at line 1: "wrong new line character: expected \n". Ensure the file uses Unix-style line endings (LF) throughout, not Windows-style (CRLF). This can typically be fixed by configuring your Git client or editor to use LF for this file.

.github/workflows/test_vendored.yml-1-1 (1)

1-1: Fix YAML file encoding: wrong newline character.

YAMLlint reports an encoding issue at the start of the file. Ensure the file uses Unix line endings (LF, \n) instead of Windows (CRLF) or other formats.

.github/workflows/test_vendored.yml-7-7 (1)

7-7: Remove extra whitespace after colon in cron schedule.

YAMLlint flags excessive whitespace after the colon in the cron line. YAML style convention calls for a single space.

🔎 Proposed fix
-    - cron:  '0 2 * * 1'    # Mondays, 2AM
+    - cron: '0 2 * * 1'    # Mondays, 2AM
examples/clipping_planes_interactive_.py-135-143 (1)

135-143: Remove duplicate assignment on line 138.

Line 138 duplicates the labels_layer.mouse_pan = False assignment from line 137. This is likely a copy-paste error. Verify the intended target layer for line 138 or remove the duplicate if both assignments are unnecessary.

🔎 Likely fix: remove the duplicate
   # Disable interactivity during plane drag
   volume_layer.mouse_pan = False
   labels_layer.mouse_pan = False
-  labels_layer.mouse_pan = False
   points_layer.mouse_pan = False
.pre-commit-config.yaml-1-1 (1)

1-1: Change line endings from CRLF to LF.

The file uses Windows-style line endings (CRLF, \r\n) throughout. Pre-commit configuration should use Unix-style line endings (LF, \n).

.github/workflows/test_translations.yml-6-6 (1)

6-6: Fix YAMLlint formatting issue: excessive whitespace after colon.

Line 6 has two spaces after the colon, which violates YAMLlint's colons rule. Correct to single space for consistency.

-    - cron:  '0 1 * * *'
+    - cron: '0 1 * * *'
.github/workflows/test_translations.yml-1-1 (1)

1-1: Fix CRLF line endings in the workflow file.

The file uses CRLF (\r\n) line endings instead of LF (\n). Convert the entire file to use consistent LF line endings.

examples/dev/slicing/random_shapes.py-61-62 (1)

61-62: Consider using tempfile module for secure temporary file handling.

Using /tmp/ directly is insecure for production code. While acceptable for development examples, consider using Python's tempfile module for improved security and platform compatibility.

🔎 Suggested improvement
+import tempfile
 
 test_filename = '/tmp/napari_example_shapes.npz'

Then update line 61:

-test_filename = '/tmp/napari_example_shapes.npz'
+test_filename = os.path.join(tempfile.gettempdir(), 'napari_example_shapes.npz')

Committable suggestion skipped: line range outside the PR's diff.

examples/dev/gui_notifications_threaded.py-12-12 (1)

12-12: Add explicit stacklevel to warnings.warn().

When calling warnings.warn() from a helper function, always include stacklevel=2 (or higher) so the warning points to the caller, not the helper. This improves debugging.

🔎 Proposed fix
-    warnings.warn('Warning in another thread')
+    warnings.warn('Warning in another thread', stacklevel=2)
examples/dev/gui_notifications.py-15-15 (1)

15-15: Add explicit stacklevel to warnings.warn().

When calling warnings.warn() from a helper function, always include stacklevel=2 (or higher) so the warning points to the caller, not the helper.

🔎 Proposed fix
 def warn_():
-    warnings.warn('warning!')
+    warnings.warn('warning!', stacklevel=2)
.github/labeler.yml-15-22 (1)

15-22: Duplicate glob patterns in the qt label configuration.

Lines 18-19 and 20-21 contain duplicate entries:

  • napari/_qt/**/*.py appears twice
  • napari/qt/**/*.py appears twice

This doesn't break functionality but adds unnecessary noise.

🔎 Proposed fix
 qt:
   - changed-files:
     - any-glob-to-any-file:
       - 'napari/_qt/**/*.py'
-      - 'napari/_qt/**/*.py'
       - 'napari/qt/**/*.py'
-      - 'napari/qt/**/*.py'
.github/workflows/make_release.yml-1-8 (1)

1-8: Fix newline character format.

Static analysis indicates the file uses incorrect newline characters (likely CRLF instead of LF). Ensure the file uses Unix-style line endings (\n) for consistency.

MANIFEST.in-4-7 (1)

4-7: Duplicate include directive.

Line 7 (recursive-include src/napari *.pyi) is an exact duplicate of line 4. This redundancy should be removed.

🔎 Suggested fix
 recursive-include src/napari *.pyi
 recursive-include src/napari _tests/*.py
 recursive-include src/napari_builtins _tests/*.py
-recursive-include src/napari *.pyi
 recursive-include src/napari *.png *.svg *.qss *.gif *.ico *.icns
examples/dev/poly-hole.py-18-23 (1)

18-23: The opts parameter is accepted but silently ignored.

The fallback triangulate function accepts opts to match the external library's signature, but it's never used. When called with opts='p' (line 80), this option is silently discarded, which may cause behavioral differences compared to the triangle library.

Consider either:

  1. Documenting that options are ignored in the vispy fallback, or
  2. Raising a warning when unsupported options are passed.
🔎 Suggested improvement
 def triangulate(params, opts=None):
+    if opts is not None:
+        import warnings
+        warnings.warn("triangulate options are ignored when using vispy fallback", stacklevel=2)
     vertices_raw = params['vertices']
     edges = params['segments']
     tri = Triangulation(vertices_raw, edges)
     tri.triangulate()
     return {'vertices': tri.pts, 'triangles': tri.tris}
.github/workflows/test_comprehensive.yml-1-136 (1)

1-136: Fix newline character issue.

YAMLlint reports incorrect newline characters. Ensure the file uses LF (\n) line endings.

.github/workflows/benchmarks_report.yml-1-99 (1)

1-99: Fix newline character issue.

YAMLlint reports that the file uses incorrect newline characters. Ensure the file uses LF (\n) line endings rather than CRLF (\r\n).

This can typically be fixed by configuring your editor or running:

dos2unix .github/workflows/benchmarks_report.yml
.github/workflows/deploy_docs.yml-1-48 (1)

1-48: Fix newline character issue.

YAMLlint reports incorrect newline characters. Ensure the file uses LF (\n) line endings.

.github/workflows/reusable_run_tox_test.yml-163-163 (1)

163-163: Fix extra space before colon in if conditions.

The static analysis tool correctly identifies extra spaces before the colon on lines 163 and 174. This should be if: without the space for YAML consistency.

🔎 Proposed fix
-        if : ${{ env.SECOND != 'none' }}
+        if: ${{ env.SECOND != 'none' }}
-        if : ${{ env.THIRD != 'none' }}
+        if: ${{ env.THIRD != 'none' }}

Also applies to: 174-174

examples/features_table_widget.py-12-12 (1)

12-12: Typo: "viceversa" should be "vice versa".

🔎 Proposed fix
-on the layer and viceversa.
+on the layer and vice versa.
.github/workflows/pr_dependency.yml-1-16 (1)

1-16: Incorrect line ending characters detected.

Similar to the dependabot.yml file, this workflow uses CRLF line endings instead of LF. Consider converting to Unix-style line endings for consistency.

sed -i 's/\r$//' .github/workflows/pr_dependency.yml
.github/dependabot.yml-1-16 (1)

1-16: Incorrect line ending characters detected.

The static analysis indicates this file uses CRLF (\r\n) line endings instead of the expected LF (\n). This can cause inconsistencies in cross-platform development and CI environments.

Please convert the file to use Unix-style line endings (LF). Most editors support this, or you can use:

sed -i 's/\r$//' .github/dependabot.yml
.github/workflows/edit_pr_description.yml-1-40 (1)

1-40: Line endings should use LF (\n) instead of CRLF.

Same issue as other workflow files - this file uses incorrect line ending characters. Ensure consistent LF line endings across all YAML workflow files.

.github/workflows/test_prereleases.yml-1-106 (1)

1-106: Line endings should use LF (\n) instead of CRLF.

The static analysis tool reports this file uses incorrect line ending characters. For cross-platform consistency in git repositories, YAML files should use Unix-style LF (\n) line endings rather than Windows-style CRLF (\r\n).

Consider adding or updating a .gitattributes file to enforce LF line endings for YAML files:

*.yml text eol=lf
*.yaml text eol=lf
.github/workflows/benchmarks.yml-1-216 (1)

1-216: Line endings should use LF (\n) instead of CRLF.

Consistent with other workflow files in this PR, this file uses incorrect line ending characters. Consider running a bulk fix across all YAML files in the repository.

.github/ISSUE_TEMPLATE/design_related.md-21-21 (1)

21-21: Minor typo: "etc" should be "etc." in American English.

The abbreviation "etc" requires a period.

🔎 Proposed fix
-- [ ] **Visual:** creating mockups, icons, etc
+- [ ] **Visual:** creating mockups, icons, etc.
🧹 Nitpick comments (22)
examples/dynamic-projections-dask.py (2)

42-42: Remove unused noqa directive.

The noqa: E402 comment is flagged as unused because E402 (module-level import not at top) is not enabled in the Ruff configuration. Since the suppression has no effect, the comment can be safely removed.

🔎 Proposed fix
-from magicgui import magicgui  # noqa: E402
+from magicgui import magicgui

48-48: Prefer tuple unpacking over concatenation.

Use the unpacking operator * for a more Pythonic and concise syntax.

🔎 Proposed fix
-    window_shape = (size,) + (arr.shape[1:])
+    window_shape = (size, *arr.shape[1:])
examples/export_rois.py (1)

72-73: Optional: Normalize line continuation formatting for consistency.

The line continuation on line 72–73 is syntactically valid but uses an unusual indentation pattern. Consider reformatting to align with the project's style conventions.

🔎 Suggested formatting adjustment
- screenshot_rois_scaled = viewer.export_rois(rois, scale=2
-                                      )
+ screenshot_rois_scaled = viewer.export_rois(rois, scale=2)

Alternatively, if the longer line is intentional for readability:

- screenshot_rois_scaled = viewer.export_rois(rois, scale=2
-                                      )
+ screenshot_rois_scaled = viewer.export_rois(
+     rois, scale=2
+ )
examples/interactive_move_rectangle_3d.py (1)

1-73: Scope check: This file appears orthogonal to the Pydantic V1→V2 migration.

This example script contains no Pydantic models, validators, custom types, or deprecated method calls (.dict(), .json(), etc.). The PR objectives focus on internal utilities and layer managers in src/napari—not examples. If this file was reformatted as part of a broader style pass, that's reasonable, but verify that:

  1. No unintended functional changes were introduced (the AI summary claims formatting-only).
  2. napari's public API (Viewer, shapes_layer) hasn't changed in ways that would break this example (e.g., method signatures, property access).

The code logic itself looks correct: rectangle data initialization, normal vector computation, viewer setup, mouse drag callback, and main entry point all appear intact.

To confirm the example still works post-migration, please verify:

  • napari.Viewer(ndisplay=3) still accepts the ndisplay parameter
  • viewer.add_shapes(), viewer.camera, viewer.text_overlay properties are still accessible and compatible
  • The @shapes_layer.mouse_drag_callbacks.append decorator pattern still works
  • Running this example on Python 3.12 (or 3.14 once npe2 is updated) does not raise errors

If you'd like, I can generate a shell script to verify that this example is syntactically correct and runs without import or attribute errors, though actual execution would require napari to be installed.

examples/custom_mouse_functions.py (1)

24-25: Callback signatures match framework API; unused parameters are intentional.

The unused parameters (viewer on line 24, layer on lines 59 and 76) are part of napari's callback handler contract. Not all handlers use both parameters, which is a normal pattern in event-driven systems.

If you prefer stricter linting, you could adopt the _parameter convention for explicitly unused parameters (e.g., _viewer, _layer), but this is optional for example code.

Also applies to: 59-59, 76-76

.github/FUNDING.yml (1)

1-2: Consider scope alignment.

This file is a funding configuration unrelated to the stated PR objective of completing Pydantic V1→V2 migration. If this change is incidental (e.g., from a cleanup commit), verify that line-ending normalization is intentional and coordinated across the repository.

examples/mgui_dask_delayed_.py (1)

25-38: Combine the two if __name__ == '__main__': blocks into one.

The code has two separate if __name__ == '__main__': blocks (lines 25-35 and 37-38). While technically valid in Python, this is non-idiomatic and reduces clarity. Best practice is to combine setup and execution code into a single block.

🔎 Proposed refactor to combine the blocks
 if __name__ == '__main__':
     from dask.distributed import Client
 
     client = Client()
 
     @magicgui(client={'bind': client})
     def widget(client, nz: int = 1000) -> Future[ImageData]:
         return client.submit(_slow_function, nz)
 
     viewer = napari.Viewer()
     viewer.window.add_dock_widget(widget, area='right')
-
-if __name__ == '__main__':
     napari.run()
examples/dev/slicing/random_points.py (1)

7-9: Module docstring placement is non-standard.

The docstring appears after the imports, but PEP 257 recommends placing module docstrings at the very top of the file, before any imports. This is a pre-existing style issue, not introduced by this change.

🔎 Suggested fix
+"""
+Stress the points layer by generating a large number of points.
+"""
+
 import argparse

 import numpy as np

 import napari

-"""
-Stress the points layer by generating a large number of points.
-"""
-
 parser = argparse.ArgumentParser()
examples/dev/leaking_check.py (1)

43-47: Consider adding error handling for referrer chain traversal.

The nested gc.get_referrers() calls with hard-coded indexing ([0], [1]) will raise IndexError if the expected referrers don't exist or if li() returns None. Since this is a dev utility, adding basic checks could make debugging more robust.

Optional: Add defensive checks
 print(gc.get_referrers(li()))
-print(gc.get_referrers(li())[1])
-print(gc.get_referrers(gc.get_referrers(gc.get_referrers(li())[0])))
-res = gc.get_referrers(gc.get_referrers(gc.get_referrers(li())[0])[0])
-print(res)
+referrers = gc.get_referrers(li())
+if len(referrers) > 1:
+    print(referrers[1])
+if referrers:
+    level2 = gc.get_referrers(referrers[0])
+    print(gc.get_referrers(level2))
+    if level2:
+        res = gc.get_referrers(level2[0])
+        print(res)
examples/dev/demo_shape_creation.py (2)

73-73: Fix spacing inconsistency in list.

Missing space after the comma before 22.

🔎 Proposed fix
-    coords = coords[:, [0, 8, 16,22]]
+    coords = coords[:, [0, 8, 16, 22]]

93-96: Normalize list spacing for consistency.

The RGBA color arrays on lines 93 and 95 lack spaces after commas, inconsistent with other lists in the file (e.g., line 73).

🔎 Proposed fix
-    'face_color': 'class' if args.properties else [1,1,1,1],
+    'face_color': 'class' if args.properties else [1, 1, 1, 1],
     'face_color_cycle': color_cycle,
-    'edge_color': 'class' if args.properties else [1,1,1,1],
+    'edge_color': 'class' if args.properties else [1, 1, 1, 1],
examples/minimum_blending.py (1)

27-33: Inconsistent spacing around = in keyword arguments.

Lines 30-31 use spaces around = while line 32 has inconsistent spacing (no space before, space after). Per PEP 8, keyword arguments should not have spaces around =:

contrast_limits=[[1110, 23855], [1600, 50000]],
colormap=['I Purple', 'I Orange'],
blending=['translucent_no_depth', 'minimum']

Given this PR includes formatting normalization, consider standardizing the spacing for consistency.

🔎 Proposed fix for consistent keyword argument spacing
-viewer.add_image(data.cells3d(),
-                            name=['membrane', 'nuclei'],
-                            channel_axis=1,
-                            contrast_limits = [[1110, 23855], [1600, 50000]],
-                            colormap = ['I Purple', 'I Orange'],
-                            blending= ['translucent_no_depth', 'minimum']
-                            )
+viewer.add_image(data.cells3d(),
+                 name=['membrane', 'nuclei'],
+                 channel_axis=1,
+                 contrast_limits=[[1110, 23855], [1600, 50000]],
+                 colormap=['I Purple', 'I Orange'],
+                 blending=['translucent_no_depth', 'minimum']
+                 )
examples/dev/overlays.py (1)

50-64: Event handler callbacks require event parameter by framework contract—consider marking as intentionally unused.

The Ruff lint flags (ARG002) on the event parameters are false positives in the context of event handler patterns. However, since the event parameter is genuinely unused in these methods, you can make the intent explicit by renaming to a leading underscore (e.g., _event), which signals to linters that the parameter is accepted by contract but intentionally unused.

🔎 Suggested convention change
-    def _on_color_change(self, event=None):
+    def _on_color_change(self, _event=None):
         self.node.color = self.overlay.color

-    def _on_position_change(self, event=None):
+    def _on_position_change(self, _event=None):
         # we can overload the position changing to account for the size, so that the dot
         # always sticks to the edge; there are `offset` attributes specifically for this
         self.x_offset = self.y_offset = self.overlay.size / 2
         super()._on_position_change()

-    def _on_size_change(self, event=None):
+    def _on_size_change(self, _event=None):
         self.node.radius = self.overlay.size / 2
         self.x_size = self.overlay.size
         self.y_size = self.overlay.size
         # trigger position update since the radius changed
         self._on_position_change()
examples/dev/slicing/janelia_s3_n5_multiscale.py (1)

1-14: Consider moving module docstring before imports per PEP 257.

The module docstring (lines 6-13) is currently placed after the imports. Per PEP 257, module docstrings should appear at the very beginning of the file, before any import statements. While this doesn't affect functionality, it would improve consistency with Python conventions.

🔎 Suggested structure
+"""
+The sample data here is Interphase HeLa Cell [https://openorganelle.janelia.org/datasets/jrc_hela-3],
+from HHMI's OpenOrganelle [https://openorganelle.janelia.org].
+
+The data are hosted by Open Data on AWS on S3.
+
+This tests access to multi-scale remote data.
+"""
+
 import dask.array as da
 import zarr

 import napari
-
-"""
-The sample data here is Interphase HeLa Cell [https://openorganelle.janelia.org/datasets/jrc_hela-3],
-from HHMI's OpenOrganelle [https://openorganelle.janelia.org].
-
-The data are hosted by Open Data on AWS on S3.
-
-This tests access to multi-scale remote data.
-"""
.github/CONTRIBUTING.md (1)

17-19: Minor: Trailing whitespace on line 19.

There appears to be trailing whitespace after "not directly." on line 19.

🔎 Proposed fix
 - `reusable_run_tox_test.yml` uses our constraint files to install the
   compatible dependencies for each test environment which may differ
-  by OS and qt versions. It is called from `test_pull_request.yml` and `test_comprehensive.yml`, not directly. 
+  by OS and qt versions. It is called from `test_pull_request.yml` and `test_comprehensive.yml`, not directly.
examples/add_surface_2D.py (1)

20-20: Inconsistent spacing around commas inside the tuple.

The spaces before commas (data , and faces ,) are unconventional per PEP 8 style. Consider removing the extra spaces for consistency.

🔎 Suggested fix
-layer = viewer.add_surface((data , faces , values))
+layer = viewer.add_surface((data, faces, values))
dockerfile (1)

1-6: Consider parameterizing platform flag in FROM statement.

The FROM --platform=linux/amd64 uses a constant value. For better portability and flexibility, consider using a build argument (e.g., --build-arg TARGETPLATFORM) or omitting the flag to allow automatic platform detection, unless amd64 is a hard requirement.

examples/3D_paths.py (2)

18-20: Inconsistent indentation reduces readability.

The arguments are over-indented relative to the function call. Consider aligning with PEP 8 continuation style.

🔎 Suggested formatting
-blobs = data.binary_blobs(
-            length=128, blob_size_fraction=0.05, n_dim=3, volume_fraction=0.05
-        )
+blobs = data.binary_blobs(
+    length=128, blob_size_fraction=0.05, n_dim=3, volume_fraction=0.05
+)

26-29: Inconsistent indentation in nested array definition.

The inner array elements have varying indentation levels which reduces readability. Consider consistent formatting.

🔎 Suggested formatting
-path = np.array([np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [20, 5, 15],
-    [56, 70, 21], [127, 127, 127]]),
-    np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [0, 5, 15],
-        [0, 70, 21], [0, 127, 127]])])
+path = np.array([
+    np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [20, 5, 15],
+              [56, 70, 21], [127, 127, 127]]),
+    np.array([[0, 0, 0], [0, 10, 10], [0, 5, 15], [0, 5, 15],
+              [0, 70, 21], [0, 127, 127]])
+])
examples/action_manager.py (2)

106-109: Minor formatting inconsistency in dictionary.

The dictionary values have extra leading whitespace creating inconsistent indentation compared to typical Python style.

🔎 Suggested fix
 settings = {
-        'napari:activate_points_add_mode' : '=',
-        'napari:activate_points_select_mode': '-',
+    'napari:activate_points_add_mode': '=',
+    'napari:activate_points_select_mode': '-',
 }

112-114: Minor: Inconsistent spacing in loop body.

There's an extra space before the = in the assignment.

🔎 Suggested fix
 for action, key in settings.items():
-   _old_shortcut = action_manager.unbind_shortcut(action)
-   action_manager.bind_shortcut(action, key)
+    _old_shortcut = action_manager.unbind_shortcut(action)
+    action_manager.bind_shortcut(action, key)
examples/live_tiffs_.py (1)

111-114: Consider clarifying the for-else construct.

The else clause on a for loop executes when the loop completes without a break. Here it yields None after processing all files, which is intentional for signaling the thread worker. However, this Python idiom is often unfamiliar to readers.

A more explicit alternative:

         # yield every file to process as a dask.delayed function object.
         for p in sorted(files_to_process, key=alphanumeric_key):
             yield delayed(imread)(os.path.join(path, p))
-        else:
-            yield
+        
+        # Yield None if no files were processed this iteration
+        if not files_to_process:
+            yield None

This is optional—the current code is correct, just less common.

Comment on lines 1 to 244
# mypy: disable-error-code="attr-defined"
import warnings
from collections.abc import Generator
from copy import deepcopy

import numpy as np
from magicgui import magicgui

import napari
from napari._vispy.mouse_event import NapariMouseEvent
from napari._vispy.overlays.interaction_box import VispySelectionBoxOverlay
from napari._vispy.utils.visual import overlay_to_visual
from napari.components.overlays import SelectionBoxOverlay
from napari.components.overlays.interaction_box import InteractionBoxHandle
from napari.layers import Image
from napari.layers.utils.interaction_box import (
    generate_interaction_box_vertices,
    get_nearby_handle,
)


# define a model for the selection box overlay;
# we subclass from SelectionBoxOverlay to get the
# default behavior of the selection box;
class SelectionBoxNoRotation(SelectionBoxOverlay):
    """Selection box overlay with no rotation handle."""


# we also need to define an equivalent vispy overlay;
# again, we subclass from VispySelectionBoxOverlay
class VispySelectionBoxNoRotation(VispySelectionBoxOverlay):
    """Vispy selection box overlay with no rotation handle."""

    # the _on_bounds_change method is the same as in the
    # original VispySelectionBoxOverlay, but we set
    # rotation to False to not draw the rotation handle
    def _on_bounds_change(self) -> None:
        if self.layer._slice_input.ndisplay == 2:
            top_left, bot_right = self.overlay.bounds
            self.node.set_data(
                # invert axes for vispy
                top_left[::-1],
                bot_right[::-1],
                handles=self.overlay.handles,
                selected=self.overlay.selected_handle,
                # by setting rotation to False,
                # the circle handle will not be drawn
                rotation=False,
            )


# before we can use the new overlay, we have to update
# the overlay_to_visual mapping to include our new overlay;
# this is necessary so that the correct vispy overlay
# is used when the overlay is created
overlay_to_visual[SelectionBoxNoRotation] = VispySelectionBoxNoRotation

viewer = napari.Viewer()

# we add an image layer with random data;
data = np.random.randint(0, 255, size=(1024, 512), dtype=np.uint8)
image = viewer.add_image(
    data,
    name='image',
)

# just for type checking
assert isinstance(image, Image)

# we recover the bounds of the image layer;
# this method will ensure that the overlay is drawn
# correctly in the viewer
# TODO: this half pixel offset should be done on the visual side actually
layer_bounds = ((0, 0), data.shape)
image._overlays['selection_no_rotation'] = SelectionBoxNoRotation(
    bounds=layer_bounds, handles=True
)

# with some adjustments, the selection box overlay
# can be interacted with via mouse events;
# we need to first setup the mouse event handlers
# to allow for the interaction with the overlay;


# this callback will handle the mouse events of
# dragging and dropping the selection box handles;
# it will check if the mouse is in range of one of the
# overlay handles; if it is, we will set the selected handle
# to the handle that is closest to the mouse position;
# then we will set the bounds of the overlay to the
# new position of the mouse;
def resize_selection_box(
    layer: Image, event: 'NapariMouseEvent'
) -> 'Generator[None, None, None]':
    """Resize the selection box based on mouse movement.

    Parameters
    ----------
    layer : Image
        The layer to resize the selection box for.
    event : NapariMouseEvent
        The event triggered by mouse movement.

    Yields
    ------
    None
        This is a generator function that handles mouse dragging.
    """
    if len(event.dims_displayed) != 2:
        return

    # Get the selected handle
    selected_handle = layer._overlays['selection_no_rotation'].selected_handle
    if selected_handle is None or selected_handle in [
        InteractionBoxHandle.INSIDE,
        InteractionBoxHandle.ROTATION,
    ]:
        # If no handle is selected or the selected handle
        # is INSIDE or ROTATION, do nothing
        return

    top_left, bot_right = (
        list(x)
        for x in deepcopy(layer._overlays['selection_no_rotation'].bounds)
    )

    layer_bounds = image._display_bounding_box_augmented([0, 1])

    # to prevent the event from being passed down to the
    # pan-zoom event handler, set the event as handled;
    event.handled = True

    yield

    # Main event loop for handling drag events
    while event.type == 'mouse_move':
        mouse_pos = layer.world_to_data(event.position)[event.dims_displayed]
        clipped_y = np.clip(mouse_pos[0], *layer_bounds[0])
        clipped_x = np.clip(mouse_pos[1], *layer_bounds[1])

        # based on the new mouse position, we recalculate the bounds
        # of the overlay; we need to ensure that the new bounds are within
        # the bounds of the image
        match selected_handle:
            case InteractionBoxHandle.TOP_LEFT:
                top_left[0] = clipped_y
                top_left[1] = clipped_x
            case InteractionBoxHandle.TOP_CENTER:
                top_left[0] = clipped_y
            case InteractionBoxHandle.TOP_RIGHT:
                top_left[0] = clipped_y
                bot_right[1] = clipped_x
            case InteractionBoxHandle.CENTER_LEFT:
                top_left[1] = clipped_x
            case InteractionBoxHandle.CENTER_RIGHT:
                bot_right[1] = clipped_x
            case InteractionBoxHandle.BOTTOM_LEFT:
                bot_right[0] = clipped_y
                top_left[1] = clipped_x
            case InteractionBoxHandle.BOTTOM_CENTER:
                bot_right[0] = clipped_y
            case InteractionBoxHandle.BOTTOM_RIGHT:
                bot_right[0] = clipped_y
                bot_right[1] = clipped_x
            case _:
                pass

        # now we update the bounds of the overlay
        # to trigger the visual update;
        layer._overlays['selection_no_rotation'].bounds = deepcopy(
            (tuple(top_left), tuple(bot_right))
        )
        yield


# this callback will highlight the overlay handles
# when the mouse hovers over them;
def highlight_roi_box_handles(layer: Image, event: NapariMouseEvent) -> None:
    """Highlight the hovered handle of a selection box.

    Parameters
    ----------
    layer : Image
        The layer to highlight the selection box for.
    event : NapariMouseEvent
        The event triggered by mouse movement.
    """
    # the event is not handled by the viewer
    # if the number of displayed dimensions is not 2
    # this is a requirement for the overlay to be displayed
    if len(event.dims_displayed) != 2:
        return

    # we work in data space so we're axis aligned which simplifies calculation
    # same as Layer.world_to_data
    world_to_data = (
        layer._transforms[1:].set_slice(layer._slice_input.displayed).inverse
    )

    # interaction box calculations all happen in vispy coordinates (zyx)
    pos = np.array(world_to_data(event.position))[event.dims_displayed][::-1]

    top_left, bot_right = layer._overlays['selection_no_rotation'].bounds
    handle_coords = generate_interaction_box_vertices(
        top_left[::-1], bot_right[::-1], handles=True
    )
    nearby_handle = get_nearby_handle(pos, handle_coords)

    # if the selected handle is INSIDE or ROTATION, we don't want to
    # highlight the handles, so we return without doing anything
    if nearby_handle in [
        InteractionBoxHandle.INSIDE,
        InteractionBoxHandle.ROTATION,
    ]:
        nearby_handle = None

    # set the selected vertex of the box to the nearby_handle (can also be INSIDE or None)
    layer._overlays['selection_no_rotation'].selected_handle = nearby_handle


# after defining the callbacks, we need to connect them to our layer;
# mouse_move_callbacks is a list of callbacks invoked when the mouse
# hovers over the layer;
# mouse_drag_callbacks is a list of callbacks invoked when the
# mouse is pressed, moved and released;
image.mouse_move_callbacks.append(highlight_roi_box_handles)
image.mouse_drag_callbacks.append(resize_selection_box)


# we use a simple magicgui widget to allow
# the toggling of the selection box overlay
# as demonstration
@magicgui(auto_call=True)
def toggle_overlay(
    viewer: napari.Viewer, toggle_selection_box: bool = False
) -> None:
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        viewer.layers['image']._overlays[
            'selection_no_rotation'
        ].visible = toggle_selection_box


# add the widget to the viewer

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether these example files import or use Pydantic models
echo "=== Checking for Pydantic imports in controllable_overlay.py ==="
grep -n "pydantic\|BaseModel\|validator\|field_validator" examples/dev/controllable_overlay.py || echo "No Pydantic imports found"

echo -e "\n=== Checking for Pydantic usage patterns ==="
grep -n "\.dict()\|\.model_dump()\|\.json()\|\.model_dump_json()\|__fields__\|model_fields" examples/dev/controllable_overlay.py || echo "No Pydantic method calls found"

Repository: derekthirstrup/napari

Length of output: 232


⚠️ PR scope mismatch: This example file is not part of the Pydantic V1→V2 migration.

This file contains no Pydantic imports, model definitions, or method calls (e.g., .dict(), .model_dump(), .json()). The PR objectives describe a comprehensive Pydantic V2 migration, yet this example file has no involvement in that work.

Clarify whether inclusion is intentional due to indirect Pydantic dependencies through napari's internal modules, or if the file was included by mistake. If there are transitive Pydantic dependencies, document that context.

Comment on lines 23 to 24
err_btn = QtViewerPushButton(None, 'warning', 'new Error', make_error)
warn_btn = QtViewerPushButton(None, 'warning', 'new Warn', make_warning)

⚠️ Potential issue | 🔴 Critical

Fix mismatched arguments to QtViewerPushButton.

The arguments to QtViewerPushButton don't align with the constructor signature. Per the reference code, the signature is __init__(button_name: str, tooltip: str = '', slot=None, action: str = '', ...).

Line 23 passes None as button_name (should be a string), and the third argument 'new Error' is passed to slot (should be a callable, not a string). The function make_error is passed to action (should be a string, not a callable).

The argument order and types need correction.

🔎 Proposed fix
-err_btn = QtViewerPushButton(None, 'warning', 'new Error', make_error)
-warn_btn = QtViewerPushButton(None, 'warning', 'new Warn', make_warning)
+err_btn = QtViewerPushButton('error', 'new Error', make_error)
+warn_btn = QtViewerPushButton('warning', 'new Warn', make_warning)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
err_btn = QtViewerPushButton(None, 'warning', 'new Error', make_error)
warn_btn = QtViewerPushButton(None, 'warning', 'new Warn', make_warning)
err_btn = QtViewerPushButton('error', 'new Error', make_error)
warn_btn = QtViewerPushButton('warning', 'new Warn', make_warning)
🤖 Prompt for AI Agents
In examples/dev/gui_notifications_threaded.py around lines 23-24, the
QtViewerPushButton arguments are mismatched: pass a string as button_name and a
callable as slot, and ensure action is a string. Change the calls to use the
button name first (e.g., "new Error" / "new Warn"), tooltip second (e.g.,
"warning"), slot third as the callable (make_error / make_warning), and provide
a string action (e.g., "error" / "warn" or empty string) as the fourth argument.

Comment on lines 10 to 11
    import something_that_does_not_exist
    return something_that_does_not_exist.fun(x, y)

⚠️ Potential issue | 🔴 Critical

Remove broken import that will crash the example.

Line 10 imports a module that does not exist, and line 11 attempts to call a method on it. This will raise an ImportError at runtime and prevent the example from running. If the intent is to demonstrate error handling in threads, use a real module or raise an error explicitly without the bogus import.

🔎 Proposed fix
 def raise_():
     x = 1
     y = 'a string'
-    import something_that_does_not_exist
-    return something_that_does_not_exist.fun(x, y)
+    raise RuntimeError('Error in another thread')
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 def raise_():
     x = 1
     y = 'a string'
-    import something_that_does_not_exist
-    return something_that_does_not_exist.fun(x, y)
+    raise RuntimeError('Error in another thread')
🤖 Prompt for AI Agents
In examples/dev/gui_notifications.py around lines 10-11, remove the broken
import of something_that_does_not_exist and the call to its .fun method; replace
with either a valid, real import and function call (e.g., use a standard library
module or a helper function defined in this repo) or explicitly raise an
exception to demonstrate thread error handling (e.g., raise RuntimeError("demo
error") inside the thread) so the example no longer fails with ImportError at
runtime.



viewer = napari.Viewer()
layer_buttons = viewer.window._qt_viewer.layerButtons

⚠️ Potential issue | 🟠 Major

Avoid accessing private attributes.

Line 19 accesses viewer.window._qt_viewer.layerButtons. The _qt_viewer attribute (with underscore prefix) is a private implementation detail. Use the public API qt_viewer instead to ensure forward compatibility.

🔎 Proposed fix
-layer_buttons = viewer.window._qt_viewer.layerButtons
+layer_buttons = viewer.window.qt_viewer.layerButtons
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
layer_buttons = viewer.window._qt_viewer.layerButtons
layer_buttons = viewer.window.qt_viewer.layerButtons
🤖 Prompt for AI Agents
In examples/dev/gui_notifications.py around line 19, the code accesses the
private attribute viewer.window._qt_viewer.layerButtons; change this to use the
public API by replacing it with viewer.window.qt_viewer.layerButtons (or
viewer.qt_viewer.layerButtons if the public accessor is on viewer directly in
this context), ensuring you reference the public qt_viewer attribute instead of
the underscored _qt_viewer so the code uses the supported API and remains
forward-compatible.

# Description

Currently, adding a label triggers the benchmark workflow. It is then skipped conditionally, but it still adds an entry to the list of actions in the GitHub interface.

(screenshot: https://github.com/user-attachments/assets/0dc3320b-a060-4261-8be7-f1ed0d3346b7)

This PR moves the label trigger to a separate workflow that will contain all skipped
# References and relevant issues

closes napari#8521

# Description

It looks like a bug was introduced in napari#7150. There was a wrong assumption that for `ScalarField` the `data.ndim` is equal to `layer.ndim`.

This PR goes back to using `layer.ndim` to properly expand data dimensionality.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bdc9a51 and a809851.

📒 Files selected for processing (1)
  • .github/workflows/build_docs.yml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/build_docs.yml

[error] 1-1: wrong new line character: expected \n

(new-lines)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: ubuntu-22.04 / ubuntu-22.04 py 3.10 headless cov
  • GitHub Check: windows-2022 / windows-2022 py 3.12 pyqt5 cov
  • GitHub Check: ubuntu-latest / ubuntu-latest py 3.13 pyqt6 cov
  • GitHub Check: ubuntu-latest / ubuntu-latest py 3.12 pyqt5 cov
  • GitHub Check: ubuntu-latest / ubuntu-latest py 3.11 pyqt5 cov
  • GitHub Check: windows-2022 / windows-2022 py 3.13 pyqt5 cov
  • GitHub Check: ubuntu-latest / ubuntu-latest py 3.11 pyside6 cov
  • GitHub Check: ubuntu-latest / ubuntu-latest py 3.10 pyqt5 cov
  • GitHub Check: ubuntu-22.04 / ubuntu-22.04 py 3.10 pyqt5 min_req cov
  • GitHub Check: windows-2022 / windows-2022 py 3.11 pyqt5 cov
  • GitHub Check: windows-2022 / windows-2022 py 3.10 pyqt5 cov
  • GitHub Check: ubuntu-latest / ubuntu-latest py 3.13 pyqt5 cov
  • GitHub Check: test examples / ubuntu-latest py 3.12 py312-linux-pyqt6-examples-cov cov
  • GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
  • GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
  • GitHub Check: pip install / ubuntu-latest 3.12 pip install
  • GitHub Check: build_docs / Build & Upload Docs Artifact
  • GitHub Check: build1 (Docker, napari-xpra, napari/napari-xpra)
  • GitHub Check: build1 (Docker, napari, napari/napari)
🔇 Additional comments (1)
.github/workflows/build_docs.yml (1)

29-31: Reasonable temporary workaround for blocking Pydantic V2 incompatibility.

The change to the slimfast target is a pragmatic solution to skip npe2 docs prep, which fails due to Pydantic V2 schema incompatibility in npe2's _docs/render.py. The slimfast target uses prep-stubs instead of prep-docs, avoiding the problematic step. The comments and PR reference provide clear context for this temporary measure until npe2 migrates to Pydantic V2.

Comment on lines 1 to 31
name: Build PR Docs

on:
push:
branches:
- docs
tags:
- 'v*'
workflow_dispatch:
workflow_call:
pull_request:
paths:
- '.github/workflows/build_docs.yml'
- 'resources/constraints/*docs*'

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
build_docs:
uses: napari/shared-workflows/.github/workflows/build_docs.yml@main
secrets: inherit
with:
main-repo: ${{ github.repository }}
main-ref: ${{ github.sha }}
docs-repo: napari/docs
docs-ref: main
# Use slimfast to skip npe2 docs prep until npe2 has Pydantic V2 compatibility
# See: https://github.com/napari/npe2/pull/419
make_target: "slimfast"

⚠️ Potential issue | 🟡 Minor

Fix line ending format.

YAMLlint reports incorrect line endings in this file. Ensure the file uses LF (\n) line endings instead of CRLF (\r\n) for consistency with Unix conventions and to avoid potential issues with git and CI tools.

🔎 Fix line endings

Convert the file to use LF line endings. In most editors:

  • VS Code: Click the line ending indicator in the status bar and select "LF"
  • vim/neovim: :set ff=unix then :w
  • Command line: dos2unix .github/workflows/build_docs.yml

Or configure git to handle line endings automatically via .gitattributes:

*.yml text eol=lf
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 1-1: wrong new line character: expected \n

(new-lines)

🤖 Prompt for AI Agents
In .github/workflows/build_docs.yml around lines 1 to 31, the file currently
uses CRLF line endings; change the file to use LF line endings (\n) throughout
(e.g., via your editor’s line-ending setting or a conversion tool) and commit
the change, and add/update a .gitattributes entry to enforce LF for YAML files
(so future commits preserve Unix-style line endings).

# References and relevant issues

closes napari#8506

# Description

When calculating miter vectors, initialize the output array used for
division with zeros to avoid picking up uninitialized (random) values.

This resolves the problem pointed out by the warning introduced in NumPy
2.4.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/build_docs.yml (1)

1-1: Line ending issue already flagged.

The CRLF line ending issue has already been identified in a previous review comment.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a809851 and 089f0ff.

📒 Files selected for processing (1)
  • .github/workflows/build_docs.yml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/build_docs.yml

[error] 1-1: wrong new line character: expected \n

(new-lines)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
  • GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
  • GitHub Check: build_docs / Build & Upload Docs Artifact
  • GitHub Check: build1 (Docker, napari-xpra, napari/napari-xpra)

Comment on lines 29 to 34
# Use slimfast to skip npe2 docs prep until npe2 has Pydantic V2 compatibility
# Note: docs build may still fail due to Pydantic V2's model_json_schema
# docstrings having formatting issues that Sphinx treats as errors.
# This requires sphinx configuration changes in napari/docs repo.
# See: https://github.com/napari/npe2/pull/419
make_target: "slimfast"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check if the file exists and read the lines around 29-34
head -40 .github/workflows/build_docs.yml | tail -20

Repository: derekthirstrup/napari

Length of output: 817


🏁 Script executed:

# Search for "slimfast" in the repository to understand what this target does
rg "slimfast" --type yaml --type makefile -B 2 -A 2

Repository: derekthirstrup/napari

Length of output: 97


🌐 Web query:

napari npe2 pull request 419 Pydantic V2 compatibility

💡 Result:

Do you want a link, a summary of the changes in PR napari#419, or the current merge/status (and whether it adds Pydantic v2 compatibility)?


🏁 Script executed:

# Search for slimfast without file type restrictions
rg "slimfast" -B 2 -A 2

Repository: derekthirstrup/napari

Length of output: 47


🌐 Web query:

github.com/napari/npe2/pull/419 Pydantic V2

💡 Result:

Do you want a summary of that specific GitHub pull request (napari/npe2 PR napari#419), the changes needed for Pydantic v2 in npe2, or a link to the PR page so you can view it?


🌐 Web query:

site:github.com/napari/npe2/pull/419

💡 Result:

I couldn't find a GitHub Pull Request at site:github.com/napari/npe2/pull/419. The npe2 repository is here if you want me to search it directly or check a different PR number: https://github.com/napari/npe2. [1]

Would you like me to (a) search the npe2 repo for PRs mentioning a specific topic, (b) list recent npe2 PRs, or (c) open a different PR number/URL?


🌐 Web query:

npe2 Pydantic V2 compatibility status 2024 2025

💡 Result:

Short summary (2024–2025):

  • Napari moved to Pydantic v2 as a minimum dependency starting with the 0.6.x series — napari requires Pydantic >= 2.2 and dropped Pydantic v1 support in that transition. [1][2]
  • During 0.6.x napari auto-converts older npe1 plugins to npe2, but some plugins that still depend on Pydantic v1 may break; napari documents this and recommends plugin authors migrate to npe2/Pydantic v2. [2][3]
  • Pydantic v2 itself continued receiving updates through 2024–2025 (performance/bugfix releases); some v2 versions had compatibility issues with newer Python (e.g., pre-2.8.0 v2 had pydantic-core/Python 3.13 incompatibilities), so upgrading to recent Pydantic v2.x is recommended. [4][5]

If you want, I can:

  • list specific napari plugin compatibility notes or plugins known to be broken, or
  • check whether a particular plugin or environment (Python version) will work with Pydantic v2.

🏁 Script executed:

# Look for documentation about the shared-workflows repository
rg "shared-workflows" -A 5 -B 5

Repository: derekthirstrup/napari

Length of output: 47


🌐 Web query:

napari shared-workflows build_docs slimfast make target

💡 Result:

Short answer: there is no make target named build_docs — use the docs-related targets instead (most relevant: slimfast or docs). CI uses the slimfast docs-only build by default; to request a different docs target on a PR, comment:
@napari-bot make <target>
where <target> can be: html, html-noplot, docs, slimfast, slimgallery (and -live variants for local/pixi). Locally you can run the same targets with make <target> or pixi (recommended: pixi run slimfast). [1][2]

Sources:

  • napari docs contributing (lists make targets, slimfast default, and @napari-bot trigger). [1]
  • napari/docs README (make/pixi usage and slimfast/html targets). [2]

🌐 Web query:

github.com/napari/shared-workflows build_docs.yml make_target

💡 Result:

Do you want me to (A) locate and show the build_docs.yml file in napari/shared-workflows, or (B) explain what the make_target input/variable does and where it's used in the napari docs CI? Which one?


Reasonable workaround for Pydantic V2 migration.

The change to use the "slimfast" make target is a pragmatic temporary solution to avoid npe2 Pydantic V2 compatibility issues during the migration. The "slimfast" target is a documented, standard make target in napari used for docs-only builds. The comments provide clear context and rationale.

Note: The referenced PR link (napari/npe2#419) could not be verified as accessible. If this PR exists in a different location or under a different number, please update the comment reference for future clarity.

🤖 Prompt for AI Agents
.github/workflows/build_docs.yml lines 29-34: the workflow comment references a
PR link that could be incorrect or unreachable; verify whether
https://github.com/napari/npe2/pull/419 is the correct source for the Pydantic
V2/sphinx discussion and update the comment to point to the correct PR/issue or
remove the link if none exists; if you confirm the link, add a brief note with
the PR title or change the URL to the canonical issue/PR in the napari/npe2 repo
so future readers can locate the discussion.

@derekthirstrup derekthirstrup self-assigned this Dec 31, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 089f0ff and 4250680.

📒 Files selected for processing (3)
  • .circleci/config.yml
  • pyproject.toml
  • src/napari/_qt/_qapp_model/injection/_qprocessors.py
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.circleci/config.yml

[error] 1-1: wrong new line character: expected \n

(new-lines)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
  • GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
  • GitHub Check: Import lint
  • GitHub Check: build_docs / Build & Upload Docs Artifact
  • GitHub Check: build1 (Docker, napari-xpra, napari/napari-xpra)
  • GitHub Check: build1 (Docker, napari, napari/napari)
🔇 Additional comments (2)
.circleci/config.yml (2)

1-9: LGTM!

The header comments and configuration structure are clear and well-documented.


48-49: Clarify the discrepancy between CircleCI's 'slim' and build_docs.yml's 'slimfast' target.

The switch to a reduced docs build target is appropriate as a temporary workaround during the Pydantic V2 migration. However, note that .github/workflows/build_docs.yml specifies make_target: "slimfast" while this CircleCI config uses make slim. Verify these targets are equivalent or update one for consistency. The build_docs.yml comments already document that docs may still encounter Pydantic V2 formatting issues requiring sphinx configuration changes in the napari/docs repository (see napari/npe2#419).

build-docs:
jobs:
- build-docs
# As much as possible, this file should be kept in sync with:

⚠️ Potential issue | 🟡 Minor

Fix line endings to use LF instead of CRLF.

The static analysis tool reports that this file uses incorrect line endings (CRLF instead of LF). This should be corrected for consistency with Unix/Linux conventions and better Git handling.

You can fix this by running:

#!/bin/bash
# Convert CRLF to LF line endings
dos2unix .circleci/config.yml
# Or using sed:
# sed -i 's/\r$//' .circleci/config.yml
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 1-1: wrong new line character: expected \n

(new-lines)

🤖 Prompt for AI Agents
In .circleci/config.yml, lines 1-1: the file uses CRLF line endings; convert
them to LF and commit the change. Fix by running a line-ending conversion (e.g.,
dos2unix or sed to remove trailing CR characters), verify the file now has LF
endings, set repository/git config (core.autocrlf=false or appropriate
.gitattributes) to prevent reintroduction, then add, commit, and push the
updated .circleci/config.yml.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.github/workflows/benchmarks.yml (2)

109-114: Critical: Pull request context unavailable in workflow_call.

When GITHUB_EVENT_NAME == workflow_call, the expressions github.event.pull_request.* will be undefined because workflow_call events don't automatically inherit the calling workflow's event context. The benchmark_labels_trigger.yml workflow doesn't pass PR context as inputs, so these variables will be empty:

  • github.event.pull_request.number (line 110)
  • github.event.pull_request.base.sha (line 111)
  • github.event.pull_request.base.label (line 113)
  • HEAD_LABEL from line 97 (github.event.pull_request.head.label)

This will cause benchmark execution to fail or produce incorrect baseline/contender refs.

🔎 Proposed fix: Define and use workflow inputs

Add input definitions to the workflow_call trigger:

 on:
   workflow_call:
+    inputs:
+      pr_number:
+        type: string
+        required: false
+      base_sha:
+        type: string
+        required: false
+      head_sha:
+        type: string
+        required: false
+      base_label:
+        type: string
+        required: false
+      head_label:
+        type: string
+        required: false

Then update the conditional logic to use inputs:

-          if [[ $GITHUB_EVENT_NAME == workflow_call ]]; then
-            EVENT_NAME="PR #${{ github.event.pull_request.number }}"
-            BASE_REF=${{ github.event.pull_request.base.sha }}
+          if [[ $GITHUB_EVENT_NAME == workflow_call ]] && [[ -n "${{ inputs.pr_number }}" ]]; then
+            EVENT_NAME="PR #${{ inputs.pr_number }}"
+            BASE_REF=${{ inputs.base_sha }}
             CONTENDER_REF=${GITHUB_SHA}
-            echo "Baseline:  ${BASE_REF} (${{ github.event.pull_request.base.label }})"
-            echo "Contender: ${CONTENDER_REF} ($HEAD_LABEL)"
+            echo "Baseline:  ${BASE_REF} (${{ inputs.base_label }})"
+            echo "Contender: ${CONTENDER_REF} (${{ inputs.head_label }})"

Also update HEAD_LABEL at line 97:

-          HEAD_LABEL: ${{ github.event.pull_request.head.label }}
+          HEAD_LABEL: ${{ inputs.head_label || github.event.pull_request.head.label }}

179-191: Critical: Same pull request context issue for PR comment generation.

The conditional at line 179 checks for workflow_call but then attempts to access github.event.pull_request.number at line 181, which will be undefined for the same reason explained in the previous comment.

Update this section to use workflow inputs as well:

-          if [[ $GITHUB_EVENT_NAME == workflow_call ]]; then
+          if [[ $GITHUB_EVENT_NAME == workflow_call ]] && [[ -n "${{ inputs.pr_number }}" ]]; then
 
-          echo "${{ github.event.pull_request.number }}" > .asv/results/pr_number
+          echo "${{ inputs.pr_number }}" > .asv/results/pr_number
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4250680 and 2354ea8.

📒 Files selected for processing (4)
  • .github/workflows/benchmark_labels_trigger.yml
  • .github/workflows/benchmarks.yml
  • .github/workflows/benchmarks_report.yml
  • src/napari/_pydantic_compat.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Initial test (3.13, ubuntu-latest, pyqt6, no_cov) / ubuntu-latest py 3.13 pyqt6 no_cov
  • GitHub Check: Initial test (3.10, ubuntu-latest, pyqt5, no_cov) / ubuntu-latest py 3.10 pyqt5 no_cov
🔇 Additional comments (2)
.github/workflows/benchmarks_report.yml (1)

14-14: LGTM! Correctly extends trigger coverage.

The addition of "Benchmark Trigger by Label" to the workflow_run trigger list ensures that benchmark reports are generated regardless of which workflow invoked the benchmarks.

.github/workflows/benchmarks.yml (1)

26-27: LGTM! Good additions for benchmark robustness.

The OMP_NUM_THREADS and ASV_OPTIONS environment variables enhance benchmark stability by disabling OpenMP parallelism and configuring asv behavior (split reports, show errors, 1.5x performance threshold, 900s timeout).

Comment on lines +11 to +15
jobs:
trigger-benchmarks:
if: github.event.label.name == 'run-benchmarks'
uses: ./.github/workflows/benchmarks.yml
secrets: inherit

⚠️ Potential issue | 🔴 Critical

Critical: PR context not available in called workflow.

When this workflow invokes benchmarks.yml via workflow_call, the called workflow won't have access to github.event.pull_request.* context. However, benchmarks.yml lines 109-114 and 179-191 expect this context when GITHUB_EVENT_NAME == workflow_call:

# benchmarks.yml line 110-114
EVENT_NAME="PR #${{ github.event.pull_request.number }}"
BASE_REF=${{ github.event.pull_request.base.sha }}
CONTENDER_REF=${GITHUB_SHA}

These values will be empty/undefined, causing the benchmark run to fail.

🔎 Proposed fix: Pass PR context as workflow inputs

Update benchmark_labels_trigger.yml to pass PR context:

 jobs:
   trigger-benchmarks:
     if: github.event.label.name == 'run-benchmarks'
     uses: ./.github/workflows/benchmarks.yml
+    with:
+      pr_number: ${{ github.event.pull_request.number }}
+      base_sha: ${{ github.event.pull_request.base.sha }}
+      head_sha: ${{ github.event.pull_request.head.sha }}
+      base_label: ${{ github.event.pull_request.base.label }}
+      head_label: ${{ github.event.pull_request.head.label }}
     secrets: inherit

Then update benchmarks.yml to accept and use these inputs:

 on:
   workflow_call:
+    inputs:
+      pr_number:
+        type: string
+      base_sha:
+        type: string
+      head_sha:
+        type: string
+      base_label:
+        type: string
+      head_label:
+        type: string

And update the conditional logic in benchmarks.yml to use these inputs instead of github.event.pull_request.*.

🤖 Prompt for AI Agents
In .github/workflows/benchmark_labels_trigger.yml around lines 11-15 the called
workflow benchmarks.yml will not have github.event.pull_request.* available when
invoked via workflow_call; update this file to forward PR context as explicit
inputs (PR number, base SHA, contender/head SHA, and any event name) to the
called workflow, and then update benchmarks.yml to declare and consume those
inputs (using them when GITHUB_EVENT_NAME == workflow_call) with fallbacks to
github.event.pull_request.* when invoked from a PR event; ensure conditional
logic that builds EVENT_NAME, BASE_REF and CONTENDER_REF uses the passed inputs
instead of directly reading github.event.pull_request.*

This commit migrates napari from Pydantic V1 to V2 native API,
enabling Python 3.14 support.

## Key Changes

### Validator Migration (V1 → V2 patterns)

1. **`@validator` → `@field_validator`**: All field validators updated with `mode='before'`
   for pre-processing behavior

2. **`@root_validator` → `@model_validator`**: Root validators converted with proper
   `mode='before'` or `mode='after'` depending on when validation should occur

3. **`PydanticCustomError`**: Custom validation errors now use V2's error format

4. **`__get_pydantic_core_schema__`**: Custom types now implement this classmethod
   instead of `__get_validators__`
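
To make the patterns above concrete, here is a minimal sketch (hypothetical model and field names, not code from this PR) combining a before-mode field validator and an after-mode model validator:

from pydantic import BaseModel, field_validator, model_validator

class PointSketch(BaseModel):
    x: float
    y: float

    @field_validator('x', 'y', mode='before')
    @classmethod
    def _coerce_numeric(cls, v):
        # mode='before' replaces V1's pre=True: runs on the raw input
        return float(v)

    @model_validator(mode='after')
    def _check_magnitude(self):
        # after-mode validators receive the model instance and must return it
        if self.x == 0 and self.y == 0:
            raise ValueError('point must be non-zero')
        return self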

### Bug Fixes for V2 Compatibility

1. **Union Type Equality (`src/napari/utils/misc.py`)**:
   - `pick_equality_operator()` now handles Union types like `Array | None`
   - Uses `get_origin()` and `get_args()` to unwrap and check type arguments

2. **ColorValue JSON Serialization (`src/napari/utils/color.py`)**:
   - Added `serialization` parameter to `__get_pydantic_core_schema__`

3. **ColorManager Validation (`src/napari/layers/utils/color_manager.py`)**:
   - Fixed validation to work with V2's validation model

4. **TextManager Translation Field (`src/napari/layers/utils/text_manager.py`)**:
   - Changed to use `Field(default_factory=...)` for array defaults

### Deprecated Method Replacements

- `.dict()` → `.model_dump()`
- `.json()` → `.model_dump_json()`
- `.copy()` → `.model_copy()`
- `__fields__` → `model_fields`
- `update_forward_refs()` → `model_rebuild()`
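
As a quick illustration of the V2 calls (hypothetical model, not from this PR):

from pydantic import BaseModel

class Sketch(BaseModel):
    x: int = 1

s = Sketch()
data = s.model_dump()                  # was s.dict()
text = s.model_dump_json()             # was s.json()
clone = s.model_copy(update={'x': 2})  # was s.copy(update={'x': 2})
fields = Sketch.model_fields           # was __fields__; access on the class
                                       # to avoid the V2.11 deprecation warning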

### Dependencies

- Added `pydantic-settings>=2.0.0` (BaseSettings moved to separate package)
- Added `pydantic-extra-types>=2.0.0` (Color type moved to separate package)
- npe2: Conditional dependency for Python 3.14 compatibility
- triangle: Using git version for Python 3.14 compatibility

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@derekthirstrup derekthirstrup force-pushed the py314-pydantic-v2-migration branch from 4321361 to fdfeca9 Compare December 31, 2025 20:04

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
pyproject.toml (1)

211-211: Same concern about git-based triangle in docs dependencies.

The git-based triangle dependency is consistently applied across optional, docs, and dependency-groups sections. The same considerations from the optional dependencies apply here: verify compatibility across all Python versions 3.10-3.14 and consider whether this should be conditional.

Also applies to: 303-303

🧹 Nitpick comments (4)
pyproject.toml (1)

132-132: Consider making the triangle dependency conditional for Python 3.14 only.

The PyPI version of triangle lacks wheels for Python 3.14 (only supports 3.10–3.13), necessitating the git source. However, the dependency is currently unconditional, forcing all Python versions to install from source. Since your project already uses conditional syntax elsewhere (e.g., npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'), apply the same pattern here:

"triangle @ git+https://github.com/drufat/triangle.git; python_version >= '3.14'"

This allows Python 3.10–3.13 users to benefit from pre-built wheels, while ensuring 3.14 users get the compatible source version.

src/napari/settings/_fields.py (1)

220-235: Version uses any_schema() which may be too permissive.

Using core_schema.any_schema() allows any input type to pass through to the validator. This works because Version.validate calls _from_obj which handles multiple input types (str, bytes, dict, tuple, list, Version). However, consider whether str_schema() with a union or more specific schema would provide better error messages for invalid inputs.
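
One possible tightening, sketched under the assumption that Version.validate always returns a Version instance: constrain the validator's output with is_instance_schema instead of any_schema. A before-validator still sees raw inputs first, so this does not gate input types, but it catches validator bugs and documents the contract:

from pydantic_core import core_schema

class VersionSketch:
    @classmethod
    def validate(cls, v):
        # stand-in for Version._from_obj handling str/bytes/dict/tuple/list
        return v if isinstance(v, cls) else cls()

    @classmethod
    def __get_pydantic_core_schema__(cls, source_type, handler):
        return core_schema.no_info_before_validator_function(
            cls.validate,
            core_schema.is_instance_schema(cls),  # constrain validator output
        )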

src/napari/components/viewer_model.py (1)

360-387: Serialization methods with backward compatibility aliases.

The implementation provides:

  • model_dump_json() and model_dump() as the V2 primary methods
  • json() and dict() as deprecated aliases for backward compatibility

Consider adding deprecation warnings to the compatibility aliases to guide users toward the V2 API.

🔎 Proposed enhancement with deprecation warnings
     # Compatibility aliases for Pydantic V1 API
     def json(self, **kwargs):
         """Deprecated: Use model_dump_json() instead."""
+        import warnings
+        warnings.warn(
+            "json() is deprecated, use model_dump_json() instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         return self.model_dump_json(**kwargs)

     def dict(self, **kwargs):
         """Deprecated: Use model_dump() instead."""
+        import warnings
+        warnings.warn(
+            "dict() is deprecated, use model_dump() instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         return self.model_dump(**kwargs)
src/napari/_pydantic_compat.py (1)

70-71: Deprecated aliases should include deprecation warnings.

The validator and root_validator aliases are marked as deprecated in comments but don't emit warnings. Adding runtime warnings would help users migrate.

🔎 Proposed enhancement with deprecation warnings
-# These are deprecated and will be removed. Use the V2 equivalents.
-validator = field_validator  # deprecated: use @field_validator
-root_validator = model_validator  # deprecated: use @model_validator
+import warnings
+import functools
+
+def _deprecated_alias(new_func, old_name, new_name):
+    @functools.wraps(new_func)
+    def wrapper(*args, **kwargs):
+        warnings.warn(
+            f"{old_name} is deprecated, use {new_name} instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        return new_func(*args, **kwargs)
+    return wrapper
+
+# These are deprecated and will be removed. Use the V2 equivalents.
+validator = _deprecated_alias(field_validator, 'validator', 'field_validator')
+root_validator = _deprecated_alias(model_validator, 'root_validator', 'model_validator')
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2354ea8 and fdfeca9.

📒 Files selected for processing (15)
  • .circleci/config.yml
  • pyproject.toml
  • src/napari/_pydantic_compat.py
  • src/napari/_qt/dialogs/preferences_dialog.py
  • src/napari/components/dims.py
  • src/napari/components/viewer_model.py
  • src/napari/layers/utils/color_manager.py
  • src/napari/layers/utils/text_manager.py
  • src/napari/settings/_application.py
  • src/napari/settings/_base.py
  • src/napari/settings/_fields.py
  • src/napari/utils/color.py
  • src/napari/utils/colormaps/colormap_utils.py
  • src/napari/utils/misc.py
  • src/napari/utils/theme.py
🧰 Additional context used
🧬 Code graph analysis (8)
src/napari/utils/colormaps/colormap_utils.py (2)
src/napari/components/viewer_model.py (1)
  • model_dump (370-378)
src/napari/settings/_base.py (1)
  • model_dump (324-358)
src/napari/layers/utils/text_manager.py (2)
src/napari/components/viewer_model.py (1)
  • model_dump (370-378)
src/napari/settings/_base.py (1)
  • model_dump (324-358)
src/napari/_qt/dialogs/preferences_dialog.py (3)
src/napari/components/viewer_model.py (1)
  • model_dump (370-378)
src/napari/settings/_base.py (1)
  • model_dump (324-358)
src/napari/_vendor/qt_json_builder/qt_jsonschema_form/form.py (1)
  • WidgetBuilder (19-139)
src/napari/components/viewer_model.py (2)
src/napari/components/_viewer_mouse_bindings.py (2)
  • dims_scroll (8-22)
  • double_click_to_zoom (25-44)
src/napari/settings/_base.py (1)
  • model_dump (324-358)
src/napari/settings/_fields.py (1)
src/napari/utils/theme.py (1)
  • available_themes (307-315)
src/napari/layers/utils/color_manager.py (1)
src/napari/_pydantic_compat.py (2)
  • _validate (253-267)
  • _validate (308-320)
src/napari/utils/theme.py (2)
src/napari/components/viewer_model.py (1)
  • model_dump (370-378)
src/napari/settings/_base.py (1)
  • model_dump (324-358)
src/napari/settings/_base.py (3)
src/napari/utils/events/evented_model.py (1)
  • EventedModel (165-449)
src/napari/_pydantic_compat.py (1)
  • SettingsError (155-156)
src/napari/utils/misc.py (1)
  • deep_update (653-661)
🪛 GitHub Actions: PR Test
pyproject.toml

[error] 1-1: Build failed: invalid pyproject.toml config: project.dependencies[13] must be pep508. Offending value: 'npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'' . Command: python -m build --outdir dist/.

🪛 Ruff (0.14.10)
src/napari/_pydantic_compat.py

255-255: Avoid specifying long messages outside the exception class

(TRY003)


258-258: Avoid specifying long messages outside the exception class

(TRY003)


260-260: Avoid specifying long messages outside the exception class

(TRY003)


262-262: Avoid specifying long messages outside the exception class

(TRY003)


264-264: Avoid specifying long messages outside the exception class

(TRY003)


266-266: Avoid specifying long messages outside the exception class

(TRY003)


310-310: Avoid specifying long messages outside the exception class

(TRY003)


313-313: Avoid specifying long messages outside the exception class

(TRY003)


315-315: Avoid specifying long messages outside the exception class

(TRY003)


317-317: Avoid specifying long messages outside the exception class

(TRY003)


319-319: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (42)
src/napari/utils/misc.py (2)

465-466: LGTM: Local imports for Union type support.

The local imports are appropriate for avoiding module-level overhead, especially since these helpers are only needed for the Union type handling path.


479-489: Good solution for unwrapping Optional types to find the correct operator.

The Union handling correctly addresses the ambiguous-array truth-value errors by unwrapping Array | None patterns and recursively finding the appropriate equality operator for the underlying type. The logic properly handles both typing.Union and types.UnionType (Python 3.10+ | syntax) and correctly skips NoneType.

The implementation returns the first non-default operator found when processing Union type arguments. While theoretically a Union could contain multiple non-None array types with different operators, this pattern does not appear in the codebase—actual unions are limited to cases like Union[Array, None] or simple scalar unions (e.g., Union[str, Path], Union[int, float]). The recursive approach is well-suited to the intended use case of unwrapping Optional types in Pydantic field annotations.
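
A simplified sketch of that unwrapping logic (illustrative, not the exact napari implementation):

import operator
import types
import typing
from typing import get_args, get_origin

import numpy as np

def pick_equality_operator_sketch(type_):
    origin = get_origin(type_)
    if origin is typing.Union or origin is types.UnionType:
        # unwrap Union/Optional: skip NoneType, recurse into the rest
        for arg in get_args(type_):
            if arg is type(None):
                continue
            op = pick_equality_operator_sketch(arg)
            if op is not operator.eq:
                return op
        return operator.eq
    if isinstance(type_, type) and issubclass(type_, np.ndarray):
        return np.array_equal  # avoids "truth value is ambiguous" on ==
    return operator.eq

assert pick_equality_operator_sketch(np.ndarray | None) is np.array_equal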

.circleci/config.yml (1)

48-49: Reasonable workaround for npe2 incompatibility.

The switch to the slim build target to skip npe2 docs rendering is a pragmatic temporary solution until npe2 completes its Pydantic V2 migration. The comment clearly explains the reason.

pyproject.toml (2)

27-27: LGTM!

The Python 3.14 classifier correctly declares support for the new Python version, aligning with the PR's migration objectives.


65-66: Both dependencies are correctly added and actively used in the codebase.

Verification confirms:

  • pydantic-settings>=2.0.0: Used for importing BaseSettings and SettingsConfigDict in the Pydantic V2 migration (src/napari/_pydantic_compat.py:45)
  • pydantic-extra-types>=2.0.0: Used for the Color type in src/napari/utils/theme.py, which defines multiple color-typed fields for the theme configuration

Both minimum versions are compatible with pydantic>=2.2.0.

src/napari/utils/colormaps/colormap_utils.py (1)

246-251: LGTM!

The migration from .dict() to .model_dump() is correct for Pydantic V2 compatibility. The function logic remains unchanged, and this aligns with the consistent usage of model_dump() across the codebase (e.g., in viewer_model.py and _base.py).

src/napari/settings/_application.py (2)

9-9: LGTM!

Import update correctly includes ConfigDict and field_validator from the Pydantic compatibility layer.


304-315: LGTM!

The validator migration to Pydantic V2 is correct:

  • @field_validator('window_state') replaces @validator('window_state', allow_reuse=True)
  • @classmethod decorator is properly added as required by Pydantic V2
  • model_config = ConfigDict(use_enum_values=False) replaces the inner Config class pattern

The validator logic itself remains unchanged.

src/napari/utils/color.py (2)

27-39: LGTM!

The Pydantic V2 core schema implementation for ColorValue is correct. The serialization transformer properly converts numpy arrays to lists for JSON compatibility, which addresses the JSON serialization support mentioned in the PR objectives.


102-110: Consider adding serialization for consistency with ColorValue.

ColorValue includes a serialization transformer for JSON compatibility, but ColorArray does not. If ColorArray fields are serialized (e.g., via model_dump(mode='json')), numpy arrays may not serialize correctly.

Was this omission intentional, or should serialization be added for consistency?

🔎 Suggested serialization addition
     @classmethod
     def __get_pydantic_core_schema__(
         cls,
         source_type: Any,
         handler: GetCoreSchemaHandler,
     ) -> CoreSchema:
         return core_schema.no_info_before_validator_function(
             cls.validate,
             core_schema.any_schema(),
+            serialization=core_schema.plain_serializer_function_ser_schema(
+                lambda x: x.tolist() if isinstance(x, np.ndarray) else list(x),
+                info_arg=False,
+            ),
         )
src/napari/layers/utils/text_manager.py (5)

90-90: LGTM!

The migration from PositiveFloat to Annotated[float, Field(gt=0)] is the correct Pydantic V2 pattern and maintains the same validation semantics.


95-95: LGTM!

Using Field(default_factory=lambda: np.array(0.0)) is the correct approach for mutable defaults in Pydantic V2. This fixes the TextManager translation default as mentioned in the PR objectives.


331-331: LGTM!

Migration from .dict() to .model_dump() is correct for Pydantic V2 compatibility.


367-367: LGTM!

Migration from .copy() to .model_copy() is correct for Pydantic V2 compatibility.


378-380: LGTM!

The validator migration is correct:

  • @field_validator('blending', mode='before') replaces the previous @validator pattern
  • @classmethod is properly added as required by Pydantic V2

The blending validation logic remains unchanged.

src/napari/utils/theme.py (5)

86-97: LGTM!

The syntax_style validator migration is correct with proper @field_validator and @classmethod decorators.


99-108: LGTM!

The font_size validator migration is correct.


110-118: LGTM!

The to_rgb_dict method correctly uses super().model_dump() instead of the deprecated dict() method, and the docstring is appropriately updated.


269-269: LGTM!

Migration from .copy() to .model_copy() is correct for Pydantic V2.


417-436: Good compatibility handling for npe2.

The _model_dump helper function correctly handles the case where npe2 models may still be using Pydantic V1 (with .dict()) while napari has migrated to V2 (with .model_dump()). This is important since the PR notes that npe2 still depends on Pydantic V1.
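
The duck-typed fallback could look roughly like this (a sketch of the approach, not the exact helper):

def _model_dump(model, **kwargs):
    # napari models expose the Pydantic V2 API...
    if hasattr(model, 'model_dump'):
        return model.model_dump(**kwargs)
    # ...while npe2 models may still be Pydantic V1
    return model.dict(**kwargs)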

src/napari/_qt/dialogs/preferences_dialog.py (4)

17-17: LGTM!

Import updated to use FieldInfo instead of ModelField, which is the correct Pydantic V2 type for field metadata.


96-111: LGTM!

The dialog rebuilding logic is correctly updated for Pydantic V2:

  • Uses model_dump(exclude={'schema_version'}) instead of custom dict handling
  • Iterates over model_fields.items() instead of __fields__.values()
  • Properly extracts the annotation from field_info.annotation
  • Correctly checks if the annotation is a BaseModel subclass

115-136: LGTM!

The _add_page method signature and implementation are correctly updated to work with the new Pydantic V2 field introspection pattern.


176-232: LGTM!

The _get_page_dict method is correctly updated:

  • Uses model_json_schema() instead of schema() for JSON schema generation
  • Properly iterates over model_fields for nested model introspection
  • Uses model_dump(mode='json') to ensure enums serialize as their string values
src/napari/layers/utils/color_manager.py (4)

53-62: LGTM!

The Pydantic V2 core schema implementation for ColorProperties is correct, using no_info_before_validator_function to wrap the _validate method.


65-87: LGTM!

The validation method is correctly updated:

  • Renamed from validate_type to _validate (private convention)
  • Uses PydanticCustomError instead of ValueError for Pydantic V2 compatibility
  • Error messages are clear and descriptive

165-186: LGTM!

The field validators are correctly migrated with @field_validator and @classmethod decorators. The validation logic remains unchanged.


188-215: LGTM!

The model validator migration is well-implemented:

  • @model_validator(mode='after') correctly replaces @root_validator
  • Returns Self as required by Pydantic V2's after-mode validators
  • Uses object.__setattr__ to bypass validate_assignment and avoid infinite recursion
  • The color validation logic is preserved through the v2 helper functions
src/napari/settings/_fields.py (2)

6-8: Pydantic V2 imports look correct.

The imports for GetCoreSchemaHandler, GetJsonSchemaHandler, JsonSchemaValue, CoreSchema, and core_schema are the proper V2 APIs for custom type definitions.


24-42: Theme schema implementation is correct for Pydantic V2.

The use of no_info_before_validator_function to run validation before the string schema, combined with the JSON schema hook that dynamically populates the enum with available themes, follows the recommended V2 pattern for custom types.
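
In outline, the pattern looks like this (simplified; available_themes is stubbed here for self-containment):

from pydantic import GetJsonSchemaHandler
from pydantic_core import core_schema

def available_themes():
    return ('dark', 'light', 'system')  # stand-in for napari's theme registry

class ThemeNameSketch(str):
    @classmethod
    def __get_pydantic_core_schema__(cls, source_type, handler):
        # run validation before the plain string schema
        return core_schema.no_info_before_validator_function(
            cls._validate, core_schema.str_schema()
        )

    @classmethod
    def __get_pydantic_json_schema__(cls, schema, handler: GetJsonSchemaHandler):
        json_schema = handler(schema)
        # enum is populated dynamically from the registered themes
        json_schema['enum'] = list(available_themes())
        return json_schema

    @classmethod
    def _validate(cls, value):
        if value not in available_themes():
            raise ValueError(f'unknown theme: {value}')
        return cls(value)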

src/napari/components/dims.py (2)

111-122: Field validator migration is correct.

The conversion from V1 @validator to V2 @field_validator with mode='before' and @classmethod decorator is properly done. The tuple conversion for multiple fields is preserved.


155-249: Model validator migration using object.__setattr__ is appropriate.

The migration from root_validator to model_validator(mode='after') correctly:

  1. Returns Self type and returns self at the end
  2. Uses object.__setattr__ to bypass validate_assignment and avoid infinite recursion
  3. Operates on instance attributes directly rather than a values dict

This is a valid and recommended pattern for V2 when you need to modify multiple fields atomically without re-triggering validation.
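
In outline, the pattern looks like this (hypothetical fields, assuming validate_assignment=True on the model):

from pydantic import BaseModel, ConfigDict, model_validator

class RangeSketch(BaseModel):
    model_config = ConfigDict(validate_assignment=True)
    low: float = 0.0
    high: float = 1.0

    @model_validator(mode='after')
    def _order_bounds(self):
        if self.low > self.high:
            # plain assignment would re-trigger validation and recurse,
            # so deliberately bypass validate_assignment here
            object.__setattr__(self, 'low', self.high)
        return self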

src/napari/components/viewer_model.py (4)

112-113: Good workaround for the dict method name conflict.

Using builtins.dict aliased to Dict prevents the type hint from conflicting with the dict() method defined on the class. This is a clean solution.


186-195: Correct migration from allow_mutation=False to frozen=True.

The frozen=True parameter in Pydantic V2 correctly replaces the V1 allow_mutation=False behavior, preventing these fields from being reassigned after initialization.
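
Minimal illustration (hypothetical field):

from pydantic import BaseModel, Field, ValidationError

class FrozenSketch(BaseModel):
    # V1: Field(..., allow_mutation=False); V2 equivalent:
    name: str = Field(default='viewer', frozen=True)

m = FrozenSketch()
try:
    m.name = 'other'
except ValidationError:
    pass  # frozen fields reject reassignment after initialization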


345-358: Theme validator correctly migrated to field_validator.

The @field_validator('theme') with @classmethod follows the V2 pattern. The validation logic remains unchanged and properly raises ValueError for invalid themes.


292-295: Type ignore comments are appropriate for mixin-provided attributes.

The # type: ignore[attr-defined] comments are necessary because the mouse callback lists (mouse_wheel_callbacks, etc.) are provided by the MousemapProvider mixin, and the type checker cannot see them on self during static analysis.

src/napari/_pydantic_compat.py (2)

132-149: display_errors function handles both V1 and V2 error formats well.

The implementation gracefully handles both dict-style (V1) and object-style error formats by checking for the get method, ensuring compatibility during the migration period.
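
A sketch of that duck-typing (illustrative, not the exact compat code):

def display_errors(errors):
    lines = []
    for err in errors:
        if hasattr(err, 'get'):   # dict-style entries (V1-like)
            loc, msg = err.get('loc', ()), err.get('msg', '')
        else:                     # object-style entries
            loc, msg = getattr(err, 'loc', ()), getattr(err, 'msg', '')
        lines.append(f"{' -> '.join(map(str, loc))}: {msg}")
    return '\n'.join(lines)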


214-267: ConstrainedInt stub provides V2 compatibility for V1-style constrained types.

The implementation of __get_pydantic_core_schema__ and validation logic is correct. The static analysis hints about long error messages (TRY003) are acceptable for this compatibility stub since these are internal validation messages.

src/napari/settings/_base.py (4)

40-88: Custom _exclude_defaults_evented handles Pydantic V2 limitation correctly.

This function addresses a real issue where Pydantic V2's exclude_defaults compares private attributes (__pydantic_private__), which always differ between instances of EventedModel. The manual comparison using only public model_fields is the correct approach.
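
The idea, roughly (a sketch using only public V2 APIs, not the napari helper itself):

def exclude_defaults_public(model):
    # compare only declared public fields; ignore __pydantic_private__,
    # which differs between EventedModel instances
    changed = {}
    for name, info in type(model).model_fields.items():
        value = getattr(model, name)
        default = info.get_default(call_default_factory=True)
        try:
            is_default = bool(value == default)
        except (TypeError, ValueError):
            is_default = value is default  # e.g. ambiguous array comparisons
        if not is_default:
            changed[name] = value
    return changed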


180-210: Initialization flow correctly merges settings from multiple sources.

The precedence order (environment > file > explicit values) is correctly implemented. Using copy.deepcopy before mutation prevents issues when validation creates temporary instances.

One minor observation: the copy_module import could be moved to the top of the file for consistency.
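
A sketch of the merge order (hypothetical variable names; napari's own deep_update in utils/misc.py plays this role):

import copy

def deep_update(base, updates):
    # recursively merge `updates` into a deep copy of `base`
    out = copy.deepcopy(base)
    for key, val in updates.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_update(out[key], val)
        else:
            out[key] = val
    return out

explicit_values = {'appearance': {'theme': 'light'}}
file_values = {'appearance': {'theme': 'dark', 'font_size': 12}}
env_values = {'appearance': {'theme': 'system'}}

merged = {}
for source in (explicit_values, file_values, env_values):  # low -> high precedence
    merged = deep_update(merged, source)
# merged['appearance'] == {'theme': 'system', 'font_size': 12}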


523-533: Validation in config_file_settings_source creates temporary instance with config_path=None.

This correctly avoids re-reading the config file during validation. The strict mode check properly re-raises validation errors when enabled.


597-617: _restore_config_data now guards against missing keys.

The added checks on lines 606-610 properly handle cases where keys might be excluded (e.g., by exclude_defaults) and restores from defaults when appropriate. This is a good defensive improvement.

Comment on lines +56 to +57
"npe2>=0.7.7; python_version < '3.14'",
"npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'",

⚠️ Potential issue | 🔴 Critical

Fix invalid PEP 508 syntax causing build failure.

The conditional npe2 dependency using a git URL with an environment marker is invalid PEP 508 syntax and causes the build to fail. Direct URL references cannot be combined with environment markers in the same dependency specification.

The pipeline error confirms this:

project.dependencies[13] must be pep508. Offending value: 'npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14''

Possible solutions:

  1. Use dependency groups (if Python 3.14 support can be optional initially):

    • Move the git-based npe2 to a separate dependency group
    • Keep standard npe2 in core dependencies with python_version < '3.14' marker
  2. Remove the environment marker from the git URL and handle Python 3.14 compatibility through documentation or constraints files

  3. Wait for npe2 to publish a Pydantic V2-compatible release to PyPI and use standard version markers

  4. Use PEP 508 extras or conditional imports if the feature is optional

🔎 Example fix using dependency groups approach
 dependencies = [
     ...
-    "npe2>=0.7.7; python_version < '3.14'",
-    "npe2 @ git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'",
+    "npe2>=0.7.7",
     ...
 ]

Then add a new dependency group:

[dependency-groups]
py314 = [
    "npe2 @ git+https://github.com/brisvag/npe2.git@py314",
]

Or add to optional dependencies:

[project.optional-dependencies]
py314 = [
    "npe2 @ git+https://github.com/brisvag/npe2.git@py314",
]

Note: This approach would require users on Python 3.14 to explicitly install the extra until npe2 releases a compatible version.

🤖 Prompt for AI Agents
In pyproject.toml around lines 56 to 57, the dependency line 'npe2 @
git+https://github.com/brisvag/npe2.git@py314; python_version >= '3.14'' uses an
invalid PEP 508 combination of a direct URL and an environment marker; fix by
removing the environment marker from the direct URL and moving the git-based
npe2 into a separate dependency group or optional dependency (e.g., add a new
[dependency-groups] or [project.optional-dependencies] entry for Python 3.14
that contains "npe2 @ git+https://github.com/brisvag/npe2.git@py314"), and keep
the existing "npe2>=0.7.7; python_version < '3.14'" in core dependencies.

Comment on lines +48 to +52
try:
from pydantic_extra_types.color import Color
except ImportError:
# Fallback if pydantic-extra-types not installed
Color = None # type: ignore[misc, assignment]

⚠️ Potential issue | 🟡 Minor

Color type fallback to None may cause runtime errors.

If pydantic-extra-types is not installed and code attempts to use Color, it will be None, leading to confusing TypeError exceptions. Consider raising a more informative error or using a stub class.

🔎 Proposed improvement for better error handling
 # Color type moved to pydantic-extra-types
 try:
     from pydantic_extra_types.color import Color
 except ImportError:
     # Fallback if pydantic-extra-types not installed
-    Color = None  # type: ignore[misc, assignment]
+    class Color:  # type: ignore[no-redef]
+        """Placeholder for Color when pydantic-extra-types is not installed."""
+        def __init__(self, *args, **kwargs):
+            raise ImportError(
+                "Color requires pydantic-extra-types. "
+                "Install it with: pip install pydantic-extra-types"
+            )
🤖 Prompt for AI Agents
In src/napari/_pydantic_compat.py around lines 48-52, the fallback sets Color =
None which causes unclear TypeError later; replace the None fallback with a
lightweight stub class named Color that raises a clear ImportError (or
RuntimeError) on instantiation with a message instructing to install
pydantic-extra-types (and optionally how to pin compatible versions), and add a
typing-ignore comment for linters if needed so the module still imports cleanly
but any runtime attempt to use Color fails with the informative error instead of
a confusing NoneType error.

Comment on lines +597 to +654
def _validate_cycle_mode_v2(model: ColorManager) -> tuple[np.ndarray, dict]:
"""V2-compatible version of _validate_cycle_mode that works with model instances."""
# Store original values before they get modified
original_values = {
'color_mode': model.color_mode,
'color_properties': model.color_properties,
'categorical_colormap': model.categorical_colormap,
'colors': model.colors,
'current_color': model.current_color,
'continuous_colormap': model.continuous_colormap,
'contrast_limits': model.contrast_limits,
}
# Create a copy for mutation
values = dict(original_values)
colors, updated_values = _validate_cycle_mode(values)
# Return only the fields that changed
updated = {}
for key in [
'color_properties',
'categorical_colormap',
'colors',
'current_color',
]:
if key in updated_values and updated_values[
key
] is not original_values.get(key):
updated[key] = updated_values[key]
return colors, updated


def _validate_colormap_mode_v2(model: ColorManager) -> tuple[np.ndarray, dict]:
"""V2-compatible version of _validate_colormap_mode that works with model instances."""
# Store original values before they get modified
original_values = {
'color_mode': model.color_mode,
'color_properties': model.color_properties,
'categorical_colormap': model.categorical_colormap,
'colors': model.colors,
'current_color': model.current_color,
'continuous_colormap': model.continuous_colormap,
'contrast_limits': model.contrast_limits,
}
# Create a copy for mutation
values = dict(original_values)
colors, updated_values = _validate_colormap_mode(values)
# Return only the fields that changed
updated = {}
for key in [
'color_properties',
'colors',
'current_color',
'contrast_limits',
]:
if key in updated_values and updated_values[
key
] is not original_values.get(key):
updated[key] = updated_values[key]
return colors, updated

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for _validate_cycle_mode and _validate_colormap_mode implementations
rg -n -B 2 -A 30 'def _validate_cycle_mode\(' src/napari/layers/utils/color_manager.py
echo "---"
rg -n -B 2 -A 30 'def _validate_colormap_mode\(' src/napari/layers/utils/color_manager.py

Repository: derekthirstrup/napari

Length of output: 68


🏁 Script executed:

#!/bin/bash
# Search for the validation functions more broadly
rg '_validate_cycle_mode|_validate_colormap_mode' src/napari/layers/utils/color_manager.py -n -B 2 -A 20

Repository: derekthirstrup/napari

Length of output: 4891


🏁 Script executed:

#!/bin/bash
# Find and examine the actual implementations in color_manager_utils
find . -name "color_manager_utils.py" -type f

Repository: derekthirstrup/napari

Length of output: 113


🏁 Script executed:

#!/bin/bash
# Read the color_manager_utils.py file to find the implementations
wc -l ./src/napari/layers/utils/color_manager_utils.py

Repository: derekthirstrup/napari

Length of output: 117


🏁 Script executed:

#!/bin/bash
# Read the entire color_manager_utils.py file
cat -n ./src/napari/layers/utils/color_manager_utils.py

Repository: derekthirstrup/napari

Length of output: 5762


🏁 Script executed:

#!/bin/bash
# Let me verify how colors are handled - check if colors is actually part of the returned values dict
# Look at the return statements more carefully
rg -n 'return colors, values' ./src/napari/layers/utils/color_manager_utils.py
rg -n 'return colors, updated' ./src/napari/layers/utils/color_manager.py

Repository: derekthirstrup/napari

Length of output: 186


🏁 Script executed:

#!/bin/bash
# Check the ColorProperties class definition to understand if it's mutable
rg -n -B 5 -A 15 'class ColorProperties' src/napari/layers/utils/color_manager.py

Repository: derekthirstrup/napari

Length of output: 659


🏁 Script executed:

#!/bin/bash
# Let me check if any numpy arrays or objects are mutated in place within the validation functions
# Look for assignments that modify the array itself rather than creating new ones
rg -n '\[.*\]\s*=' ./src/napari/layers/utils/color_manager_utils.py
rg -n '\.append|\.extend|\.insert|\.remove' ./src/napari/layers/utils/color_manager_utils.py

Repository: derekthirstrup/napari

Length of output: 377


🏁 Script executed:

#!/bin/bash
# Let me trace through to understand the flow better
# Check if ColorProperties or other objects are being modified in place
# Look at how color_properties is used in the validation functions
rg -n 'color_properties' ./src/napari/layers/utils/color_manager_utils.py -A 2

Repository: derekthirstrup/napari

Length of output: 1229


🏁 Script executed:

#!/bin/bash
# Double-check: does _validate_cycle_mode or _validate_colormap_mode ever set values['colors']?
rg "values\['colors'\]" ./src/napari/layers/utils/color_manager_utils.py
# Also check if they modify any passed-in objects in place
rg "\.append|\.pop|\.clear|__setitem__" ./src/napari/layers/utils/color_manager_utils.py

Repository: derekthirstrup/napari

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Let me verify the actual contents more carefully
# Check what keys are actually modified in the values dict
cat -n ./src/napari/layers/utils/color_manager_utils.py | grep -A 30 "def _validate_cycle_mode"

Repository: derekthirstrup/napari

Length of output: 1228


🏁 Script executed:

#!/bin/bash
# Check the actual flow - does cmap.map() return a new array or modify in place?
# Let me also check if there are any imports that might give hints
rg -n "from.*colormap|import.*colormap" ./src/napari/layers/utils/color_manager_utils.py -A 2

Repository: derekthirstrup/napari

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Let me verify how the comparison actually works by examining a specific case
# Check if 'colors' is ever actually set in the values dict by the validation functions
cat -n ./src/napari/layers/utils/color_manager_utils.py | tail -30
echo "---"
# Verify the flow: colors is returned separately, never in values dict
rg "values\[" ./src/napari/layers/utils/color_manager_utils.py | grep -v "values\['color_properties'\]" | grep -v "values\['continuous_colormap'\]" | grep -v "values\['categorical_colormap'\]" | grep -v "values\['contrast_limits'\]" | grep -v "values\['current_color'\]"

Repository: derekthirstrup/napari

Length of output: 1240


🏁 Script executed:

#!/bin/bash
# Check all keys that are explicitly set in the validation functions
rg "values\[" ./src/napari/layers/utils/color_manager_utils.py | grep -v "values\['" | head -20
echo "---"
rg "values\['" ./src/napari/layers/utils/color_manager_utils.py

Repository: derekthirstrup/napari

Length of output: 748


🏁 Script executed:

#!/bin/bash
# Let me verify the actual behavior by checking what gets returned and set
# The key insight: colors is returned separately, not in the values dict
# Let's verify the flow one more time to be absolutely sure

python3 << 'EOF'
# Simulate what happens in the code
original_values = {
    'color_properties': 'obj1',
    'colors': 'obj2',
    'categorical_colormap': 'obj3',
    'current_color': 'obj4',
}

# This simulates what _validate_cycle_mode does - it takes a copy
values = dict(original_values)

# In _validate_cycle_mode, it would do:
values['current_color'] = 'new_color'
values['categorical_colormap'] = 'obj3'  # reassigned but same object

# Now check the comparison logic from _validate_cycle_mode_v2
updated = {}
for key in ['color_properties', 'categorical_colormap', 'colors', 'current_color']:
    if key in values and values[key] is not original_values.get(key):
        updated[key] = values[key]
        print(f"{key}: {values[key]} is not {original_values.get(key)} -> included")
    else:
        print(f"{key}: No change detected")
        
print(f"\nUpdated dict: {updated}")
EOF

Repository: derekthirstrup/napari

Length of output: 264


🏁 Script executed:

#!/bin/bash
# This is critical - let me verify if 'colors' is ever actually set in the values dict
# by either _validate_cycle_mode or _validate_colormap_mode

echo "=== Checking _validate_colormap_mode ==="
cat -n ./src/napari/layers/utils/color_manager_utils.py | sed -n '76,118p'

Repository: derekthirstrup/napari

Length of output: 1773


The identity comparison approach is actually sound, but the change detection checks for 'colors', which the validation functions never set in the values dict.

The validation functions return colors as a separate value from the modified values dict. They only modify specific keys within values (like current_color, contrast_limits, categorical_colormap), but never colors. Since colors is returned separately and handled directly in the calling code, the redundant check in the loop is ineffectual but not a functional problem.

The identity comparison itself will work correctly because the validation functions don't mutate objects in place—they create new numpy arrays and reassign dict keys with either new values or the same object references.

🤖 Prompt for AI Agents
In src/napari/layers/utils/color_manager.py around lines 597 to 654, the
change-detection loops include the key 'colors' but the validation helpers never
set 'colors' in the returned updated_values (they return colors separately),
making that check redundant; remove 'colors' from both loops (the
_validate_cycle_mode_v2 and _validate_colormap_mode_v2 change-detection lists)
so the code only compares keys the validators actually mutate, and optionally
add a brief comment noting that 'colors' is returned separately by the
validators.

Comment on lines +212 to +309
def _load_env_settings(self) -> tuple[dict[str, Any], dict[str, Any]]:
    """Load settings from environment variables.

    Supports both flat (NAPARI_FIELD) and nested (NAPARI_SECTION_FIELD)
    paths, as well as custom env names defined in field json_schema_extra.

    Returns
    -------
    tuple[dict, dict]
        A tuple of (parsed_values, raw_values). parsed_values are for
        model initialization, raw_values are for caching (to identify
        env-provided settings).
    """
    import json

    parsed: dict[str, Any] = {}
    raw: dict[str, Any] = {}
    env_prefix = getattr(type(self), '_env_prefix', 'NAPARI_').upper()

    env_vars: Mapping[str, str | None] = {
        k.upper(): v for k, v in os.environ.items()
    }

    # Check for direct field mappings (flat access)
    # Use type(self).model_fields to avoid deprecation warning in V2.11+
    for field_name, field_info in type(self).model_fields.items():
        env_name = f'{env_prefix}{field_name.upper()}'
        if env_name in env_vars:
            val = env_vars[env_name]
            if val is not None:
                raw[field_name] = val
                # Try to parse as JSON for complex types
                try:
                    parsed[field_name] = json.loads(val)
                except (json.JSONDecodeError, TypeError):
                    parsed[field_name] = val

        # Check for nested field access (e.g., NAPARI_APPEARANCE_THEME)
        nested_prefix = f'{env_prefix}{field_name.upper()}_'
        for env_name, env_val in env_vars.items():
            if env_name.startswith(nested_prefix) and env_val is not None:
                nested_path = env_name[len(nested_prefix) :].lower()
                if field_name not in parsed:
                    parsed[field_name] = {}
                    raw[field_name] = {}
                elif not isinstance(parsed[field_name], dict):
                    continue  # Already set to non-dict value
                raw[field_name][nested_path] = env_val
                # Try to parse nested value as JSON
                try:
                    parsed[field_name][nested_path] = json.loads(env_val)
                except (json.JSONDecodeError, TypeError):
                    parsed[field_name][nested_path] = env_val

        # Check for custom env names in nested model fields (json_schema_extra)
        annotation = field_info.annotation
        if annotation is not None:
            try:
                if hasattr(annotation, 'model_fields'):
                    for (
                        nested_name,
                        nested_info,
                    ) in annotation.model_fields.items():
                        extra = nested_info.json_schema_extra or {}
                        if isinstance(extra, dict) and 'env' in extra:
                            custom_env = extra['env'].upper()
                            if custom_env in env_vars:
                                val = env_vars[custom_env]
                                if val is None:
                                    continue
                                if field_name not in parsed:
                                    parsed[field_name] = {}
                                    raw[field_name] = {}
                                elif not isinstance(
                                    parsed[field_name], dict
                                ):
                                    continue
                                raw[field_name][nested_name] = val
                                # Try to parse as JSON, but handle booleans specially
                                if val.lower() in ('true', '1', 'yes'):
                                    parsed[field_name][nested_name] = True
                                elif val.lower() in ('false', '0', 'no'):
                                    parsed[field_name][nested_name] = False
                                else:
                                    try:
                                        parsed[field_name][nested_name] = (
                                            json.loads(val)
                                        )
                                    except (
                                        json.JSONDecodeError,
                                        TypeError,
                                    ):
                                        parsed[field_name][nested_name] = val
            except (TypeError, AttributeError):
                pass

    return parsed, raw

⚠️ Potential issue | 🟡 Minor

Environment variable loading is comprehensive but complex.

The _load_env_settings method handles:

  1. Flat env vars (NAPARI_FIELD)
  2. Nested env vars (NAPARI_SECTION_FIELD)
  3. Custom env names via json_schema_extra
  4. JSON parsing with boolean handling

However, there's an inconsistency: boolean parsing (true, false, etc.) is only applied for custom env names (lines 290-294), not for standard nested fields (lines 260-263).

🔎 Proposed fix for consistent boolean handling
                     # Try to parse nested value as JSON
-                    try:
-                        parsed[field_name][nested_path] = json.loads(env_val)
-                    except (json.JSONDecodeError, TypeError):
-                        parsed[field_name][nested_path] = env_val
+                    if env_val.lower() in ('true', '1', 'yes'):
+                        parsed[field_name][nested_path] = True
+                    elif env_val.lower() in ('false', '0', 'no'):
+                        parsed[field_name][nested_path] = False
+                    else:
+                        try:
+                            parsed[field_name][nested_path] = json.loads(env_val)
+                        except (json.JSONDecodeError, TypeError):
+                            parsed[field_name][nested_path] = env_val
🤖 Prompt for AI Agents
In src/napari/settings/_base.py around lines 212 to 309, the method
_load_env_settings parses JSON for nested env vars but does not normalize
boolean-like strings for standard nested fields (the custom-env branch does),
causing inconsistent boolean handling; update the nested parsing block (the loop
iterating env_vars and handling nested_prefix around lines ~250-270) to treat
'true','1','yes' as True and 'false','0','no' as False before attempting
json.loads, mirroring the logic used for custom env names, and ensure
raw[field_name][nested_path] continues to store the original string.
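
One way to remove the duplication altogether is to route every branch through a single coercion helper. The sketch below is hypothetical (the name _coerce_env_value does not exist in napari); it mirrors the boolean-then-JSON-then-string order used in the custom-env branch:

import json

def _coerce_env_value(val: str) -> object:
    # Normalize boolean-like strings first, matching the custom-env branch,
    # then fall back to JSON parsing, then to the raw string.
    lowered = val.lower()
    if lowered in ('true', '1', 'yes'):
        return True
    if lowered in ('false', '0', 'no'):
        return False
    try:
        return json.loads(val)
    except (json.JSONDecodeError, TypeError):
        return val

With a helper like this, flat, nested, and custom-env values would all parse identically, while raw[...] can still store the original string for caching.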

@coderabbitai

coderabbitai bot commented Dec 31, 2025

Note

Docstrings generation - SUCCESS
Generated docstrings for this pull request at #4

coderabbitai bot added a commit that referenced this pull request Dec 31, 2025
Docstrings generation was requested by @derekthirstrup.

* #1 (comment)

The following files were modified:

* `src/napari/_pydantic_compat.py`
* `src/napari/_qt/dialogs/preferences_dialog.py`
* `src/napari/_vispy/layers/scalar_field.py`
* `src/napari/components/dims.py`
* `src/napari/components/viewer_model.py`
* `src/napari/layers/utils/color_manager.py`
* `src/napari/layers/utils/text_manager.py`
* `src/napari/settings/_application.py`
* `src/napari/settings/_base.py`
* `src/napari/settings/_fields.py`
* `src/napari/utils/color.py`
* `src/napari/utils/colormaps/colormap_utils.py`
* `src/napari/utils/misc.py`
* `src/napari/utils/theme.py`
- _qprocessors.py: Replace .copy(update=...) with .model_copy(update=...)
- widgets.py: Check both top-level and nested json_schema_extra for file_extension
- test_points.py, test_shapes.py: Accept TypeError for Pydantic V2 validation errors
- _yaml.py: Add type ignores for model_dump kwargs
- colormap.py: Use keyword args for CyclicLabelColormap construction
- _set.py: Implement __get_pydantic_core_schema__ for EventedSet (see the sketch after this commit note)
- info.py: Add fallback for model_dump/dict compatibility with npe2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
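
For context on the _set.py item above: in Pydantic V2, a custom container advertises its validation and serialization behavior via __get_pydantic_core_schema__. The following is a minimal sketch under assumptions, using a plain set subclass as a stand-in for EventedSet; the real implementation in napari wires in events and may differ in shape:

from typing import Any

from pydantic import GetCoreSchemaHandler
from pydantic_core import core_schema

class EventedSetLike(set):
    # Stand-in for EventedSet; illustration only.

    @classmethod
    def __get_pydantic_core_schema__(
        cls, source_type: Any, handler: GetCoreSchemaHandler
    ) -> core_schema.CoreSchema:
        return core_schema.no_info_after_validator_function(
            cls,  # build the set subclass from the validated input
            core_schema.set_schema(),
            serialization=core_schema.plain_serializer_function_ser_schema(
                list  # serialize back to a JSON-friendly list
            ),
        )

A model field typed as EventedSetLike would then accept set or list input in lax mode and round-trip through model_dump_json().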