Draft pull request

27 commits
- 5bde176 tracking: add a basic worker and hook tracking window controls (arashsm79, Oct 15, 2024)
- cf2f0e7 packaging: add imageio for testing cotracker (arashsm79, Oct 15, 2024)
- f05549c tracking: hook up backward controls as well (arashsm79, Oct 15, 2024)
- 0bbc3a6 tracking: add cotracker backend (arashsm79, Oct 21, 2024)
- 79c0e63 tracking: get a reference to the keypointcontrol instance (arashsm79, Oct 21, 2024)
- 7c28dc4 tracking: upgrade to cotracker3 and fix tracking keypoint coords (arashsm79, Oct 21, 2024)
- 28ed829 tracking: add keypoints back to the original layer preserving their p… (arashsm79, Oct 21, 2024)
- 67c1d0d tracking: add backward tracking (arashsm79, Oct 22, 2024)
- 9eb9ecf tracking: add tracking bothways (arashsm79, Oct 22, 2024)
- fd010e0 tracking: improve progress report (arashsm79, Oct 22, 2024)
- 89f894c tracking: add track to end. improve controls layout. (arashsm79, Oct 29, 2024)
- d29154e tracking: setup keybinginds for autotracking (arashsm79, Oct 29, 2024)
- 50ff568 tracking: allow stopping the tracking (arashsm79, Nov 3, 2024)
- 3e43c47 Revise README with new title and key bindings (arashsm79, Sep 15, 2025)
- f458ddc remove debugpy statements (arashsm79, Nov 26, 2025)
- d0e3636 Add finalized tracking changes (C-Achard, Jan 13, 2026)
- 1794a7c Fix broken README (C-Achard, Jan 13, 2026)
- 8549847 Fix unintended duplications from rebasing (C-Achard, Jan 14, 2026)
- a14509d Refactor KeypointControls trajectory plot checkbox (C-Achard, Jan 14, 2026)
- c551951 Improve logging and debug handling in tracking modules (C-Achard, Jan 16, 2026)
- 33eeacb Fix typo and remove redundant code in widgets (C-Achard, Jan 16, 2026)
- 016d611 Merge branch 'main' into cy/tracking-finalized (C-Achard, Jan 16, 2026)
- 4e10ab8 Added necessary extra dependencies for CI (C-Achard, Jan 16, 2026)
- a8b913d Update _widgets.py (C-Achard, Jan 16, 2026)
- bf5a036 Add output validation to tracking models and tests (C-Achard, Jan 16, 2026)
- e04a6b1 Clean up debug code and comments in tracking modules (C-Achard, Jan 16, 2026)
- b816980 Update README.md (C-Achard, Jan 16, 2026)
39 changes: 39 additions & 0 deletions README.md
@@ -181,6 +181,45 @@ a video and the DLC project's `config.yaml` file (into which the crop dimensions
Then it suffices to add a `Shapes layer`, draw a `rectangle` in it with the desired area,
and hit the button `Store crop coordinates`; coordinates are automatically written to the configuration file.

### Point-tracker assisted labeling

**EXPERIMENTAL FEATURE**

This feature lets you speed up the labeling process by using a simple point tracker.
Please note this is still a very basic implementation, and **it WILL overwrite existing annotations**
in your napari-DLC Points layer!

Always **backup your data** before using this feature, and **try it on a copy of your data first.**
We cannot be held responsible for any lost annotations!

The intended workflow is to annotate a single frame, use the tracker to propagate the annotations,
and manually correct any mistakes before saving.

Based on interest, we may polish the user experience and add more advanced tracking algorithms in the future.

**Basic usage:**

- The tracking widget is opened via the "Plugin > napari-deeplabcut: Tracking controls" menu.
- In the layer selection lists, select both the video layer and the Points layer to be used for tracking.
- Select the starting frame for tracking by moving the viewer slider to the desired frame.
- Select how many frames to track forward and backward, either relative to the current frame (Rel) or in absolute frame numbers (Abs).
- Use the track forward/backward/both buttons to run the tracker.

**Key bindings:**

- Tracking Controls
- **`l`** → Track **forward**
- **`k`** → Track **forward (to end)**
- **`h`** → Track **backward**
- **`j`** → Track **backward (to end)**

- Frame Navigation
- **`i`** → Move **forward one frame**
- **`u`** → Move **backward one frame**

**Known issues:**
- After several runs, keypoint identities may get shuffled. Do not run the tracker several times without checking the results in between.
- Tracking can only be run on plugin-controlled Points layers; a Points layer created manually cannot be tracked.

## Contributing

2 changes: 2 additions & 0 deletions setup.cfg
@@ -60,6 +60,8 @@ testing =
pytest-cov
pytest-qt
tox
tracking =
torch
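
With this extra declared, the optional tracking dependency can be pulled in at install time. A sketch, assuming the package is installed from PyPI under its repository name:

```shell
# Install the plugin together with the optional tracking extra (pulls in torch)
pip install "napari-deeplabcut[tracking]"
```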

[options.package_data]
napari_deeplabcut =
219 changes: 219 additions & 0 deletions src/napari_deeplabcut/_tests/conftest.py
@@ -8,6 +8,8 @@
from skimage.io import imsave

from napari_deeplabcut import _writer, keypoints
from napari_deeplabcut.tracking._data import TrackingModelInputs, TrackingWorkerData, TrackingWorkerOutput
from napari_deeplabcut.tracking._models import AVAILABLE_TRACKERS, RawModelOutputs, TrackingModel

# os.environ["NAPARI_DLC_HIDE_TUTORIAL"] = "True" # no longer on by default

@@ -32,6 +34,10 @@ def viewer(make_napari_viewer_proxy):
"napari-deeplabcut",
"Keypoint controls",
)
tracking_dock_widget, tracking_plugin_widget = viewer.window.add_plugin_dock_widget(
"napari-deeplabcut",
"Tracking controls",
)

try:
yield viewer
@@ -151,3 +157,216 @@ def video_path(tmp_path_factory):
writer.write(frame)
writer.release()
return output_path


# --- Tracking fixtures ---
DUMMY_TRACKER_NAME = "TestTracker"


class DummyTracker(TrackingModel):
"""
Minimal tracker that:
- echoes inputs to outputs with a tiny deterministic transform,
- emits progress via the callback,
- honors stop_callback.
"""

name = DUMMY_TRACKER_NAME
info_text = "Dummy tracker for unit testing."

def load_model(self, device: str):
# No-op model; keep a simple config to emulate 'step' like CoTracker.
class _NoOpModel:
step = 3

return _NoOpModel()

def prepare_inputs(self, cfg: "TrackingWorkerData", **kwargs) -> TrackingModelInputs:
        # Ensure video is (T, H, W, C) and keypoints is (K, 3) with columns [frame_idx, x, y]
video = np.asarray(cfg.video)
queries = np.asarray(cfg.keypoints).copy()
metadata = {
"keypoint_range": cfg.keypoint_range,
"backward_tracking": getattr(cfg, "backward_tracking", False),
}
return TrackingModelInputs(video=video, keypoints=queries, metadata=metadata)

def run(self, inputs: TrackingModelInputs, progress_callback, stop_callback, **kwargs) -> RawModelOutputs:
# Fake progression per frame; stop if requested.
T = inputs.video.shape[0]
K = inputs.keypoints.shape[0]

# Produce tracks of shape (T, K, 2) with a deterministic offset (e.g., +1 pixel)
tracks = np.zeros((T, K, 2), dtype=float)
for t in range(T):
progress_callback(t, T)
if stop_callback():
# Return partial result up to t
tracks = tracks[: t + 1]
vis = np.ones_like(tracks[..., 0], dtype=bool) # visibility dummy
return RawModelOutputs(keypoints=tracks, keypoint_features={"visibility": vis})
# Use the input (x, y) for all K points and add a tiny drift proportional to t
tracks[t, :, 0] = inputs.keypoints[:, 1] + 0.1 * t # x
tracks[t, :, 1] = inputs.keypoints[:, 2] + 0.1 * t # y

vis = np.ones_like(tracks[..., 0], dtype=bool)
return RawModelOutputs(keypoints=tracks, keypoint_features={"visibility": vis})
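
The per-frame loop in `run` can also be written without the loop via NumPy broadcasting. A standalone sketch (shapes and the 0.1-per-frame drift mirror `DummyTracker`; the variable names are illustrative only):

```python
import numpy as np

T, K = 5, 2
# Query points in DummyTracker's layout: columns [frame_idx, x, y]
queries = np.array([[0, 10.0, 20.0], [0, 30.0, 40.0]])

# Loop version, as in DummyTracker.run
tracks_loop = np.zeros((T, K, 2))
for t in range(T):
    tracks_loop[t, :, 0] = queries[:, 1] + 0.1 * t  # x
    tracks_loop[t, :, 1] = queries[:, 2] + 0.1 * t  # y

# Equivalent vectorized version: broadcast a (T, 1, 1) drift over (1, K, 2) coords
drift = 0.1 * np.arange(T)[:, None, None]   # (T, 1, 1)
tracks_vec = queries[None, :, 1:3] + drift  # (T, K, 2)

assert np.allclose(tracks_loop, tracks_vec)
```

The loop form is kept in the fixture because it exercises the per-frame `progress_callback`/`stop_callback` path, which the vectorized form cannot.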

def prepare_outputs(
self, model_outputs: RawModelOutputs, worker_inputs: "TrackingWorkerData" = None, **kwargs
) -> "TrackingWorkerOutput":
# Flatten (T, K, 2) -> (N, 3) with [frame_idx, x, y]
tracks = model_outputs.keypoints
T = tracks.shape[0]
K = tracks.shape[1]

T1, T2 = worker_inputs.keypoint_range
frame_ids = np.repeat(np.arange(T1, T1 + T), K)
flat = tracks.reshape(-1, 2)
keypoints = np.column_stack((frame_ids, flat)) # (N, 3)

# Minimal features: concat original per-keypoint features replicated per frame
keypoints_features = pd.concat(
[worker_inputs.keypoint_features] * T,
ignore_index=True,
)

return TrackingWorkerOutput(
keypoints=keypoints,
keypoint_features=keypoints_features,
)
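
The flattening in `prepare_outputs` relies on `np.repeat` emitting frame ids in the same frame-major order that `reshape(-1, 2)` unrolls the tracks. A quick standalone check of that pairing (toy numbers, not tied to the fixture):

```python
import numpy as np

T1, T, K = 2, 3, 2  # start frame, number of frames, number of keypoints
tracks = np.arange(T * K * 2, dtype=float).reshape(T, K, 2)

frame_ids = np.repeat(np.arange(T1, T1 + T), K)  # [2, 2, 3, 3, 4, 4]
flat = tracks.reshape(-1, 2)                     # rows unrolled frame-major
keypoints = np.column_stack((frame_ids, flat))   # (T*K, 3): [frame_idx, x, y]

assert keypoints.shape == (T * K, 3)
# Row for frame t, keypoint k sits at index t*K + k
assert keypoints[1 * K + 1, 0] == T1 + 1
assert np.allclose(keypoints[1 * K + 1, 1:], tracks[1, 1])
```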

def validate_outputs(self, inputs: TrackingModelInputs, outputs: "TrackingWorkerOutput") -> tuple[bool, str]:
"""
Validate DummyTracker outputs.

Expectations for DummyTracker:
- outputs.keypoints is an (N, 3) float array of [frame_idx, x, y]
- N == (T2 - T1) * K where:
T1, T2 = inputs.metadata["keypoint_range"]
T = T2 - T1 (number of frames produced)
K = inputs.keypoints.shape[0] (number of query points)
- frame_idx are integers in [T1, T2-1]
- x, y are finite. If video shape known, also check bounds: x∈[0,W), y∈[0,H)
- outputs.keypoint_features is a DataFrame with length N
and contains at least the columns present in worker_inputs.keypoint_features
(as repeated by the DummyTracker)
"""

# -------- Basic structure checks
kp = outputs.keypoints
if not isinstance(kp, np.ndarray):
return False, "outputs.keypoints must be a numpy array"

if kp.ndim != 2 or kp.shape[1] != 3:
return False, f"outputs.keypoints must have shape (N, 3); got {kp.shape}"

# -------- Expected length: N = (T2 - T1) * K
meta = inputs.metadata or {}
if (
"keypoint_range" not in meta
or not isinstance(meta["keypoint_range"], (tuple, list))
or len(meta["keypoint_range"]) != 2
):
return False, "inputs.metadata.keypoint_range must be a (T1, T2) tuple"

T1, T2 = meta["keypoint_range"]
if not (isinstance(T1, (int, np.integer)) and isinstance(T2, (int, np.integer)) and T2 >= T1):
return False, "Invalid keypoint_range; expected integers with T2 >= T1"

K = inputs.keypoints.shape[0]
expected_len = (T2 - T1) * K
if kp.shape[0] != expected_len:
return False, f"Expected (T*K)={expected_len} rows; got {kp.shape[0]}"

# -------- Frame index checks
frames = kp[:, 0]
# Allow float dtype but must be whole numbers
if not np.all(np.isfinite(frames)):
return False, "Frame indices contain non-finite values"

if not np.allclose(frames, np.round(frames)):
return False, "Frame indices must be integers"

frames_int = frames.astype(int)
if frames_int.min() < T1 or frames_int.max() > (T2 - 1):
return False, f"Frame indices out of range [{T1}, {T2 - 1}]"

# -------- Coordinate checks
xy = kp[:, 1:3]
if not np.all(np.isfinite(xy)):
return False, "Coordinates contain NaN/Inf"

# -------- Features checks
feats = outputs.keypoint_features
if not isinstance(feats, pd.DataFrame):
return False, "outputs.keypoint_features must be a pandas DataFrame"

if len(feats) != expected_len:
return False, f"keypoint_features length mismatch: expected {expected_len}, got {len(feats)}"

# When produced by DummyTracker, features are a concat of the input per frame
# Ensure at least the same columns are present and non-null
required_cols = []
try:
# worker_inputs.keypoint_features is replicated in DummyTracker.prepare_outputs
required_cols = list(self.cfg.keypoint_features.columns) # may exist on the tracker
except Exception:
# fallback to inputs.shape if not accessible; skip strict column match
pass

missing = [c for c in required_cols if c not in feats.columns]
if missing:
return False, f"Missing required feature columns: {missing}"

if required_cols:
if feats[required_cols].isna().any().any():
return False, "keypoint_features contain NaN in required columns"

return True, ""


@pytest.fixture(autouse=True)
def register_dummy_tracker():
"""
Auto-register DummyTracker for all tests and restore registry afterwards.
"""
prev = dict(AVAILABLE_TRACKERS)
AVAILABLE_TRACKERS[DUMMY_TRACKER_NAME] = {"class": DummyTracker}
try:
yield
finally:
AVAILABLE_TRACKERS.clear()
AVAILABLE_TRACKERS.update(prev)


@pytest.fixture
def track_worker_inputs():
"""
Provide minimal valid TrackingWorkerData with:
- 5-frame RGB video of 4x4 pixels,
- 2 keypoints,
- keypoint_range covering all frames,
- simple features DataFrame.
"""
video = np.zeros((5, 4, 4, 3), dtype=np.uint8)

keypoints = np.array(
[
[0, 10.0, 20.0],
[1, 30.0, 40.0],
> **Copilot AI (Jan 16, 2026):** The first column of the keypoints array appears to contain sequential IDs (0, 1) rather than frame indices. Based on the comment on line 11 stating '[0]: frame number in video', both keypoints should have frame index 0 since they're query points from the reference frame. Consider changing `[1, 30.0, 40.0]` to `[0, 30.0, 40.0]`.
],
dtype=float,
)

keypoint_features = pd.DataFrame({"id": [0, 1], "name": ["kp0", "kp1"]})

# Build TrackingWorkerData
return TrackingWorkerData(
tracker_name=DUMMY_TRACKER_NAME,
video=video,
keypoints=keypoints,
keypoint_range=(0, 5), # frames 0..4
keypoint_features=keypoint_features,
backward_tracking=False,
)
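
Given this fixture, `DummyTracker.validate_outputs` expects N = (T2 - T1) * K rows. A quick check of those numbers using the fixture's shapes:

```python
import numpy as np

# Shapes copied from the track_worker_inputs fixture
video = np.zeros((5, 4, 4, 3), dtype=np.uint8)
keypoints = np.array([[0, 10.0, 20.0], [1, 30.0, 40.0]], dtype=float)
T1, T2 = 0, 5  # keypoint_range

K = keypoints.shape[0]
expected_rows = (T2 - T1) * K  # every query point appears on every frame
assert expected_rows == 10
assert video.shape[0] == T2 - T1  # keypoint_range covers the whole clip
```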
2 changes: 1 addition & 1 deletion src/napari_deeplabcut/_tests/test_widgets.py
@@ -14,7 +14,7 @@ def test_guess_continuous():
assert not _widgets.guess_continuous(np.array(list("abc"))) # Strings → categorical


def test_keypoint_controls(viewer, qtbot):
def test_keypoint_controls(viewer):
controls = _widgets.KeypointControls(viewer)
controls.label_mode = "loop"
assert controls._radio_group.checkedButton().text() == "Loop"