Description
Search before asking
- I have searched the jetson-containers issues and found no similar feature requests.
 
jetson-containers Component
Packages
Bug
I was able to successfully build the nanoowl package using jetson-containers --skip-tests=all nanoowl. However, when I try to run tree_demo.py, after the first input prompt (for example, [a face]), the image on the webpage freezes. No bounding box is shown, and any further prompts produce no output.
Upon killing the demo, I get the following error:
INFO:root:Set prompt: [a face]
^CINFO:aiohttp.access:127.0.0.1 [16/Sep/2025:06:43:52 -0800] "GET /ws HTTP/1.1" 101 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/140.0.0.0 Safari/537.36"
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web.py", line 438, in _run_app
    await asyncio.sleep(3600)
  File "/usr/lib/python3.10/asyncio/tasks.py", line 605, in sleep
    return await future
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/nanoowl/examples/tree_demo/tree_demo.py", line 191, in <module>
    web.run_app(app, host=args.host, port=args.port)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web.py", line 530, in run_app
    loop.run_until_complete(main_task)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web.py", line 440, in _run_app
    await runner.cleanup()
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_runner.py", line 310, in cleanup
    await self._cleanup_server()
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_runner.py", line 399, in _cleanup_server
    await self._app.cleanup()
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_app.py", line 487, in cleanup
    await self.on_cleanup.send(self)
  File "/usr/local/lib/python3.10/dist-packages/aiosignal/__init__.py", line 52, in send
    await receiver(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_app.py", line 618, in _on_cleanup
    raise errors[0]
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_app.py", line 609, in _on_cleanup
    await it.__anext__()
  File "/opt/nanoowl/examples/tree_demo/tree_demo.py", line 181, in run_detection_loop
    await task
  File "/opt/nanoowl/examples/tree_demo/tree_demo.py", line 162, in detection_loop
    re, image = await loop.run_in_executor(None, _read_and_encode_image)
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/nanoowl/examples/tree_demo/tree_demo.py", line 143, in _read_and_encode_image
    detections = predictor.predict(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "/opt/nanoowl/nanoowl/tree_predictor.py", line 121, in predict
    owl_image_encodings[label_index] = self.owl_predictor.encode_rois(image_tensor, boxes[label_index])
  File "/opt/nanoowl/nanoowl/owl_predictor.py", line 267, in encode_rois
    roi_images, rois = self.extract_rois(image, rois, pad_square, padding_scale)
  File "/opt/nanoowl/nanoowl/owl_predictor.py", line 257, in extract_rois
    roi_images = roi_align(image, [rois], output_size=self.get_image_size())
  File "/usr/local/lib/python3.10/dist-packages/torchvision/ops/roi_align.py", line 258, in roi_align
    return torch.ops.torchvision.roi_align(
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1243, in __call__
    return self._op(*args, **kwargs)
NotImplementedError: Could not run 'torchvision::roi_align' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::roi_align' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
CPU: registered at /opt/torchvision/torchvision/csrc/ops/cpu/roi_align_kernel.cpp:390 [kernel]
Meta: registered at /dev/null:19 [kernel]
QuantizedCPU: registered at /opt/torchvision/torchvision/csrc/ops/quantized/cpu/qroi_align_kernel.cpp:283 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /opt/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:194 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:375 [backend fallback]
Named: registered at /opt/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradCPU: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradCUDA: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradHIP: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradXLA: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradMPS: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradIPU: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradXPU: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradHPU: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradVE: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradLazy: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradMTIA: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradMAIA: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradPrivateUse1: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradPrivateUse2: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradPrivateUse3: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradMeta: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
AutogradNestedTensor: registered at /opt/torchvision/torchvision/csrc/ops/autograd/roi_align_kernel.cpp:157 [autograd kernel]
Tracer: registered at /opt/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: registered at /opt/torchvision/torchvision/csrc/ops/autocast/roi_align_kernel.cpp:43 [kernel]
AutocastMTIA: fallthrough registered at /opt/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: registered at /opt/torchvision/torchvision/csrc/ops/autocast/roi_align_kernel.cpp:51 [kernel]
AutocastMPS: fallthrough registered at /opt/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: registered at /opt/torchvision/torchvision/csrc/ops/autocast/roi_align_kernel.cpp:35 [kernel]
FuncTorchBatched: registered at /opt/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
The main issue appears to be NotImplementedError: Could not run 'torchvision::roi_align' with arguments from the 'CUDA' backend. I think this is a PyTorch/CUDA compatibility issue: the backend list in the error shows no CUDA kernel registered for the op, which suggests the torchvision in the container was built without its CUDA extensions. PyTorch 2.8.0 is currently installed in the container. In the past this demo worked, but with PyTorch 2.5.0.
I've tried to manually create my own Dockerfile to run the nanoowl repo, but https://pypi.jetson-ai-lab.io only has PyTorch 2.8.0.
Environment
┌───────────────────────┬────────────────────────┐
│ L4T_VERSION   36.4.4  │ JETPACK_VERSION  6.2.1 │
│ CUDA_VERSION  12.6    │ PYTHON_VERSION   3.10  │
│ SYSTEM_ARCH   aarch64 │ LSB_RELEASE      22.04 │
└───────────────────────┴────────────────────────┘
Additional
No response
Are you willing to submit a PR?
- Yes I'd like to help by submitting a PR!