MemryX MX3 detector integration #17723

Open · wants to merge 95 commits into base: dev

Conversation

@tim-memryx commented Apr 15, 2025

Proposed change

This PR adds support for the MemryX MX3 AI accelerator as an option for object detectors (SDK link).

We've included options for the following models:

  • YOLOv9s (640)
  • YOLOX (640)
  • SSDLite-MobileNetV2 (320)
  • YOLO-NAS (320)

We've added a section to the main/Dockerfile that installs the necessary dependencies inside the container and downloads "DFP" files for each model (our term for compiled models). Note that these are compiled from their upstream sources (OpenVINO and Ultralytics**); they are not retrained or quantized, since the MX3 runs models with floating-point math.

**Ultralytics note: we do not use any Python code, PyTorch source, or even ONNX files from Ultralytics in this Frigate integration. Only the weights exist, in the form of our compiled DFPs. If that becomes an issue, we can remove the DFP download commands, or make whatever other changes you advise.

We've tested the containers across x86 systems, Raspberry Pi, and Orange Pi.
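
For illustration, a user's detector config might look roughly like this (a sketch only; exact keys follow the docs added in this PR, and the `PCIe:0` device string is what the plugin logs at startup):

    detectors:
      memx0:
        type: memryx      # detector plugin added in this PR
        device: PCIe:0    # MX3 device identifier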


Now to summarize the changed/added files:

Setup Scripts & Dockerfile

New File: memryx/user_installation.sh

  • Purpose:
    Installs required MX3 drivers and core libraries on the container host machine.

  • Note:
    Assumes a Debian/Ubuntu host (or their derivative distros such as Raspberry Pi OS and Armbian).

Dockerfile Modifications: main/Dockerfile

  • Updates:
    Added section to install dependencies (core libraries + Python pip packages) and download DFPs.

Frigate Additions/Modifications

New File: frigate/detectors/plugins/memryx.py

  • Purpose:
    Implements the detector using MX3 for the currently supported models. Called by async_run_detector.

Modified File: frigate/object_detection/base.py

  • Asynchronous async_run_detector() function:
    This function replaces the blocking call to detect_raw with two asynchronous Python threads: one that calls the detector's send_input and another that calls receive_output. This allows a purely async architecture such as the MX3 to reach maximum FPS.

    async_run_detector() was written to be agnostic to the MemryX detector and potentially useful for other detectors, while MemryX-specific functionality is kept separate in the detector plugin file.
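
A minimal sketch of the send/receive pattern (illustrative only, not the actual Frigate code; send_input/receive_output are the detector methods described above):

    import queue
    import threading

    def async_run_detector(detector, frame_queue: queue.Queue, result_queue: queue.Queue):
        """Two threads keep the accelerator's pipeline full: one feeds frames, one collects results."""

        def sender():
            while True:
                frame = frame_queue.get()      # wait for the next frame from the camera pipeline
                detector.send_input(frame)     # queue the frame on the accelerator without blocking on a result

        def receiver():
            while True:
                detections = detector.receive_output()  # blocks until the next result is ready
                result_queue.put(detections)

        threading.Thread(target=sender, daemon=True).start()
        threading.Thread(target=receiver, daemon=True).start()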


Summary

This PR adds support for the MemryX MX3 and multiple object detection models.

Please let us know if there are any changes you would like to see, as we're very excited to be added to Frigate!


Type of change

  • Dependency upgrade
  • Bugfix (non-breaking change which fixes an issue)
  • New feature
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code
  • Documentation Update

Checklist

  • The code change is tested and works locally.
  • Local tests pass. Your PR cannot be merged unless tests pass
  • There is no commented out code in this PR.
  • The code has been formatted using Ruff (ruff format frigate)
    • We ran ruff on just the new/modified files

@NickM-27 NickM-27 changed the base branch from dev to 0.17 May 22, 2025 12:29
@NickM-27 NickM-27 changed the base branch from 0.17 to dev May 22, 2025 12:29

@NickM-27 (Collaborator)

Let me know when the docs are ready for final review

@tim-memryx We did a build of this PR and found that the Docker image is substantially larger, from ~1 GB to ~5 GB. Is this expected? It would definitely be problematic for our default build to be this large, so we would likely want to push this to a separate Docker build variant if that is the case.

@tim-memryx (Author)

Hi @NickM-27,

Yep, the docs are ready for review. Please let us know if they look okay or if we need any changes, thanks!

Regarding the size, that's all from the memryx package's compiler dependencies. It seems the torch wheel on PyPI comes with ~3 GB of NVIDIA dependencies -- and there are some unnecessary GUI packages for memryx that can be removed to save another couple hundred MB.

We'll update our pip install commands in the Dockerfile to clean this up now.

-v /path/to/your/config:/config \
-v /etc/localtime:/etc/localtime:ro \
-e FRIGATE_RTSP_PASSWORD='password' \
--add-host gateway.docker.internal:host-gateway \

Collaborator:

Users running docker compose will likely need to add an extra-hosts: section too, correct? If so, it would be good to add this above.

Can you explain more about what's going on internally with this?


Thank you! Yes, I’ve added it to the Compose file now. We use gateway.docker.internal:host-gateway to allow the container to communicate with the device arbitration daemon running on the host.
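
For docker compose, that's an entry like the following (matching the snippet now in the docs):

    extra_hosts:
      - "gateway.docker.internal:host-gateway"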

@@ -132,6 +132,71 @@ If you are using `docker run`, add this option to your command `--device /dev/ha

Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.

### MemryX MX3

Collaborator:

This should go lower under the community supported section for now. You can also add it to the info card at the top to link to it

Due to the MX3's architecture, the maximum frames per second supported cannot be calculated as `1/inference time` and is measured separately. When deciding how many camera streams your configuration can support, use the **MX3 Total FPS** column to approximate the detector's limit, not the Inference Time.


| Model | Input Size | MX3 Inference Time | MX3 Total FPS |

Collaborator:

Let's not include FPS here; it is not really clear to users because Frigate can run multiple detections on the same frame, and since we don't have this metric in other sections it isn't directly comparable.

Author:

We included the FPS information in case a user is reading the Coral TPU section that says:

You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at 1000/10=100, or 100 frames per second.

This calculation of 1000 / inference time (ms) = FPS isn't true for the MX3, so we listed them separately. For example, for yolov9s-320, 1000/16ms = 62.5 FPS but the chip is actually running at 382 FPS (around 6x difference).

In other words: the input-to-output latency of a single frame for yolov9s-320 is 16 ms, while the time between output frames is 2.6 ms (1000/382).

What if we remove the FPS column and instead have a single "Inference Speed" column that reports time-between-frames latency rather than input-to-output latency? This would remove the extra column while still giving users a general sense of the accelerator's inference capacity relative to CPUs.

Collaborator:

This calculation of 1000 / inference time (ms) = FPS isn't true for the MX3, so we listed them separately. For example, for yolov9s-320, 1000/16ms = 62.5 FPS but the chip is actually running at 382 FPS (around 6x difference).

I'm not sure how that can be the case; if an inference as shown in the Frigate UI is 16 ms, then Frigate will only be able to run 62.5 detections in that second, which is exactly what these tables should be presenting to the user.

Author:

Currently in the async_run_detector, the duration timestamps start when the input thread pushes a new frame to the hardware detector, and stop when the output thread receives the frame, so this would be "in-to-out" latency.

But the output frames to the user are updated more frequently than this, because the async pipeline has multiple frames in-flight at a time. The time between updates to the user is the "frame-to-frame" latency, from which we calculate FPS.

Note that for the synchronous run_detector, the "in-to-out" and "frame-to-frame" latencies are equal.

Since the goal is to show users detections per second, we can modify async_run_detector to report the duration as the time between outputs ("frame-to-frame"). Then we'll redo the benchmarks and have a single column in the docs.
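
A minimal sketch of what that measurement change could look like (illustrative only, not the actual Frigate code):

    import time

    last_output_time = None

    def on_output_received(detections):
        """Report detector duration as the time between consecutive outputs ("frame-to-frame")."""
        global last_output_time
        now = time.monotonic()
        if last_output_time is not None:
            duration = now - last_output_time  # frame-to-frame latency, not in-to-out latency
            # publish `duration` as the detector's inference speed
        last_output_time = now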

Would this be okay?

Collaborator:

I think I am mostly understanding; how much of the inference is truly parallelized, though?

@hawkeye217 (Collaborator)

Hey Tim. I went ahead and tested the latest code here in the PR. I re-ran the installation script, restarted, and I'm seeing this:

2025-05-22 18:17:55.694682445  [2025-05-22 18:17:55] frigate.detectors.plugins.memryx INFO    : Model ZIP downloaded to /memryx_models/yolonas_320.zip. Extracting...
2025-05-22 18:17:55.705412761  [2025-05-22 18:17:55] frigate.detectors.plugins.memryx INFO    : Assigned Model Path: /memryx_models/yolonas_320/yolo_nas_s.dfp
2025-05-22 18:17:55.705456125  [2025-05-22 18:17:55] frigate.detectors.plugins.memryx INFO    : Assigned Post-processing Model Path: /memryx_models/yolonas_320/yolo_nas_s_post.onnx
2025-05-22 18:17:55.705485141  [2025-05-22 18:17:55] frigate.detectors.plugins.memryx INFO    : Cleaned up ZIP file after extraction.
2025-05-22 18:17:55.705537673  [2025-05-22 18:17:55] frigate.detectors.plugins.memryx INFO    : Initializing MemryX with model: /memryx_models/yolonas_320/yolo_nas_s.dfp on device PCIe:0
2025-05-22 18:17:55.705563653  [2025-05-22 18:17:55] frigate.detectors.plugins.memryx INFO    : dfp path: /memryx_models/yolonas_320/yolo_nas_s.dfp
2025-05-22 18:17:55.711591243  [2025-05-22 18:17:55] frigate.api.fastapi_app        INFO    : FastAPI started
2025-05-22 18:18:00.237433040  [INFO] Starting go2rtc healthcheck service...
2025-05-22 18:18:15.752262241  [2025-05-22 18:18:15] frigate.detectors.plugins.memryx ERROR   : Failed to initialize MemryX model: <_InactiveRpcError of RPC that terminated with:
2025-05-22 18:18:15.752264876  	status = StatusCode.UNAVAILABLE
2025-05-22 18:18:15.752266409  	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:172.17.0.1:10000: Failed to connect to remote host: Timeout occurred: FD Shutdown"
2025-05-22 18:18:15.752271399  	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2025-05-22T18:18:15.751641985-04:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:172.17.0.1:10000: Failed to connect to remote host: Timeout occurred: FD Shutdown"}"
2025-05-22 18:18:15.752272291  >
2025-05-22 18:18:15.753340894  Process detector:memryx:
2025-05-22 18:18:15.753341896  Traceback (most recent call last):
2025-05-22 18:18:15.753342848    File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2025-05-22 18:18:15.753343529      self.run()
2025-05-22 18:18:15.753344431    File "/opt/frigate/frigate/util/process.py", line 41, in run_wrapper
2025-05-22 18:18:15.753345122      return run(*args, **kwargs)
2025-05-22 18:18:15.753345823             ^^^^^^^^^^^^^^^^^^^^
2025-05-22 18:18:15.753346765    File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
2025-05-22 18:18:15.753347597      self._target(*self._args, **self._kwargs)
2025-05-22 18:18:15.753349350    File "/opt/frigate/frigate/object_detection/base.py", line 187, in async_run_detector
2025-05-22 18:18:15.753351204      object_detector = AsyncLocalObjectDetector(detector_config=detector_config)
2025-05-22 18:18:15.753362215                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-22 18:18:15.753363618    File "/opt/frigate/frigate/object_detection/base.py", line 58, in __init__
2025-05-22 18:18:15.753364910      self.detect_api = create_detector(detector_config)
2025-05-22 18:18:15.753366804                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-22 18:18:15.753383426    File "/opt/frigate/frigate/detectors/__init__.py", line 18, in create_detector
2025-05-22 18:18:15.753385169      return api(detector_config)
2025-05-22 18:18:15.753387073             ^^^^^^^^^^^^^^^^^^^^
2025-05-22 18:18:15.753389718    File "/opt/frigate/frigate/detectors/plugins/memryx.py", line 137, in __init__
2025-05-22 18:18:15.753391792      self.accl = AsyncAccl(
2025-05-22 18:18:15.753393515                  ^^^^^^^^^^
2025-05-22 18:18:15.753395149    File "memryx/runtime/accl.py", line 1049, in memryx.runtime.accl.AsyncAccl.__init__
2025-05-22 18:18:15.753396391    File "memryx/runtime/accl.py", line 105, in memryx.runtime.accl.Accl.__init__
2025-05-22 18:18:15.753415508    File "memryx/runtime/accl.py", line 174, in memryx.runtime.accl.Accl._configure
2025-05-22 18:18:15.753416770    File "memryx/runtime/accl.py", line 204, in memryx.runtime.accl.Accl._init
2025-05-22 18:18:15.753418063    File "/usr/local/lib/python3.11/dist-packages/grpc/_channel.py", line 1181, in __call__
2025-05-22 18:18:15.753419355      return _end_unary_response_blocking(state, call, False, None)
2025-05-22 18:18:15.753420578             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-22 18:18:15.753422071    File "/usr/local/lib/python3.11/dist-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
2025-05-22 18:18:15.753436318      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
2025-05-22 18:18:15.753437320      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-22 18:18:15.753438562  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
2025-05-22 18:18:15.753439414  	status = StatusCode.UNAVAILABLE
2025-05-22 18:18:15.753441428  	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:172.17.0.1:10000: Failed to connect to remote host: Timeout occurred: FD Shutdown"
2025-05-22 18:18:15.753444273  	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2025-05-22T18:18:15.751641985-04:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:172.17.0.1:10000: Failed to connect to remote host: Timeout occurred: FD Shutdown"}"
2025-05-22 18:18:15.753452419  >
2025-05-22 18:18:25.643392950  [2025-05-22 18:18:25] frigate.watchdog               INFO    : Detection appears to have stopped. Exiting Frigate...

I do see the device showing up in dmesg:

[    2.631327] memryx: finished search for PCIe-connected devices
[    2.631693] memryx: module_init: kernel module loaded. char major Id(237).

I can see it in /dev/memx0 and I've passed it into the container in my docker-compose:

    devices:
      - /dev/memx0
    extra_hosts:
      - "gateway.docker.internal:host-gateway"

Can you let me know if I've missed something?

@NickM-27 (Collaborator)

Regarding the image size, it is better but still about twice the size of the image before this PR. It looks like torch-cpu is using a lot of it; is this required for inference?

@abinila4 commented May 23, 2025


Hi @hawkeye217,

Thank you for the update. When you get a chance, could you please check the following to help troubleshoot the connection issue?

1. Service Status

Please check if the MemryX daemon is running and connected:

sudo service mxa-manager status

2. Listening Interface

Ensure the daemon is listening on all interfaces.
If it's currently bound to 127.0.0.1, you can update the config with:

sudo sed -i 's/^LISTEN_ADDRESS=.*/LISTEN_ADDRESS="0.0.0.0"/' /etc/memryx/mxa_manager.conf

After making the above changes, please restart the daemon:

sudo service mxa-manager restart

3. Firewall Rules (if using UFW)

If your system uses ufw, the firewall might be blocking access between the container and the host.
You can allow traffic from the Docker bridge network with:

sudo ufw allow from 172.17.0.0/24 to any

Please let me know if you're still seeing issues afterward — happy to help further!

@hawkeye217 (Collaborator)

Here's what I see:

jhawkins@memryxPC:~/frigate/config$ sudo service mxa_manager status
Unit mxa_manager.service could not be found.
jhawkins@memryxPC:~/frigate/config$ ps aux | grep mxa
daemon      6949  0.1  0.0 2338784 19620 ?       Ssl  15:25   0:00 /usr/bin/mxa_manager
jhawkins   12447  0.0  0.0   9144  2244 pts/0    S+   15:28   0:00 grep --color=auto mxa
jhawkins@memryxPC:~/frigate/config$ cat /etc/memryx/mxa_manager.conf 
# address to listen on
#
# * defaults to 127.0.0.1 to limit to local traffic
#
# * set to 0.0.0.0 or an interface address to accept network traffic
LISTEN_ADDRESS="0.0.0.0"

# the daemon will use 3 ports in order starting with this number
#
# NOTE: be sure to make sure your client applications use these ports too!
BASE_PORT=10000
jhawkins@memryxPC:~/frigate/config$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
Anywhere                   ALLOW       172.17.0.0/24             
22/tcp                     ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)    

@abinila4

sudo service mxa_manager status

I apologize. Could you please try:

sudo service mxa-manager status

@hawkeye217 (Collaborator)

$ sudo service mxa-manager status
● mxa-manager.service - The MemryX MX3 device management daemon.
     Loaded: loaded (/usr/lib/systemd/system/mxa-manager.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-05-23 15:25:39 EDT; 10min ago
    Process: 6925 ExecStartPre=/bin/sleep 5 (code=exited, status=0/SUCCESS)
   Main PID: 6949 (mxa_manager)
      Tasks: 33 (limit: 37091)
     Memory: 4.8M (peak: 6.6M)
        CPU: 594ms
     CGroup: /system.slice/mxa-manager.service
             └─6949 /usr/bin/mxa_manager

May 23 15:25:34 memryxPC systemd[1]: Starting mxa-manager.service - The MemryX MX3 device management daemon....
May 23 15:25:39 memryxPC systemd[1]: Started mxa-manager.service - The MemryX MX3 device management daemon..
May 23 15:25:39 memryxPC mxa_manager[6949]: Server listening on 0.0.0.0:10000
May 23 15:25:39 memryxPC mxa_manager[6949]: Server listening on 0.0.0.0:10001
May 23 15:25:39 memryxPC mxa_manager[6949]: Server listening on 0.0.0.0:10002

@abinila4

sudo service mxa-manager status

Thanks for sharing that — this confirms:

  1. mxa_manager is running
  2. It’s listening on 0.0.0.0:10000 (as well as 10001 and 10002)

Just to rule out any firewall-related issues, could you please try temporarily disabling UFW with:

sudo ufw disable

After that, could you please restart the container and check again? Thank you.

@hawkeye217 (Collaborator) commented May 23, 2025

Thanks for the assistance. Disabling ufw caused Frigate to start normally.

Disabling the firewall isn't really a long-term solution, so what firewall allow rules are needed to make this work correctly?

This may be a hangup for our very security-conscious and privacy-focused users, who run on a variety of platforms, operating systems, and hardware. None of the other detectors that Frigate supports need network connections back to the host in any way, so it would probably be good to document exactly what mxa_manager does, along with troubleshooting steps.

@hawkeye217 (Collaborator)

Something else of note: it looks like the model files are downloaded every time I restart Frigate.

[2025-05-23 17:39:36] frigate.detectors.plugins.memryx INFO : Model files not found. Downloading from https://developer.memryx.com/example_files/1p2_frigate/yolonas_320.zip
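
(For what it's worth, mapping that directory to a host volume, e.g. something like the following in compose, would presumably let the files persist across restarts:)

    volumes:
      - /path/on/host/memryx_models:/memryx_models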

@tim-memryx (Author)

Hi @hawkeye217,

For firewall rules, the host needs to accept connections from the Docker virtual interface for mxa_manager (ports 10000, 10001, and 10002 by default). I think the issue might be that the Docker interfaces can have varying IPs on different systems, or maybe that ufw classifies this traffic as "forwarding" and blocks it by default?
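
For example, an allow rule along these lines should cover it (adjust the subnet to match your Docker bridge network; 172.17.0.0/16 is the default):

sudo ufw allow from 172.17.0.0/16 to any port 10000:10002 proto tcp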

mxa_manager's primary purpose is to support multiple processes [either on the host, in containers, or both] accessing the hardware at the same time, and it also manages device locking. But in Frigate's case, it's only being used for device lock management.

The network config can definitely be tricky/risky for users -- on that note, the upcoming SDK release (in a month or so) supports using Unix domain sockets instead of TCP/IP, and defaults to them, which should make the Docker config a lot easier since it's just a matter of adding a volume mount for the socket file.

@hawkeye217 (Collaborator)

Thanks, Tim. That's great. We look forward to the upcoming release.

@github-actions github-actions bot added the stale label Jun 23, 2025
@NickM-27 NickM-27 added pinned and removed stale labels Jun 26, 2025