Rewrite StandardDetector #1177

Merged

coretl merged 72 commits into main from detector-rewrite on Feb 18, 2026
Conversation

@coretl (Collaborator) commented Jan 9, 2026

Summary

This PR is a major architectural refactor of StandardDetector that replaces the inheritance-heavy design with a composition-based approach. Instead of using DetectorController and DetectorWriter, it introduces three separate logic classes (DetectorTriggerLogic, DetectorArmLogic, and DetectorDataLogic) that can be mixed and matched to build detectors with different behaviors.

The refactor also renames trigger types for clarity:

  • EDGE_TRIGGER → EXTERNAL_EDGE
  • CONSTANT_GATE/VARIABLE_GATE → EXTERNAL_LEVEL

This change makes it significantly easier to support advanced use cases like:

  • Detectors with multiple HDF writers for different ROIs
  • Combining file writing with signal reading from stats plugins
  • Continuously acquiring detectors

Fixes #1006, fixes #948, fixes #991, fixes #1037, fixes #776, fixes #983, fixes #454

Implemented on top of #1158

Breaking Changes

All detector implementations need updating:

Detector Controller → Trigger Logic + Arm Logic

# old
class SimController(DetectorController):
    def __init__(self, driver: ADBaseIO):
        self.driver = driver
        
    async def prepare(self, trigger_info: TriggerInfo):
        assert trigger_info.trigger == DetectorTrigger.INTERNAL, "Can only do internal"
        await self.driver.num_images.set(trigger_info.number_of_events)
        
    async def arm(self):
        await self.driver.acquire.set(True)
        
    async def wait_for_idle(self):
        await wait_for_value(self.driver.acquire, False, timeout=DEFAULT_TIMEOUT)
        
    async def disarm(self):
        await self.driver.acquire.set(False)

# new  
class SimTriggerLogic(DetectorTriggerLogic):
    def __init__(self, driver: ADBaseIO):
        self.driver = driver
        
    # Also prepare_edge and prepare_level if hardware triggering supported
    async def prepare_internal(self, num: int, livetime: float, deadtime: float):
        await self.driver.num_images.set(num)

# if ADArmLogic is not suitable
class SimArmLogic(DetectorArmLogic):
    def __init__(self, driver: ADBaseIO):
        self.driver = driver
        
    async def arm(self):
        await self.driver.acquire.set(True)
        
    async def wait_for_idle(self):
        await wait_for_value(self.driver.acquire, False, timeout=DEFAULT_TIMEOUT)
        
    async def disarm(self):
        await self.driver.acquire.set(False)

Detector Writer → Data Logic

# old
class ADHDFWriter(DetectorWriter):
    async def open(self, name: str, exposures_per_event: int = 1):
        # Setup file writing
        return describe_dict
        
# new
class ADHDFDataLogic(DetectorDataLogic):
    async def prepare_unbounded(self, detector_name: str):
        # Setup file writing  
        return StreamResourceDataProvider(...)

TriggerInfo Updates

# old
TriggerInfo(
    number_of_events=10,
    trigger=DetectorTrigger.EDGE_TRIGGER,
    livetime=0.1,
    deadtime=0.01,
)

# new  
TriggerInfo(
    number_of_events=10,
    trigger=DetectorTrigger.EXTERNAL_EDGE,
    livetime=0.1,
    deadtime=0.01,
)

Complete SimDetector Example

# old - controller and writer_cls.with_io

from ophyd_async.epics import adcore

class SimDetector(adcore.AreaDetector[SimController]):
    def __init__(
        self,
        prefix: str,
        path_provider: PathProvider,
        drv_suffix="cam1:",
        writer_cls: type[adcore.ADWriter] = adcore.ADHDFWriter,
        fileio_suffix: str | None = None,
        name="",
        config_sigs: Sequence[SignalR] = (),
        plugins: dict[str, adcore.NDPluginBaseIO] | None = None,
    ):
        driver = adcore.ADBaseIO(prefix + drv_suffix)
        controller = SimController(driver)        
        writer = writer_cls.with_io(
            prefix,
            path_provider,
            dataset_source=driver,
            fileio_suffix=fileio_suffix,
            plugins=plugins,
        )        
        super().__init__(
            controller=controller,
            writer=writer,
            plugins=plugins,
            name=name,
            config_sigs=config_sigs,
        )

# new - handled by the baseclass
from ophyd_async.epics import adcore

class SimDetector(adcore.AreaDetector[adcore.ADBaseIO]):
    """Create an ADSimDetector AreaDetector instance."""
    
    def __init__(
        self,
        prefix: str,
        path_provider: PathProvider | None = None,
        driver_suffix="cam1:",
        writer_type: ADWriterType | None = ADWriterType.HDF,
        writer_suffix: str | None = None,
        plugins: dict[str, NDPluginBaseIO] | None = None,
        config_sigs: Sequence[SignalR] = (),
        name: str = "",
    ) -> None:
        driver = adcore.ADBaseIO(prefix + driver_suffix)
        super().__init__(
            prefix=prefix,
            driver=driver,
            arm_logic=adcore.ADArmLogic(driver),
            trigger_logic=SimDetectorTriggerLogic(driver),
            path_provider=path_provider,
            writer_type=writer_type,
            writer_suffix=writer_suffix,
            plugins=plugins,
            config_sigs=config_sigs,
            name=name,
        )

Reading Stats Without Files

# old - not easily supported

# new - use PluginSignalDataLogic
detector = SimDetector(prefix, writer_type=None)
detector.add_detector_logics(PluginSignalDataLogic(driver, stats.total))
# Now stats.total appears in read() without file writing

Multiple Data Streams

# old - required complex inheritance

# new - add multiple data logics
detector = AreaDetector(
    driver=driver,
    arm_logic=ADArmLogic(driver),
    writer_type=None,  # Don't create default writer
)
# Add separate HDF writers for different ROIs
detector.add_detector_logics(
    ADHDFDataLogic(..., datakey_suffix="-roi1"),
    ADHDFDataLogic(..., datakey_suffix="-roi2"),
)

@coretl (Collaborator, Author) commented Jan 12, 2026

Use cases still to support:

  • N HDF writers in the same detector that write different sized ROIs to the same stream; this needs support for multiple DataLogics, or multiple HDF writers in a single DataLogic
  • Lambda detector "event mode": writing raw frames and N summed frames to different streams; this needs support for a "do nothing" DetectorArmLogic and DetectorTriggerLogic
  • Supporting read() giving stats plugin PVs at the same time as optionally file writing the arrays
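A "do nothing" arm logic for the free-running Lambda case could be as small as the sketch below. The method names mirror the arm logic shown in the PR description, but `NullArmLogic` itself is hypothetical and not part of the PR:

```python
import asyncio


class NullArmLogic:
    """Hypothetical "do nothing" arm logic: the detector free-runs,
    so arm/disarm/wait_for_idle are all no-ops."""

    async def arm(self) -> None:
        pass  # hardware is already acquiring continuously

    async def wait_for_idle(self) -> None:
        pass  # never blocks: there is no idle state to wait for

    async def disarm(self) -> None:
        pass  # leave the hardware acquiring


async def main() -> str:
    # Exercise the full arm/idle/disarm cycle without touching hardware
    logic = NullArmLogic()
    await logic.arm()
    await logic.wait_for_idle()
    await logic.disarm()
    return "armed-idled-disarmed"


print(asyncio.run(main()))  # → armed-idled-disarmed
```

A real version would implement whatever base class or protocol `DetectorArmLogic` defines; the point is only that every method can legitimately be a no-op.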

@DominicOram (Contributor) commented Jan 16, 2026

@olliesilvester, @coretl, @shihab-dls and I discussed how the Jungfrau would fit into this.

The particularly hard thing on the Jungfrau is pedestal mode. This is a mode for taking darks where you set a number of frames F and a number of loops L, and the device will create 2*F*L dark frames. There are separate PVs for the frames (pedestal_mode_frames) and loops (pedestal_mode_loops), as well as one for setting pedestal mode (pedestal_mode_state). When you are in pedestal mode, the device does not allow you to set the number of loops/frames or the trigger type.

The conclusions of how we would do this were:

  • The plan will set number of pedestal frames, loops and the soft signal for telling the detector you are in pedestal mode
  • The plan will prepare the detector with internal trigger of 2*F*L frames
  • The prepare of the detector will check the soft pedestal signal, validate that the detector is in the correct state to do the pedestal and then set the real pedestal mode on the detector

@coretl will do this refactor
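The plan-side arithmetic from the conclusions above can be sketched as follows. `TriggerInfo` here is a minimal stand-in for the real ophyd_async class, and `pedestal_trigger_info` is a hypothetical helper, not code from this PR:

```python
from dataclasses import dataclass


@dataclass
class TriggerInfo:
    # Minimal stand-in for ophyd_async's TriggerInfo, just for this sketch
    number_of_events: int
    trigger: str = "INTERNAL"


def pedestal_trigger_info(frames: int, loops: int) -> TriggerInfo:
    """Pedestal mode produces 2 * F * L dark frames, so the plan prepares
    the detector for that many internally triggered events."""
    return TriggerInfo(number_of_events=2 * frames * loops)


info = pedestal_trigger_info(frames=100, loops=3)
print(info.number_of_events)  # → 600
```

The validation step (checking the soft pedestal signal and switching the device into real pedestal mode) would then live in the detector's prepare, as the last bullet describes.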

@coretl (Collaborator, Author) commented Jan 16, 2026

@claude only review src/ophyd_async/core/_detector.py and src/ophyd_async/epics changes. Unit tests for core and fastcs are not updated yet so failures are expected

@claude bot commented Jan 16, 2026

Claude finished @coretl's task in 4m 0s


Code Review Complete

I've reviewed src/ophyd_async/core/_detector.py and the src/ophyd_async/epics changes as requested. Here are the key findings:

Critical Issues

  • Line 225: The ... ellipsis statement causes "Statement has no effect" static-analysis warnings. It should use pass instead.

Architecture & Design

  • Logic detection pattern (lines 159, 228): The _trigger_logic_supported and _data_logic_supported functions use __func__ comparison which may not work reliably. Consider using explicit protocol checks or ABC.
  • Type matching (line 301): Using match/case for type checking is non-idiomatic. isinstance() checks would be clearer.
  • Enum factory method (line 306 in _data_logic.py): ADWriterType.make_detector() is complex logic in an enum. Consider extracting to a factory function.
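The override-detection concern can be illustrated with plain Python. The class and method names below are illustrative stand-ins, not the real ophyd_async API:

```python
class TriggerLogicBase:
    """Stand-in base class with an optional method subclasses may override."""

    def prepare_edge(self):
        raise NotImplementedError("hardware triggering not supported")


class EdgeCapable(TriggerLogicBase):
    def prepare_edge(self):
        return "configured for external edge triggers"


class InternalOnly(TriggerLogicBase):
    pass


def supports_edge(logic: TriggerLogicBase) -> bool:
    # Accessing the method on the class yields the plain function, so
    # identity comparison works directly. Accessing it on an instance
    # yields a bound method, which must be unwrapped via __func__ first;
    # wrappers, classmethods, or descriptors can break either check, which
    # is the fragility the review flags.
    return type(logic).prepare_edge is not TriggerLogicBase.prepare_edge


print(supports_edge(EdgeCapable()))   # → True
print(supports_edge(InternalOnly()))  # → False
```

An explicit protocol check or a `supports_edge()` classmethod on the base class would make the capability query robust against those edge cases.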

Code Quality

  • Typos:
    • "DetectorCobntroller" in adandor.py:57
    • "Vimbda" in advimba.py:43
    • "on it's own" in adaravis.py:69

Potential Bugs

  • Type ignore (line 272 in _data_logic.py): Suppressing type error for directory_uri - should be handled properly
  • Data provider reuse (line 372 in _detector.py): Only checks collections_per_event when reusing providers. Should verify other parameters too?
  • Default parameter (line 19 in _arm_logic.py): driver_armed_signal defaults to driver.acquire which requires driver to be initialized

Suggestions

  • Break down complex validation in ADContAcqTriggerLogic.prepare_internal into helper methods
  • Consider consolidating get_ndarray_resource_info calls to reduce duplication
  • Use assertions for "should never happen" cases instead of error_if_none
  • Document the optional method override pattern for DetectorTriggerLogic and DetectorDataLogic

All inline comments have been added to specific locations. The refactor looks solid overall - the new composition-based architecture with separate arm/trigger/data logic is much cleaner than the previous inheritance-heavy approach.

Tasks:

  • Check for repository guidelines
  • Review src/ophyd_async/core/_detector.py
  • Review src/ophyd_async/epics/adcore changes
  • Review src/ophyd_async/epics detector implementations
  • Provide inline comments for specific issues
  • Summarize findings

@coretl (Collaborator, Author) commented Jan 28, 2026

@jwlodek I've included the color mode things in here too

@jwlodek (Member) left a comment

Had a read through everything; overall I like it. It seems to me like a much more intuitive way of setting up customized detector setups than complex inheritance. I mentioned a few minor things in suggestions.

It could exist already, but is there a way to clear out the logics configured for a detector? I'd be interested to see if we could have a few sets of logics that we apply for different plans.

I will try and give it a test with some real hardware by the end of this week.

@coretl (Collaborator, Author) commented Jan 29, 2026

> It could exist already, but is there a way to clear out the logics configured for a detector? I'd be interested to see if we could have a few sets of logics that we apply for different plans.

There isn't at the moment, the idea being that the detector subclasses with writer_type should have the arm and trigger logic fixed for that sort of detector. I think I'd prefer different detector instances adding different writers rather than mutable objects. Maybe we could discuss this in a future issue though.

@jwlodek (Member) left a comment

I had a chance to try this out today with a new detector I'm integrating. For the most part it was pretty straightforward. One thing I ran into is that it seems like I needed to prime the HDF plugin before an acquisition - the initial acquisition produced a chunk_shape of [1, 0, 0] which tiled refused to ingest. Looking through the code I don't think that should be the case, so I'll double check tomorrow - I'm writing the AD driver as I go so perhaps a bug there, didn't have a chance to check for sure.

While looking at the shape/chunk_shape code though, I noticed that we also need to handle plugins as shape sources (and the CB plugin should be used for the ContAcqDetector), and I started to think about how to potentially handle non-color higher-dimensional detectors, like one that is on my plate to work on in the coming months.

prefix: str,
path_provider: PathProvider,
writer_suffix: str | None,
driver: ADBaseIO,

Suggested change
driver: ADBaseIO,
array_io: NDArrayBaseIO,

@jwlodek (Member):

We need to allow passing plugins here to use them as the source of the dataset size. The name is just a suggestion; it can stay driver too.

@coretl (Collaborator, Author):

My expectation for those that use a plugin as a source of dataset size is that people would set writer=None, then construct the NDArrayDescription and writer themselves to add it on afterwards; otherwise this function gets unwieldy.

Comment on lines 311 to 314
description = NDArrayDescription(
shape_signals=[driver.array_size_y, driver.array_size_x],
data_type_signal=driver.data_type,
color_mode_signal=driver.color_mode,

Suggested change
description = NDArrayDescription(
shape_signals=[driver.array_size_y, driver.array_size_x],
data_type_signal=driver.data_type,
color_mode_signal=driver.color_mode,
if isinstance(array_io, ADBaseIO):
shape_signals = [array_io.array_size_y, array_io.array_size_x]
else:
shape_signals = [array_io.array_size0, array_io.array_size1, array_io.array_size2]
description = NDArrayDescription(
shape_signals=shape_signals,
data_type_signal=array_io.data_type,
color_mode_signal=array_io.color_mode,

See above. We need to pass dims 0-2, since depending on whether the image is color or not, X/Y will be 0 and 1 or 1 and 2.

@jwlodek (Member) commented Feb 3, 2026

I actually also have a case that I'm working on that is a detector that produces three dimensional monochrome data: essentially it's a 264x264 array, with each "pixel" actually being a spectrum: https://pndetector.de/products-applications/product/the-pncdd-color-x-ray-camera-cxc/ (so we end up with 1024 x 264 x 264 data)

As such, I wonder if this shape-describing logic should live in a provider/describer style callable class that takes an NDArrayBaseIO as an argument at init and returns the dataset description, to allow for easy overrides; hard-coding the x/y signals seems a bit inflexible for such cases.
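The describer-style callable being proposed might look something like the sketch below. Every name here (`ShapeDescriber`, `FakeDriver`, the describer classes) is a hypothetical illustration, not existing ophyd_async code:

```python
from dataclasses import dataclass
from typing import Protocol


class ShapeDescriber(Protocol):
    """Hypothetical protocol: a callable that yields the dataset shape."""

    def __call__(self) -> tuple[int, ...]: ...


@dataclass
class FakeDriver:
    # Stand-in for NDArrayBaseIO signal values, just for this sketch
    array_size_x: int
    array_size_y: int


class XYShapeDescriber:
    """Default behaviour: 2D monochrome shape from the driver's X/Y sizes."""

    def __init__(self, driver: FakeDriver) -> None:
        self.driver = driver

    def __call__(self) -> tuple[int, ...]:
        return (self.driver.array_size_y, self.driver.array_size_x)


class SpectralShapeDescriber(XYShapeDescriber):
    """Override for a detector whose pixels are spectra, giving 3D data."""

    def __init__(self, driver: FakeDriver, spectrum_length: int) -> None:
        super().__init__(driver)
        self.spectrum_length = spectrum_length

    def __call__(self) -> tuple[int, ...]:
        return (self.spectrum_length, *super().__call__())


driver = FakeDriver(array_size_x=264, array_size_y=264)
print(XYShapeDescriber(driver)())              # → (264, 264)
print(SpectralShapeDescriber(driver, 1024)())  # → (1024, 264, 264)
```

Passing such a callable into the data logic would let the pnCCD-style spectral camera described above override the shape without touching the default X/Y path.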

@coretl (Collaborator, Author):

Hmm, that's interesting, is there an array_size_z PV for the third dimension?


@coretl (Collaborator, Author) commented Feb 3, 2026

> I had a chance to try this out today with a new detector I'm integrating. For the most part it was pretty straightforward. One thing I ran into is that it seems like I needed to prime the HDF plugin before an acquisition - the initial acquisition produced a chunk_shape of [1, 0, 0] which tiled refused to ingest. Looking through the code I don't think that should be the case, so I'll double check tomorrow - I'm writing the AD driver as I go so perhaps a bug there, didn't have a chance to check for sure.

Hmm, that shouldn't be the case, for all the detectors I've seen they populate array_size_x and array_size_y before they start acquiring, which is the source for the chunk shape. Do you know if your driver does?

> While looking at the shape/chunk_shape code though, I noticed that we also need to handle plugins as shape sources (and the CB plugin should be used for the ContAcqDetector), and I started to think about how to potentially handle non-color higher-dimensional detectors, like one that is on my plate to work on in the coming months.

I've added a commit that looks at array_size_z too. For the CB plugin, I can't see any code that modifies the size of the array as it goes through, so can't we keep using the driver array_size_ PVs?

That just leaves things like ROI plugins, which I hoped people would instantiate by creating an NDArrayDescription themselves like:

logics.append(
    adcore.ADHDFDataLogic(
        description=adcore.NDArrayDescription(
            shape_signals=(roi.size_y, roi.size_x),
            data_type_signal=driver.data_type,
            color_mode_signal=driver.color_mode,
        ),
        path_provider=path_provider,
        driver=driver,
        writer=hdf,
        datakey_suffix=f"-roi{i}",
    )
)

Would this handle all your use cases?

@coretl (Collaborator, Author) commented Feb 10, 2026

@jwlodek I've corrected the naming of the parameter in the StreamResource doc from file_template to template, does this fix your issue? If so, please could you approve if this is ready to merge?

@coretl coretl merged commit cc4f1d5 into main Feb 18, 2026
22 checks passed
@coretl coretl deleted the detector-rewrite branch February 18, 2026 16:35
@coretl coretl mentioned this pull request Feb 19, 2026