
Commit 8f7b565

Merge branch 'main' into 551_use_pvi_information

2 parents 4776910 + f50f67d


47 files changed: +2042 -1023 lines

docs/tutorials.md

Lines changed: 1 addition & 0 deletions

````diff
@@ -10,4 +10,5 @@ tutorials/installation
 tutorials/using-devices
 tutorials/implementing-devices
 tutorials/writing-tests-for-devices
+tutorials/implementing-detectors
 ```
````
docs/tutorials/implementing-detectors.md (new file)

Lines changed: 211 additions & 0 deletions
# Implementing File Writing Detectors

In [](./implementing-devices.md) we learned how to create Devices that talk to a control system to implement a particular behavior in bluesky plans. This behavior was based around the verbs from the [](#bluesky.protocols.Movable) and [](#bluesky.protocols.Readable) protocols, allowing us to use these Devices in a typical scan: moving them to a position, then acquiring data via the control system. We will now explore the [](#bluesky.protocols.WritesExternalAssets) protocol, and how it would be implemented for a File Writing Detector.

## Run the demo

We will return to the [](#ophyd_async.sim) devices we saw in [](./using-devices.md) for this tutorial, and dig a little deeper into what [Event Model Documents](inv:event-model#data_model) they produce. Let's run up our ipython shell again:

```
$ ipython -i -m ophyd_async.sim
Python 3.11.11 (main, Dec 4 2024, 20:38:25) [GCC 12.2.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.30.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:
```
## Run a grid scan and investigate the documents

Now let's run a grid scan on the point detector, and pass a callback to the RunEngine so it prints the documents that are emitted:

```{eval-rst}
.. ipython:: python
    :suppress:

    from ophyd_async.sim.__main__ import *
    # Make the moves faster so the docs build doesn't take too long
    RE(bps.mv(stage.x.velocity, 1000, stage.y.velocity, 1000))

.. ipython:: python

    RE(bp.grid_scan([pdet], stage.x, 1, 2, 2, stage.y, 2, 3, 2), print)
```

We see a series of documents being emitted:

- A [](#event_model.RunStart) document that tells us a scan is starting, what sort of scan it is, and the names of the motors that will be moved.
- An [](#event_model.EventDescriptor) document that tells us that the motor readbacks and detector channels will all be read together in a single stream. It is used to make the column headings, but it also contains more metadata about the Devices, like their configuration.
- For each point in the scan:
  - An [](#event_model.Event) document, containing the motor readbacks and detector channels with their timestamps. It is used to make each row of the table.
- A [](#event_model.RunStop) document that tells us the scan has stopped, and gives us its status.
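The `print` passed to the RunEngine above is just the simplest possible callback; any callable taking `(name, doc)` can subscribe to the document stream. As a rough sketch in plain Python (no RunEngine needed), here is a callback that tallies documents by type, fed a hand-written stream shaped like the 2x2 grid scan above — for that scan we expect one start, one descriptor, four events, and one stop:

```python
from collections import Counter

def make_tally():
    """Return a (callback, counts) pair; the callback has the same
    (name, doc) signature as the print callback used above."""
    counts = Counter()

    def callback(name, doc):
        counts[name] += 1

    return callback, counts

# Hand-written document names, not captured from a real scan.
callback, counts = make_tally()
stream = ["start", "descriptor"] + ["event"] * 4 + ["stop"]
for name in stream:
    callback(name, {})
print(dict(counts))  # {'start': 1, 'descriptor': 1, 'event': 4, 'stop': 1}
```

In a real session this would be used as `RE(plan, callback)` in place of `print`.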
Now let's try the same thing, but this time with the blob detector:

```{eval-rst}
.. ipython:: python
    :okwarning:

    RE(bp.grid_scan([bdet], stage.x, 1, 2, 2, stage.y, 2, 3, 2), print)
```

This time we see some different documents:

- The same [](#event_model.RunStart) document
- A similar [](#event_model.EventDescriptor) document, but with `'external': 'STREAM:'` on the detector column headings.
- A couple of [](#event_model.StreamResource) documents, one for each of those detector column headings, giving an HDF file name and a dataset within it where data will be written.
- For each point in the scan:
  - A couple of [](#event_model.StreamDatum) documents with a range of indices that have been written to an HDF dataset referenced in the StreamResource document.
  - An [](#event_model.Event) document, containing the motor readbacks and timestamps. It is used to make each row of the table.
- The same [](#event_model.RunStop) document
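A StreamDatum refers back to its StreamResource, and its `indices` select rows of the dataset that the resource names. As a minimal sketch of how a consumer could resolve a datum to a file, dataset, and row slice — the uids, paths, and field subset here are hypothetical illustrations, not the exact event-model schema:

```python
# Hand-written documents shaped like those described above.
stream_resource = {
    "uid": "sr-1",  # hypothetical uid
    "uri": "file://localhost/tmp/scan.h5",  # hypothetical path
    "data_key": "bdet-sum",
    "parameters": {"dataset": "/entry/sum"},
}
stream_datum = {
    "stream_resource": "sr-1",  # points back at the resource
    "indices": {"start": 0, "stop": 4},
}

def resolve(datum, resources):
    """Find which file/dataset rows a StreamDatum points at."""
    resource = resources[datum["stream_resource"]]
    i = datum["indices"]
    return resource["uri"], resource["parameters"]["dataset"], slice(i["start"], i["stop"])

uri, dataset, rows = resolve(stream_datum, {"sr-1": stream_resource})
print(uri, dataset, rows)  # file://localhost/tmp/scan.h5 /entry/sum slice(0, 4, None)
```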
And we can run the plan with both detectors to see a document stream that combines both of the previous examples:

```{eval-rst}
.. ipython:: python
    :okwarning:

    RE(bp.grid_scan([bdet, pdet], stage.x, 1, 2, 2, stage.y, 2, 3, 2), print)
```

## Simplify the plan to just use the detector

The above examples show what happens if you `trigger()` a detector at each point of a scan, in this case taking a single frame each time. Let's write our own simple plan that only triggers and reads from the detector, using the utilities [`bps` (`bluesky.plan_stubs`)](inv:bluesky#stub_plans) and [`bpp` (`bluesky.preprocessors`)](inv:bluesky#preprocessors):

```{eval-rst}
.. ipython:: python

    @bpp.stage_decorator([bdet])
    @bpp.run_decorator()
    def my_count_plan():
        for i in range(2):
            yield from bps.trigger_and_read([bdet])

.. ipython:: python
    :okwarning:

    RE(my_count_plan(), print)
```

Here we see the same sort of documents as above, but with only detector information in them.
Note that on each trigger, only a single image is taken, at the default exposure of `0.1s`. If we would like a different exposure time, we can specify one with a [](#TriggerInfo):

```{eval-rst}
.. ipython:: python

    from ophyd_async.core import TriggerInfo

    @bpp.stage_decorator([bdet])
    @bpp.run_decorator()
    def my_count_plan_with_prepare():
        yield from bps.prepare(bdet, TriggerInfo(livetime=0.001), wait=True)
        for i in range(2):
            yield from bps.trigger_and_read([bdet])

.. ipython:: python
    :okwarning:

    RE(my_count_plan_with_prepare(), print)
```

This also moves the work of setting up the detector from the first call of `trigger()` to the `prepare()` call. We can also move the creation of the descriptor earlier, so there is no extra work to do on the first call to `trigger()`:

```{eval-rst}
.. ipython:: python

    from ophyd_async.core import TriggerInfo

    @bpp.stage_decorator([bdet])
    @bpp.run_decorator()
    def my_count_plan_with_prepare():
        yield from bps.prepare(bdet, TriggerInfo(), wait=True)
        yield from bps.declare_stream(bdet, name="primary")
        for i in range(2):
            yield from bps.trigger_and_read([bdet])

.. ipython:: python
    :okwarning:

    RE(my_count_plan_with_prepare(), print)
```
## Run a fly scan and investigate the documents

The above demonstrates the detector portion of a step scan: letting the things you want to scan settle before taking data from the detector, and doing this at every point of the scan. Our file writing detector also supports fly scanning, taking data while you are scanning other things. To do this, it implements the [](#bluesky.protocols.Flyable) protocol, which allows us to `kickoff()` a series of images, then wait until it is `complete()`:

```{eval-rst}
.. ipython:: python

    @bpp.stage_decorator([bdet])
    @bpp.run_decorator()
    def fly_plan():
        yield from bps.prepare(bdet, TriggerInfo(number_of_triggers=7), wait=True)
        yield from bps.declare_stream(bdet, name="primary")
        yield from bps.kickoff(bdet, wait=True)
        yield from bps.collect_while_completing(flyers=[bdet], dets=[bdet], flush_period=0.5)

.. ipython:: python
    :okwarning:

    RE(fly_plan(), print)
```

As before, we see the start, descriptor, and pair of stream resources, but this time we don't see any event documents. Also, even though we asked for 7 frames from each of the 2 streams, we only got 2 stream datum documents for each stream.

What is happening is that instead of triggering, waiting, and publishing a single frame, we are setting up the detector to take 7 frames without stopping, then at the `flush_period` of 0.5s emitting a stream datum with the frames that have been captured so far. If we inspect the stream datum documents for each stream we see that:

- The first has `'indices': {'start': 0, 'stop': 4}`
- The second has `'indices': {'start': 4, 'stop': 7}`

This behavior allows us to scale up the framerate of the detector without scaling up the number of documents emitted: whether the detector runs at 10Hz or 10MHz it still emits only one stream datum per stream per flush period, just with different numbers in the `indices` field.
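That batching can be sketched in a few lines: each flush emits at most one datum per stream, covering all indices written since the previous flush. The frame counts below are illustrative, not taken from the simulation:

```python
def batch_into_datums(frames_at_each_flush):
    """Given the cumulative number of frames written at each flush,
    return the index ranges of the stream datum documents emitted."""
    datums = []
    start = 0
    for written in frames_at_each_flush:
        if written > start:  # only emit a datum if new frames arrived
            datums.append({"start": start, "stop": written})
            start = written
    return datums

# 7 frames total: 4 written by the first flush, all 7 by the second.
print(batch_into_datums([4, 7]))  # [{'start': 0, 'stop': 4}, {'start': 4, 'stop': 7}]
# A much faster detector still emits one datum per flush, just bigger ranges.
print(batch_into_datums([40000, 70000]))  # [{'start': 0, 'stop': 40000}, {'start': 40000, 'stop': 70000}]
```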
## Look at the Device implementations

Now we'll have a look at the code to see how we implement one of these detectors:

### `SimBlobDetector`

```{literalinclude} ../../src/ophyd_async/sim/_blob_detector.py
:language: python
```

It derives from [](#StandardDetector), a utility base class that implements the protocols we have mentioned so far in this tutorial. It uses a pair of logic classes to provide the behavior for each protocol verb:

- [](#DetectorController) to set up the exposure and trigger mode of the detector, arm it, and wait for it to complete
- [](#DetectorWriter) to tell the detector to open a file, describe the datasets it will write, and wait for it to have written a given number of frames

In this case, we have a Controller and Writer class written just for this simulation, both taking a reference to the pattern generator that provides methods for both detector control and file writing. In other cases the detector control and file writing may be handled by different sub-devices that talk to different parts of the control system. The job of the top level detector class is to take the arguments that the controller and writer need, distribute them, and pass the instances to the superclass init.
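That argument-distribution job can be sketched with simplified stand-ins — these toy classes model the shape of the composition only, not the real `StandardDetector` API:

```python
class Controller:
    """Stand-in for a DetectorController taking the shared pattern generator."""
    def __init__(self, pattern_generator):
        self.pattern_generator = pattern_generator

class Writer:
    """Stand-in for a DetectorWriter taking the generator plus a file path."""
    def __init__(self, pattern_generator, path):
        self.pattern_generator = pattern_generator
        self.path = path

class Detector:
    """Stand-in for StandardDetector: holds a controller/writer pair."""
    def __init__(self, controller, writer, name=""):
        self.controller = controller
        self.writer = writer
        self.name = name

class MyBlobDetector(Detector):
    """Top level class: distribute the arguments the controller and writer
    need, then pass the instances to the superclass init."""
    def __init__(self, pattern_generator, path, name=""):
        super().__init__(
            controller=Controller(pattern_generator),
            writer=Writer(pattern_generator, path),
            name=name,
        )

det = MyBlobDetector(pattern_generator=object(), path="/tmp/data.h5", name="bdet")
# Both logic classes share the same pattern generator instance.
print(det.controller.pattern_generator is det.writer.pattern_generator)  # True
```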
Now let's look at the underlying classes that define the detector behavior:

### `BlobDetectorController`

First we have `BlobDetectorController`, a [](#DetectorController) subclass:

```{literalinclude} ../../src/ophyd_async/sim/_blob_detector_controller.py
:language: python
```

Its job is to control the acquisition process on the detector, starting and stopping the collection of data:

- `prepare()` takes a [](#TriggerInfo) which details everything the detector needs to know about the upcoming acquisition. In this case we just store it for later use.
- `arm()` starts the acquisition process that has been prepared. In this case we create a background task that will write our simulation images to file.
- `wait_for_idle()` waits for that acquisition process to complete.
- `disarm()` interrupts the acquisition process, then waits for it to complete.
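Those four verbs can be modelled with a toy asyncio class — the names mirror the list above, but this is an illustrative sketch, not the real `DetectorController` interface, which also handles deadtime and hardware I/O:

```python
import asyncio

class SketchController:
    """Toy controller: prepare stores the trigger info, arm starts a
    background acquisition task, wait_for_idle/disarm manage it."""
    def __init__(self):
        self._trigger_info = None
        self._task = None
        self.frames_written = 0

    async def prepare(self, trigger_info):
        self._trigger_info = trigger_info  # just store it for later use

    async def _acquire(self):
        for _ in range(self._trigger_info["number_of_triggers"]):
            await asyncio.sleep(0)  # stand-in for writing one frame
            self.frames_written += 1

    async def arm(self):
        self._task = asyncio.create_task(self._acquire())

    async def wait_for_idle(self):
        await self._task

    async def disarm(self):
        # Interrupt the acquisition, then wait for it to finish.
        self._task.cancel()
        try:
            await self._task
        except asyncio.CancelledError:
            pass

async def main():
    c = SketchController()
    await c.prepare({"number_of_triggers": 3})
    await c.arm()
    await c.wait_for_idle()
    return c.frames_written

print(asyncio.run(main()))  # 3
```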
### `BlobDetectorWriter`

Then we have `BlobDetectorWriter`, a [](#DetectorWriter) subclass:

```{literalinclude} ../../src/ophyd_async/sim/_blob_detector_writer.py
:language: python
```

Its job is to control the file writing process on the detector, which may or may not be coupled to the acquisition process:

- `open()` tells the detector to open a file, and returns information about the datasets that it will write
- `hints` gives a list of the datasets that are interesting to plot
- `get_indices_written()` returns the last index written
- `observe_indices_written()` repeatedly yields the last index written, then waits for the next one to be written
- `collect_stream_docs()` uses the [](#HDFDocumentComposer) to publish a document per dataset that says which frames have been written since the last call
- `close()` tells the detector to close the file
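The `observe_indices_written()` pattern, yielding each new index as it lands, can be modelled with an async generator over an in-memory "file" — a toy sketch, not the real `DetectorWriter` interface:

```python
import asyncio

class SketchWriter:
    """Toy writer: frames land in a list standing in for an HDF dataset;
    observe_indices_written yields the count each time it grows."""
    def __init__(self):
        self.frames = []

    async def get_indices_written(self):
        return len(self.frames)

    async def observe_indices_written(self):
        last = None
        while True:
            current = len(self.frames)
            if current != last:
                yield current  # a new index has been written
                last = current
            await asyncio.sleep(0)  # let the acquisition task run

async def main():
    w = SketchWriter()

    async def write_frames():
        for i in range(3):
            await asyncio.sleep(0)
            w.frames.append(i)

    writer_task = asyncio.create_task(write_frames())
    seen = []
    async for index in w.observe_indices_written():
        seen.append(index)
        if index >= 3:
            break
    await writer_task
    return seen

print(asyncio.run(main()))  # last element is 3
```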
## Conclusion

We have seen how to make a [](#StandardDetector) and how the [](#DetectorController) and [](#DetectorWriter) allow us to customise its behavior.

```{seealso}
[](../how-to/implement-ad-detector.md) for writing an implementation of `StandardDetector` for an EPICS areaDetector
```

pyproject.toml

Lines changed: 4 additions & 3 deletions

```diff
@@ -187,7 +187,8 @@ type = "forbidden"
 forbidden_modules = ["ophyd_async.testing", "ophyd_async.sim"]
 source_modules = [
     "ophyd_async.plan_stubs",
-    "ophyd_async.fastcs",
-    "ophyd_async.epics",
-    "ophyd_async.tango",
+    "ophyd_async.fastcs.*",
+    "ophyd_async.epics.*",
+    "ophyd_async.tango.*",
 ]
+ignore_imports = ["ophyd_async.tango.testing.* -> ophyd_async.testing"]
```

src/ophyd_async/core/__init__.py

Lines changed: 4 additions & 0 deletions

```diff
@@ -35,6 +35,7 @@
 )
 from ._settings import Settings, SettingsProvider
 from ._signal import (
+    Ignore,
     Signal,
     SignalConnector,
     SignalR,
@@ -48,6 +49,7 @@
     soft_signal_r_and_setter,
     soft_signal_rw,
     wait_for_value,
+    walk_config_signals,
     walk_rw_signals,
 )
 from ._signal_backend import (
@@ -130,6 +132,7 @@
     "set_and_wait_for_value",
     "set_and_wait_for_other_value",
     "walk_rw_signals",
+    "walk_config_signals",
     # Readable
     "StandardReadable",
     "StandardReadableFormat",
@@ -179,4 +182,5 @@
     # Back compat - delete before 1.0
     "ConfigSignal",
     "HintedSignal",
+    "Ignore",
 ]
```

src/ophyd_async/core/_detector.py

Lines changed: 28 additions & 26 deletions

```diff
@@ -51,7 +51,7 @@ class DetectorTrigger(Enum):
 class TriggerInfo(BaseModel):
     """Minimal set of information required to setup triggering on a detector."""

-    number_of_triggers: NonNegativeInt | list[NonNegativeInt]
+    number_of_triggers: NonNegativeInt | list[NonNegativeInt] = Field(default=1)
     """Number of triggers that will be sent, (0 means infinite).

     Can be:
@@ -135,16 +135,23 @@ async def open(self, multiplier: int = 1) -> dict[str, DataKey]:
         :return: Output for ``describe()``
         """

-    @abstractmethod
-    def observe_indices_written(
-        self, timeout=DEFAULT_TIMEOUT
-    ) -> AsyncGenerator[int, None]:
-        """Yield the index of each frame (or equivalent data point) as it is written."""
+    @property
+    def hints(self) -> Hints:
+        """The hints to be used for the detector."""
+        return {}

     @abstractmethod
     async def get_indices_written(self) -> int:
         """Get the number of indices written."""

+    # Note: this method is really async, but if we make it async here then we
+    # need to give it a body with a redundant yield statement, which is a bit
+    # awkward. So we just leave it as a regular method and let the user
+    # implement it as async.
+    @abstractmethod
+    def observe_indices_written(self, timeout: float) -> AsyncGenerator[int, None]:
+        """Yield the index of each frame (or equivalent data point) as it is written."""
+
     @abstractmethod
     def collect_stream_docs(self, indices_written: int) -> AsyncIterator[StreamAsset]:
         """Create Stream docs up to given number written."""
@@ -153,11 +160,6 @@ def collect_stream_docs(self, indices_written: int) -> AsyncIterator[StreamAsset
     async def close(self) -> None:
         """Close writer, blocks until I/O is complete."""

-    @property
-    def hints(self) -> Hints:
-        """The hints to be used for the detector."""
-        return {}
-

 # Add type var for controller so we can define
 # StandardDetector[KinetixController, ADWriter] for example
@@ -272,8 +274,8 @@ async def trigger(self) -> None:
             )
         )

-        self._trigger_info = _ensure_trigger_info_exists(self._trigger_info)
-        if self._trigger_info.trigger is not DetectorTrigger.INTERNAL:
+        trigger_info = _ensure_trigger_info_exists(self._trigger_info)
+        if trigger_info.trigger is not DetectorTrigger.INTERNAL:
             msg = "The trigger method can only be called with INTERNAL triggering"
             raise ValueError(msg)

@@ -284,9 +286,7 @@ async def trigger(self) -> None:
         end_observation = indices_written + 1

         async for index in self._writer.observe_indices_written(
-            DEFAULT_TIMEOUT
-            + (self._trigger_info.livetime or 0)
-            + self._trigger_info.deadtime
+            DEFAULT_TIMEOUT + (trigger_info.livetime or 0) + trigger_info.deadtime
         ):
             if index >= end_observation:
                 break
@@ -312,24 +312,26 @@ async def prepare(self, value: TriggerInfo) -> None:
             raise ValueError(msg)
         elif not value.deadtime:
             value.deadtime = self._controller.get_deadtime(value.livetime)
-        self._trigger_info = value
         self._number_of_triggers_iter = iter(
-            self._trigger_info.number_of_triggers
-            if isinstance(self._trigger_info.number_of_triggers, list)
-            else [self._trigger_info.number_of_triggers]
+            value.number_of_triggers
+            if isinstance(value.number_of_triggers, list)
+            else [value.number_of_triggers]
         )
         self._describe, _ = await asyncio.gather(
             self._writer.open(value.multiplier), self._controller.prepare(value)
         )
         self._initial_frame = await self._writer.get_indices_written()
         if value.trigger != DetectorTrigger.INTERNAL:
             await self._controller.arm()
-            self._fly_start = time.monotonic()
+        self._trigger_info = value

     @AsyncStatus.wrap
     async def kickoff(self):
         if self._trigger_info is None or self._number_of_triggers_iter is None:
             raise RuntimeError("Prepare must be called before kickoff!")
+        if self._trigger_info.trigger == DetectorTrigger.INTERNAL:
+            await self._controller.arm()
+        self._fly_start = time.monotonic()
         try:
             self._frames_to_complete = next(self._number_of_triggers_iter)
             self._completable_frames += self._frames_to_complete
@@ -341,13 +343,13 @@ async def kickoff(self):

     @WatchableAsyncStatus.wrap
     async def complete(self):
-        self._trigger_info = _ensure_trigger_info_exists(self._trigger_info)
+        trigger_info = _ensure_trigger_info_exists(self._trigger_info)
         indices_written = self._writer.observe_indices_written(
-            self._trigger_info.frame_timeout
+            trigger_info.frame_timeout
             or (
                 DEFAULT_TIMEOUT
-                + (self._trigger_info.livetime or 0)
-                + (self._trigger_info.deadtime or 0)
+                + (trigger_info.livetime or 0)
+                + (trigger_info.deadtime or 0)
             )
         )
         try:
@@ -367,7 +369,7 @@ async def complete(self):
                 break
         finally:
             await indices_written.aclose()
-        if self._completable_frames >= self._trigger_info.total_number_of_triggers:
+        if self._completable_frames >= trigger_info.total_number_of_triggers:
             self._completable_frames = 0
             self._frames_to_complete = 0
             self._number_of_triggers_iter = None
```