libertem_dectris: hard upper frame stack size limit in number of frames; optional latency limit#85

Merged
sk1p merged 4 commits into LiberTEM:main from sk1p:dectris-size-limit on Jan 19, 2026

Conversation


@sk1p sk1p commented Jun 25, 2025

When requesting a frame stack, the internal logic did not enforce a hard limit on the number of frames per frame stack; this limit was handled separately in higher-level logic.

As a result, with very small frames (for example almost all zeros, with the detector running and the beam blanked), the background thread could have very high latency before it starts to yield data.

This PR changes the behavior so that the limits are enforced in the background thread:

  • frame_stack_size is now also a hard upper limit on the number of frames per stack, in addition to being used to estimate the slot size
  • Add a new parameter, max_latency_per_stack, as a built-in upper latency limit per frame stack. This is needed, for example, for good interactivity with "long" frame times (1 ms and above) without having to set a large frame_stack_size, so one doesn't need to reallocate shared memory when changing frame_stack_size (see the sketch below).
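
To make the interplay of the two limits concrete, here is a minimal Python sketch of the intended stacking behavior. This is not the actual Rust implementation in the background thread, and the function name and signature are hypothetical: a stack is emitted either when it reaches `frame_stack_size` frames or when `max_latency_per_stack` seconds have elapsed since its first frame arrived.

```python
import time
from typing import Iterator, List, Optional


def stack_frames(
    frames: Iterator[bytes],
    frame_stack_size: int,
    max_latency_per_stack: Optional[float] = None,
) -> Iterator[List[bytes]]:
    """Group incoming frames into stacks (illustrative sketch only).

    A stack is yielded once it holds `frame_stack_size` frames (hard upper
    limit), or - if `max_latency_per_stack` is given, in seconds - once that
    much time has passed since the first frame of the stack arrived, even if
    the stack is not yet full.
    """
    stack: List[bytes] = []
    started_at: Optional[float] = None

    for frame in frames:
        if started_at is None:
            # start the latency clock when the first frame of a stack arrives
            started_at = time.monotonic()
        stack.append(frame)

        full = len(stack) >= frame_stack_size
        timed_out = (
            max_latency_per_stack is not None
            and time.monotonic() - started_at >= max_latency_per_stack
        )
        if full or timed_out:
            yield stack
            stack = []
            started_at = None

    if stack:
        # flush the remainder at the end of the acquisition
        yield stack
```

With a scheme like this, a large frame_stack_size can stay allocated in shared memory while max_latency_per_stack still bounds how long a consumer waits for the next, possibly only partially filled, stack.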

@sk1p sk1p force-pushed the dectris-size-limit branch from a65e3b9 to 574efe6 on January 13, 2026 17:26
@sk1p sk1p force-pushed the dectris-size-limit branch from 084fb57 to 21dbd06 on January 15, 2026 16:16
@sk1p sk1p changed the title libertem_dectris: hard upper frame stack size limit in number of frames libertem_dectris: hard upper frame stack size limit in number of frames; optional latency limit Jan 19, 2026
@sk1p sk1p merged commit c3ffec2 into LiberTEM:main Jan 19, 2026
40 checks passed
@sk1p sk1p deleted the dectris-size-limit branch January 19, 2026 11:20