Description
Thinking a bit more about @greglucas's argument that range-restricted auto-limiting may be easier than I thought led me to this idea.
As currently prototyped, the query
semantics are "here is some information, you MAY use it to restrict the data returned". If we were going to start doing range-restricted autoscaling, then maybe we should change this to "here is some information, you MUST use it to restrict".
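For concreteness, here is a minimal sketch of the two contracts from the caller's point of view; the class names and the xlim hint are made up for illustration and are not the actual prototype API.

```python
import numpy as np


class MayContainer:
    """MAY semantics: the range hint is advisory and can be ignored."""

    def __init__(self, x, y):
        self._x, self._y = np.asarray(x), np.asarray(y)

    def query(self, xlim=None):
        # Returning everything is a valid implementation; the caller must
        # cope with data outside the requested range.
        return {"x": self._x, "y": self._y}


class MustContainer(MayContainer):
    """MUST semantics: returned data is guaranteed to lie inside the range."""

    def query(self, xlim=None):
        data = super().query()
        if xlim is None:
            return data
        mask = (data["x"] >= xlim[0]) & (data["x"] <= xlim[1])
        return {k: v[mask] for k, v in data.items()}


# With MUST semantics, range-restricted autoscaling of y is straightforward:
data = MustContainer([0, 1, 0, 1, 0, 1], [1, 2, 3, 4, 5, 6]).query(xlim=(-0.5, 0.5))
ylim = data["y"].min(), data["y"].max()   # (1, 5)
```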
However, while this does make the API more explicit and means we can do a bit more with it on the calling side, it does impose a fair amount of extra complexity on the implementation side. If we consider the data
x = [0, 1, 0, 1, 0, 1]
y = [1, 2, 3, 4, 5, 6]
and the x view limit [-.5, .5], then naive filtering could turn the zigzag into a vertical line (see the sketch below). If we are drawing a scatter that is fine, but if the data is meant to be discretely sampled continuous data then that is quite wrong! So maybe you could say "well, in that case the query should add points at the edge of the view limit", which is fine, but then we have to be able to tell (if, say, we are showing both the line and the markers) the difference between the "real" points that should have markers and the "synthetic" points that are only there to preserve continuity. We could (and should) push the notion of continuity down into the containers so they can make these decisions.
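A minimal sketch of the failure mode, with plain NumPy masking standing in for whatever the container would actually do:

```python
import numpy as np

x = np.array([0, 1, 0, 1, 0, 1])
y = np.array([1, 2, 3, 4, 5, 6])
xlim = (-0.5, 0.5)

# Naive MUST-style filtering: keep only the samples inside the view limits.
mask = (x >= xlim[0]) & (x <= xlim[1])
print(x[mask])  # [0 0 0]
print(y[mask])  # [1 3 5]
# Drawn as a line, the zigzag has collapsed into a vertical segment at x=0;
# fine for a scatter, quite wrong for sampled continuous data.
```

Interpolating synthetic vertices where the segments cross x = 0.5 would preserve the shape inside the view, but those vertices then need to be distinguishable from the real samples so they do not pick up markers, which is exactly the extra bookkeeping described above.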
However, going to MUST is a huge step up in obligatory complexity, so I think I still prefer MAY as a pragmatic choice, and instead think about adding another get_limits(...)
with a similar API to query but that only returns ranges.
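Something roughly like this (the signature is only a guess at what would stay symmetric with query; parameter names are illustrative):

```python
# Hypothetical sketch only, not an actual implementation.
def get_limits(self, xlim=None, ylim=None):
    """Return {coordinate: (min, max)} for the data this container holds.

    Takes the same advisory hints as query(...), but never returns the
    data itself, only the ranges, so autoscaling does not need the full
    restricted data set.
    """
    ...
```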