Description
In Spark, Delta tables have an option to manage out-of-range versions or timestamps when reading the change data feed: https://docs.delta.io/latest/delta-change-data-feed.html#read-changes-in-streaming-queries

Right now the behaviour of `load_cdf` is inconsistent: if you provide an out-of-range version, you get an error:
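A minimal reproduction with the Python `deltalake` package might look like the sketch below (the table path and version offset are made up for illustration):

```python
from deltalake import DeltaTable

dt = DeltaTable("path/to/table")  # hypothetical table path
latest = dt.version()

# Asking for changes starting at a version past the latest commit
# raises an error instead of returning an empty result.
dt.load_cdf(starting_version=latest + 10)
```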

But with an out-of-range timestamp, you get an empty dataset:
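Under the same assumptions as the sketch above, a starting timestamp past the last commit does not raise; it silently yields no rows:

```python
from deltalake import DeltaTable

dt = DeltaTable("path/to/table")  # hypothetical table path

# A starting timestamp far in the future does not error out;
# it returns an empty reader instead.
reader = dt.load_cdf(starting_timestamp="2999-01-01T00:00:00Z")
print(reader.read_all().num_rows)  # prints 0
```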

For incremental pipelines it would be useful to have a way to control this behaviour and make the two cases consistent.
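One possible shape for this, sketched with a hypothetical `allow_out_of_range` keyword argument that is not part of the current API:

```python
from deltalake import DeltaTable

dt = DeltaTable("path/to/table")  # hypothetical table path

# Hypothetical flag: make the version and the timestamp cases behave the
# same way, e.g. return an empty reader instead of raising when the
# requested start is past the latest commit.
reader = dt.load_cdf(
    starting_version=dt.version() + 10,
    allow_out_of_range=True,  # hypothetical parameter, for illustration only
)
```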