Description
Since we'd like to make the handling of synthesize.generic() / soundfield.p_array() more consistent, it is worth thinking about the cases that might appear:
in the frequency domain:
- use many spatial points (i.e. a grid) but only a single frequency (currently realized in synthesize.generic())
- use a single spatial point but multiple frequencies (to obtain the acoustic transfer function)
- use both multiple spatial points and multiple frequencies (the most generic case)
similarly, in the time domain:
- use many spatial points (i.e. a grid) but only a single time instant (currently realized in soundfield.p_array())
- use a single spatial point but multiple time instants (to obtain the acoustic impulse response)
- use both multiple spatial points and multiple time instants (the most generic case)
Obviously, the existing functions could already deliver all of the requested cases. The question is whether we could optimize/enhance performance for special cases (keeping the spatial/temporal interpolation issue in mind), such as rendering binaural impulse responses at a single listening point, or whether we can even find the one and only master method that handles all cases.
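To illustrate how a single "master" method could cover the frequency-domain cases, here is a minimal sketch based on NumPy broadcasting. It is purely hypothetical (the function name, signature, and the point-source Green's function model are assumptions, not the toolbox API): receiver points of shape (N, 3) are broadcast against frequencies of shape (M,), so the grid-at-one-frequency, one-point-at-many-frequencies, and fully generic cases all fall out of the same code path.

```python
import numpy as np

def point_source_pressure(x, x0, f, c=343.0):
    """Hypothetical 'master' frequency-domain method (sketch).

    Free-field pressure of a point source at x0, evaluated for
    multiple receiver points *and* multiple frequencies at once.

    x  : receiver position(s), shape (N, 3) or (3,)
    x0 : source position, shape (3,)
    f  : frequency/frequencies in Hz, scalar or shape (M,)
    Returns complex pressure of shape (N, M).
    """
    x = np.atleast_2d(np.asarray(x, dtype=float))   # (N, 3)
    f = np.atleast_1d(np.asarray(f, dtype=float))   # (M,)
    r = np.linalg.norm(x - x0, axis=-1)             # (N,) distances
    k = 2 * np.pi * f / c                           # (M,) wavenumbers
    # broadcast (N,) against (M,) -> (N, M) Green's function values
    return np.exp(-1j * np.outer(r, k)) / (4 * np.pi * r[:, None])
```

With this shape convention, a grid with one frequency returns (N, 1), a single point with many frequencies returns the (1, M) transfer function, and the generic case returns (N, M); the time-domain cases could follow the same pattern with a time axis instead of a frequency axis. Whether one such method can also accommodate the per-case optimizations (e.g. interpolation for binaural rendering) is exactly the open question above.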