Replies: 2 comments
-
Hi Jos, each TG of the miniDexed has a PAN control that places the mono signal in the stereo image. Unfortunately, I can't say how and where this happens in the code.
-
All the audio processing is triggered from ProcessSound in minidexed.cpp. Note there are two versions - one for single core operation and one for multicore. There is a diagram of the basic architecture (as far as I could work it out) here: https://github.com/probonopd/MiniDexed/wiki/Development#minidexed-scheduling-and-architecture The key bit that does the mono TG to stereo output is the code:
which happens after the samples for each TG have been grabbed from all cores. These functions are in effect_mixer.hpp, and I think that is where the panning is handled and the mono TG signal is turned into a stereo output. The mono signal is stored in m_OutputLevel, which holds a chunk of audio per TG and is the input to doAddMix, which adds it to an internal (presumably stereo) buffer within the mixer. The mix is then grabbed back from the mixer into a new stereo L/R buffer a bit later:
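As a rough illustration of that pan-and-accumulate step, here is a minimal sketch. The function name `panMonoIntoStereo` and the constant-power pan law are my assumptions for illustration, not the actual effect_mixer.hpp code:

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sketch of a doAddMix-style step: pan one TG's mono chunk
// into shared stereo accumulation buffers. Not the real MiniDexed code.
void panMonoIntoStereo(const float* mono, float* accumL, float* accumR,
                       std::size_t len, float pan /* 0.0 = hard left, 1.0 = hard right */)
{
    const float halfPi = 1.57079632679f;
    // Constant-power pan law: equal perceived loudness at centre.
    const float gainL = std::cos(pan * halfPi);
    const float gainR = std::sin(pan * halfPi);
    for (std::size_t i = 0; i < len; ++i) {
        // Accumulate rather than overwrite, so every TG adds into the
        // same internal stereo buffer of the mixer.
        accumL[i] += mono[i] * gainL;
        accumR[i] += mono[i] * gainR;
    }
}
```

Calling this once per TG on the same pair of buffers gives the summed stereo mix that is later read back out.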
SampleBuffer is also the same chunk_size of audio, but now with two channels, for L and R (so it is twice as big in reality). Then other effects are processed where necessary, there is a bit of manipulation (e.g. master volume) and the option of having the L/R channels swapped, then the stereo sound buffer is converted from a pair of float buffers into an interleaved float buffer (i.e. alternating L/R floats next to each other):
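The interleaving step is simple to sketch. This is a generic illustration (the function name is mine, not from the source):

```cpp
#include <cstddef>

// Hypothetical sketch: merge separate L and R float buffers into one
// buffer of alternating L/R samples, as described above.
void interleaveLR(const float* left, const float* right,
                  float* interleaved, std::size_t chunkSize)
{
    for (std::size_t i = 0; i < chunkSize; ++i) {
        interleaved[2 * i]     = left[i];   // even index: left channel
        interleaved[2 * i + 1] = right[i];  // odd index: right channel
    }
}
```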
Before finally being turned into an array of fixed point integers:
It is now in the temporary buffer tmp_int (another stereo buffer of "chunk size") which finally gets written out using:
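The float-to-fixed-point conversion could look something like the following sketch. The 16-bit width and the clamping are assumptions for illustration; the actual sample format depends on the sound device driver:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: convert interleaved floats in [-1.0, 1.0] to
// signed 16-bit fixed-point samples, clamping anything out of range.
void floatToFixed(const float* in, int16_t* out, std::size_t numSamples)
{
    for (std::size_t i = 0; i < numSamples; ++i) {
        const float s = std::clamp(in[i], -1.0f, 1.0f);
        out[i] = static_cast<int16_t>(s * 32767.0f);
    }
}
```

The resulting buffer of alternating L/R integers is what finally gets written out to the sound device.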
This takes a single buffer of alternating L/R integers for each sample of sound, iirc. That is my understanding anyway. I had to skip most of the above for the quad DAC version and instead convert each mono TG channel into one of the L/R channels of one of the four DACs. I'm sure some of this could be optimised further, but it seems to work ok, even on a Pi V1! Kevin
-
I was looking at the code and as sort of an exercise tried to replace the AudioEffectPlateReverb by an AudioEffectFreeverb, which I managed to get to run. There is one thing that puzzles me though.
The method that does the actual processing apparently gets a left and right input signal:
void doReverb(const float32_t* inblockL, const float32_t* inblockR, float32_t* rvbblockL, float32_t* rvbblockR, uint16_t len);
I've tried to trace these left and right signals back to their origin, because I was surprised that there were two; I assumed a TG would have a single mono output only. Alas, I got lost in the code and could not find where they come from. So I used the mean of inblockL and inblockR as the input value for the Freeverb computations, and got that to work. Can anyone explain what the output topology of a TG is, or what the reasoning behind the two input channels for FX processing is? (I guess one could route two TGs to one FX unit.)
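For reference, the downmix I used is just the per-sample mean of the two reverb-send channels. A minimal sketch (the function name is mine; only the inblockL/inblockR parameter names come from the doReverb signature above):

```cpp
#include <cstddef>

// Sketch of the workaround described above: average the left and right
// send buffers into one mono buffer to feed a mono reverb input.
void downmixToMono(const float* inblockL, const float* inblockR,
                   float* mono, std::size_t len)
{
    for (std::size_t i = 0; i < len; ++i) {
        mono[i] = 0.5f * (inblockL[i] + inblockR[i]);
    }
}
```

Averaging keeps the level roughly comparable to either input channel, whereas a plain sum could double the signal going into the reverb.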