We talked about latency for the case where the audio source is connected to an analog input of your recording interface.
We compared
- ADM (ASIO Direct Monitoring): the signal flow stays local on your audio interface, vs.
- non-ADM with full RTL (converter latency IN/OUT, 2x transport over USB/FW/TB) and processing time in the DAW
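As a rough illustration, the non-ADM round trip can be estimated like this. A minimal sketch: the converter latencies used here are placeholder values, not figures for any specific interface, and real drivers usually add extra safety buffers on top, so measured RTL is typically higher.

```python
# Rough round-trip latency (RTL) estimate for the non-ADM path:
# A/D conversion -> transport to DAW (one buffer) -> processing ->
# transport back (one buffer) -> D/A conversion.

def rtl_ms(buffer_samples, sample_rate_hz, adc_ms=0.3, dac_ms=0.3):
    """adc_ms and dac_ms are assumed placeholder converter latencies."""
    buffer_ms = buffer_samples / sample_rate_hz * 1000.0
    return adc_ms + 2 * buffer_ms + dac_ms  # one buffer each direction

print(round(rtl_ms(256, 48000), 2))  # 256 samples @ 48 kHz -> 11.27
```

Halving the buffer size roughly halves the transport part of the RTL, which is why small ASIO buffers matter so much for monitoring through the DAW.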
VSTi is a different case because the signal flow differs and latency has slightly different components.
Signal flow for a VSTi:
If you are using a MIDI keyboard, there are two more steps:
1. you hit a key on your MIDI master keyboard
2. the digital information is transferred via MIDI to your DAW
If you already have a MIDI track with MIDI notes, then it starts here:
3. the computer processes the virtual instrument, which creates digital audio according to the MIDI note information
4. the latency depends on
- the sample rate being used: double and quad speed mean more data to process in the same time
- how complex the processing is
- how fast the single-thread performance of your computer is
- how loaded the CPU core is on which the sound for your VSTi is being created/processed
- whether the drivers on your computer are well written or block the CPU (-> DPC latencies)
5. If you would like to monitor this created/processed sound, it needs to be transported from the DAW/application through the audio driver, in this case the RME ASIO driver. The latency depends heavily on the selected buffer size.
On Windows machines, this is the ASIO buffer size. But Apple (Core Audio) also has buffer sizes that need to be adjusted according to the sample rate and the complexity of a project.
6. On the recording interface you have the converter latency for D/A.
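The output-side latency in steps 5 and 6 above can be put into numbers. A minimal sketch: the D/A figure below is an assumed placeholder, not a measured value.

```python
def buffer_ms(buffer_samples, sample_rate_hz):
    """Latency contribution of one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

DAC_MS = 0.3  # assumed placeholder for D/A converter latency

# One output buffer plus D/A conversion, for common buffer sizes:
for buf in (64, 128, 256, 512):
    total = buffer_ms(buf, 48000) + DAC_MS
    print(f"{buf:4d} samples @ 48 kHz -> {total:.2f} ms to the analog output")
```

This is only the playback side; VSTi rendering time and any MIDI transfer time come on top of it.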
So the notable differences in the VSTi case are
- a different signal flow
- you have to pass at least once through USB/FW/TB, which adds noticeable latency depending on the ASIO buffer size (Windows) or the corresponding buffer sizes on Apple systems
- when using a MIDI keyboard you have additional latency for transferring the MIDI notes to the DAW
- VSTi processing time depends on how much CPU performance the VSTi requires (complexity, quality of the sound emulation) and on the sample rate being used: a higher sample rate means the CPU has to process more data in the same time.
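The sample-rate point in the last bullet can be made concrete: at a fixed buffer size, doubling the sample rate halves the buffer latency, but it also halves the time the VSTi has to render the same number of samples. A minimal sketch:

```python
def time_per_buffer_ms(buffer_samples, sample_rate_hz):
    """Time available to render one buffer before its playback deadline."""
    return buffer_samples / sample_rate_hz * 1000.0

# Same 256-sample buffer at single vs. double speed:
single = time_per_buffer_ms(256, 48000)
double = time_per_buffer_ms(256, 96000)
print(f"48 kHz: {single:.2f} ms, 96 kHz: {double:.2f} ms")
# At 96 kHz the deadline is half as long, so the CPU must render the
# same number of samples in half the time -> effectively double the
# data rate the VSTi has to sustain.
```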
Back to ADM
It is for the use case where audio source and destination are both directly attached to the recording interface.
And it allows you to use the speaker symbol of a DAW track without routing the audio through the DAW.
You can comfortably use the speaker symbol in the DAW to activate monitoring for a track.
But the actual routing of the audio happens on the interface, avoiding the time-consuming path over USB/FW/TB twice as well as the processing time inside the application. Not to forget: the routing does not need to be changed in TotalMix FX.
BR Ramses - UFX III, 12Mic, XTC, ADI-2 Pro FS R BE, RayDAT, X10SRi-F, E5-1680v4, Win10Pro22H2, Cub14