Topic: What is the difference between hardware input and software playback?

Thanks, everyone. I know my questions are basic.


There are many tutorials on the Internet on how to calibrate SPL in a room.
The last step is to turn up the volume knob of the audio interface until you reach the correct SPL (72-76 dB).
("Increase the volume of your monitors until your SPL meter reads 76dB SPL")

But I don't know whether I should raise the software playback channel or the hardware input channel; they sound a little different.

2 (edited by ramses 2022-01-22 13:58:16)

Re: What is the difference between hardware input and software playback?

HW inputs are your interface's inputs.
HW outputs are your interface's outputs.

When performing audio playback, you see the audio in an additional layer in front of the HW outputs: the SW playbacks in the TotalMix FX (TM FX) middle row.

This is where the flexibility of TM FX's concept comes in.
It works like a patchbay: you can create a submix for each of the HW outputs individually.
You can mix audio from the HW inputs (e.g. directly from a microphone, with near-zero latency) with audio from the PC (with the typical round-trip latency, RTL).
On the PC you can organize audio playback however you want, e.g. in the DAW.

For example, use:
AN1/2 as your default sound device for Windows and applications without ASIO support, like YouTube
AN3/4 for audio from the MusicBee player, which has ASIO support
AN5/6 for audio from games, if the game lets you specify the output (most use the default Windows sound device)

Now you can create a submix for each output individually by
- raising the faders of HW inputs
- raising the faders of SW playbacks
to get the desired submix for each of the HW outputs (see the sketch below).
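To make the patchbay idea concrete, here is a minimal sketch of this routing model in Python. It is not RME code, and the channel names and gains are made up; it only shows the structure: every (source, output) pair has its own fader, and each output's submix is the gain-weighted sum of all sources.

```python
# Minimal model of a TotalMix-FX-style routing matrix (illustrative
# only, not RME code; channel names are made up).
from collections import defaultdict

class Mixer:
    def __init__(self):
        # gains[output][source] = fader position in dB for that submix;
        # a missing entry means the source is muted for that output.
        self.gains = defaultdict(dict)

    def set_fader(self, output, source, gain_db):
        self.gains[output][source] = gain_db

    def submix(self, output, frame):
        """Mix one sample frame: sum each source scaled by 10^(dB/20)."""
        return sum(sample * 10 ** (self.gains[output][src] / 20)
                   for src, sample in frame.items()
                   if src in self.gains[output])

mix = Mixer()
mix.set_fader("AN1/2", "SW 1/2 (Windows/YouTube)", 0.0)  # commentary at unity
mix.set_fader("AN1/2", "SW 5/6 (game)", -6.0)            # game turned down a bit
mix.set_fader("AN1/2", "SW 3/4 (MusicBee)", -12.0)       # background music
print(mix.submix("AN1/2", {"SW 1/2 (Windows/YouTube)": 0.5,
                           "SW 5/6 (game)": 0.5,
                           "SW 3/4 (MusicBee)": 0.5}))
```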

This way I could create, e.g., a nice submix of AN1/2, AN3/4, AN5/6:
- audio from a YouTube let's play for a game
- game sound
- MusicBee
This lets me follow the let's play commentary (audio only), play the game with the game sound turned down a bit, and have some nice background music of my own while playing and listening to the YouTube video.

So for each of the HW outputs/submixes, TM FX is like a patchbay where you can define at a very fine granularity what you want to hear and at what level.

Another use case is in a DAW: send the vocalist, soloist, and rhythm group to different outputs.
The audio then appears on different SW playbacks.
And per HW output you can create a submix as you like.
If you have multiple phones outputs, every musician can get their preferred submix (see the short continuation of the sketch below).
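Continuing the toy Mixer sketch above (again purely illustrative, with made-up channel names), per-musician headphone mixes are just additional output columns in the same matrix:

```python
# Two phones outputs fed by the same DAW stems, each with its own submix
# (reusing the Mixer class from the sketch above).
mix.set_fader("Phones 1", "SW 7/8 (vocals)", 3.0)    # vocalist up front in mix 1
mix.set_fader("Phones 1", "SW 9/10 (rhythm)", -6.0)
mix.set_fader("Phones 2", "SW 7/8 (vocals)", -6.0)
mix.set_fader("Phones 2", "SW 9/10 (rhythm)", 0.0)   # rhythm group up front in mix 2
```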

See also: https://forum.rme-audio.de/viewtopic.php?id=34394

After this clarification of HW inputs, SW playbacks, and HW outputs, to answer your question:

As you can also set different reference levels per output on most devices, I would use the faders there (at the HW outputs) to find the correct SPL.
The faders of the HW inputs and SW playbacks only need to be moved to shape a particular submix, e.g. to bring a vocalist to the front of his own submix.

BR Ramses - UFX III, 12Mic, XTC, ADI-2 Pro FS R BE, RayDAT, X10SRi-F, E5-1680v4, Win10Pro22H2, Cub14

3 (edited by Kirby47 2022-01-22 15:56:23)

Re: What is the difference between hardware input and software playback?

ramses wrote:

As you can also set different reference levels per output on most devices, I would use the faders there (at the HW outputs) to find the correct SPL. The faders of the HW inputs and SW playbacks only need to be moved to shape a particular submix.


Thank you.

So, if I want to mix or master, which fader should I raise to get the best playback quality? I think the other faders should stay at 0 dB.

I just don't want any signal loss (such as added background noise or lost dynamics).


https://www.masteringthemix.com/blogs/l … ome-studio
"To take the measurement"

4 (edited by ramses 2022-01-22 16:24:37)

Re: What is the difference between hardware input and software playback?

I already made you a valid proposal: keep the SW playbacks at 0 dB, so that you have the full, unmodified signal in the digital domain. Maybe I was not detailed enough. Thanks for the link; now I understand better what you mean.

You get the final volume by a combination of choosing
- the proper reference level and
- the volume at the HW output.
To maximize SNR, choose a reference level that lets you keep the fader not too far below 0 dB (see the sketch below).
You might need attenuators to be able to choose a volume between 0 and -30 dB.
See also: https://forum.rme-audio.de/viewtopic.php?id=25399 "level mismatches".
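As rough arithmetic for why this matters (a simplified model with an assumed DAC dynamic range; real devices differ): digital attenuation at the output fader lowers the signal but not the converter's noise floor, so the SNR at the analog output drops roughly dB-for-dB.

```python
# Simplified SNR arithmetic (assumed DAC dynamic range; real devices differ).
DAC_DYNAMIC_RANGE_DB = 117.0

def analog_out_snr(fader_db):
    """Digital attenuation lowers the signal but not the converter's noise
    floor, so the SNR at the analog output drops roughly dB-for-dB."""
    return DAC_DYNAMIC_RANGE_DB + fader_db  # fader_db <= 0

print(analog_out_snr(0.0))    # 117.0 dB: fader at unity
print(analog_out_snr(-30.0))  # 87.0 dB: why attenuators / a lower reflevel help
```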

The ADI-2 Pro FS optimizes SNR/dynamics automatically when the Auto Ref Level feature is enabled:
it chooses the ideal reference level as you turn the volume. A toy sketch of that selection logic follows.
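This sketch is only an assumed model of that logic, and the reference levels listed are illustrative, not the ADI-2's actual set: pick the lowest reference level that still reaches the requested analog level, so the remaining digital attenuation stays minimal.

```python
# Toy version of an "auto ref level" selection (illustrative levels,
# not the real device's set): choose the lowest reference level that
# still reaches the requested analog level.
REF_LEVELS_DBU = [4.0, 13.0, 19.0]  # assumed, device-dependent

def pick_ref_level(target_dbu):
    for ref in sorted(REF_LEVELS_DBU):
        if ref >= target_dbu:
            return ref, target_dbu - ref  # (ref level, digital fader in dB)
    ref = max(REF_LEVELS_DBU)             # target above range: best we can do
    return ref, target_dbu - ref

print(pick_ref_level(2.0))   # (4.0, -2.0): only 2 dB of digital attenuation
print(pick_ref_level(16.0))  # (19.0, -3.0)
```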

BR Ramses - UFX III, 12Mic, XTC, ADI-2 Pro FS R BE, RayDAT, X10SRi-F, E5-1680v4, Win10Pro22H2, Cub14

5 (edited by Kirby47 2022-01-22 17:20:37)

Re: What is the difference between hardware input and software playback?

ramses wrote:

I already made you a valid proposal: keep the SW playbacks at 0 dB, so that you have the full, unmodified signal in the digital domain. ... To maximize SNR, choose a reference level that lets you keep the fader not too far below 0 dB.

Thanks! However, I found that in "digital audio workstation mode" (DAW mode) the hardware inputs do not have faders, and I can only adjust the faders of the software playbacks.