Topic: Bit Test (32 bit)
Hi,
I ran the Bit Test, and 16 bit and 24 bit files pass and are shown as passed.
32 bit files are also shown as passed, but they are shown as 24 bit, not 32 bit.
Could anyone verify?
Give us a bit of info to work with, please:
Mac/Windows/OSX?
What software?
What kind of connection to the DAC?
ch 34.21: iOS, AES, SPDIF and ADAT are limited to 24 bit.
It is a Mac, OS X Mojave 10.14.6, tested with the Colibri DAC DSD Test app and QuickTime.
This also happens to me using either Audirvana or Apple Music on a Catalina MacBook Pro via USB to an ADI-2 Pro FS. I never thought to report it as I never use 32 bit, but the same thing happens: the bit test passes at 32 bit, but the ADI's screen says 24 bit.
Reading the manual helps!
I thought that was kind of a joke, but it is not! On page 66 of the manual, in the well hidden chapter named "Bit Test" (so cryptic!), we can read:
Notes:
iOS, AES, SPDIF and ADAT are limited to 24 bit.
Some players in Mac OS X offer a Direct Mode, using 32 bit integer in non-mixable format. The 32 bit test might still fail. At this time only HQPlayer 3.20 is known to pass.
SPDIF/ADAT (AES) are checked behind clocking. Therefore the unit needs to be synchronized correctly to the digital input signal.
MPD developer here. On Mac, the only internal format of CoreAudio is 32 bit floating point. All audio formats are first converted to 32 bit float, then mixed together. That's how the OS audio stack works.
This means that if we use CoreAudio, 32 bit integer audio is first converted to float, and then back to integer when the OS sends the stream to the device. Through the conversion the last bits may be lost, as it is not a 1:1 conversion. So as long as CoreAudio is used for output, it cannot pass the test.
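Just to illustrate (a minimal sketch, not actual MPD or CoreAudio mixer code): Float32 carries a 24-bit significand, so the low bits of a full 32 bit sample cannot survive a round trip through a float-based mixer.

```swift
import Foundation

// Minimal sketch: a 32 bit PCM sample round-tripped through Float32,
// the way a float-based mixer would handle it. The low byte is lost
// because Float32 only carries a 24-bit significand.
let sample: Int32 = 0x7FFF_FF01                        // non-zero low byte
let normalized = Float32(sample) / Float32(Int32.max)  // what a float mixer works with
let back = Int32((normalized * Float32(Int32.max)).rounded())
print(String(format: "in:  %08X", sample))             // in:  7FFFFF01
print(String(format: "out: %08X", back))               // out: 7FFFFF00 (low byte gone)
```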
An application can, however, bypass the OS audio framework and use the system HAL API instead. That way your application has full control over the device and can pass integers to it directly. Such code is messy and hard to maintain. We just decided not to implement that.
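For the curious, here is a rough sketch of one piece of what talking to the HAL directly involves: requesting exclusive ("hog") access to the device so nothing else is mixed into the stream. This is my own minimal illustration, not MPD code; error handling, releasing the device, and setting the physical stream format are left out.

```swift
import CoreAudio
import Darwin

// Hedged sketch: ask the CoreAudio HAL for exclusive ("hog") access to a device.
// `deviceID` is assumed to have been looked up elsewhere, e.g. via
// kAudioHardwarePropertyDefaultOutputDevice.
func hogDevice(_ deviceID: AudioObjectID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyHogMode,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster)
    var pid = getpid()   // our PID claims the device; writing -1 releases it again
    return AudioObjectSetPropertyData(deviceID, &address, 0, nil,
                                      UInt32(MemoryLayout<pid_t>.size), &pid)
}
```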
32 bit for audio playback is not necessary anyway; modern DACs can hardly preserve linearity beyond 24 bits.
Thanks Ning...
Modern DACs are doing well to preserve 20 bits (120 dB).
That ignores the rest of the system and its noise floor. Throw some tubes in the mix? You're down to the equivalent of 10 bit resolution.
What would ANY of us do with 32 bits (192 dB)? To worry about any of this would mean we LOVE our data padded with lots of expensive zeros. Given the preponderance of fake hi-res... this would indeed be the case.
Curt
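For anyone wondering where those dB figures come from: each bit of integer resolution adds roughly 6.02 dB of theoretical dynamic range (20·log10 2 per bit). A quick sanity check of the numbers quoted above:

```swift
import Foundation

// Theoretical dynamic range of an N-bit quantiser, ignoring dither and noise shaping.
func dynamicRangeDB(bits: Int) -> Double {
    Double(bits) * 20.0 * log10(2.0)   // ≈ 6.02 dB per bit
}
for bits in [16, 20, 24, 32] {
    print("\(bits) bit ≈ \(Int(dynamicRangeDB(bits: bits).rounded())) dB")
}
// 16 bit ≈ 96 dB, 20 bit ≈ 120 dB, 24 bit ≈ 144 dB, 32 bit ≈ 193 dB
```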
Curt, don't mix up resolution and noise floor.
There are discernible signals below or "inside" the noise floor in most systems, especially those with higher bit depths.
The noise can even lift information from below the theoretical resolution into the transferred dynamic range -> dither noise.
This is true if the noise is combined with the signal before conversion.
Noise that is added after conversion doesn't do this.
With 24 bit, e.g. on the ADI-2, a lot of post-amplification is needed to make the noise audible at all, so in practice my statements are of minor interest in this case.
When digital started in the early '80s we had just 14 bits, and the balance between noise and resolution was a problem.
At that time 1/4" 15 ips analog studio tape with Dolby SR or Telcom C4 had a better S/N ratio than digital, and better low-level resolution anyway.
Bit depth has a lot to do with noise floor for me too...
https://youtu.be/cIQ9IXSUzuM?t=522
And Curt is right: who needs 32 bit audio files for home listening?
Thanks for the explanation here, it clears up some questions I had with the test.