On Thursday, July 3, 2003, at 04:26 AM, Arve Knudsen wrote:
I see that the algorithm for 32-bit output in madplay quantizes to 24 bits. I'm a little daft when it comes to fixed-point representations, but why are some of the fractional bits discarded even when you're outputting full 32-bit samples?
This is mostly a matter of convenience and uniformity. I think some of the 32-bit audio hardware out there ignores the least significant byte, so it seemed appropriate to round/dither to the 24th bit -- not that the bits past the 24th are terribly important or accurate anyway.
There's nothing wrong with using all available bits for 32-bit output.
There's one more thing I don't understand: the original sign bit of the fixed-point number isn't kept (as far as I can see); instead, the least significant whole-part bit occupies the sign position. Does this whole-part bit represent the sign? (I'm a little confused here.)
In two's complement notation, the least significant whole-part bit, and all the intervening bits, are guaranteed to be the same as the sign bit whenever the absolute value is less than MAD_F_ONE, because negative values are sign-extended through the whole part. Since clipping ensures this condition, shifting will not affect the sign of the value.