On Apr 5, 2004, at 4:01 AM, Bruce Fitzsimons wrote:
Out of interest, Rob, do you know why I would have had to multiply the libmad output by 2^15 (after mad_f_todouble) to make it equivalent to the (slightly hacked) lame output?
Probably because libmad returns samples between -1 and +1; converting these to signed 16-bit samples requires multiplication by 2^15. This is essentially what minimad's scale() does, with additional rounding and clipping, and without first converting to a floating-point double.