Greetings to all,
I'm a new user of MAD and it has immediately become my standard for mp3 listening and decoding. The first thing that struck me when using MAD was something I had never encountered before: the mp3 clipping indicator. Low-bit distortion is one thing, but high-bit problems are surely worse, if not the worst! And when I saw MAD report clipping even with files produced by the best encoders, like LAME, it rang a red alert for me: are we constantly exposed to clipping distortion when using Winamp's decoder or other decoding apps?
Research on the web showed that there are people who claim that clipping is a well-known artifact of lossy mp3 compression. For me this is still a mystery, since I had the misconception (or is it true?) that a well-established or even commercial encoder cannot possibly produce an mp3 that exceeds the 100% level. I am not familiar with the binary PCM structure of WAV files, but I know it uses 16-bit numbers, so clipping is by definition either the presence of a 17th bit (impossible) or the miscoding of a music level of 99% as 100% (which means that everything above 99% is clipped because it is represented as a constant 100%). On the other hand, I understand that an encoded mp3 level of 99% could be interpreted by a decoder as a level of 100%, which would mean that clipping can indeed occur, but *only* when the file is badly decoded.
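To make my mental picture concrete, here is a rough sketch of what I imagine happens when a decoder writes 16-bit PCM (the function name and the floating-point sample representation are purely my own illustration, not anything taken from MAD): whatever the decoder reconstructs outside the 16-bit range has nowhere to go and must be forced to full scale, which is what I understand clipping to be.

#include <stdint.h>

/* Hypothetical output stage: convert one decoded sample (here a double in
 * the nominal range -1.0 .. +1.0) to 16-bit PCM.  Lossy decoding can
 * reconstruct values slightly outside that range, and this is the point
 * where they would get clipped to full scale. */
static int16_t to_pcm16(double sample, int *clipped)
{
    long v = (long)(sample * 32768.0);

    if (v > 32767) {            /* above positive full scale */
        v = 32767;
        *clipped = 1;
    } else if (v < -32768) {    /* below negative full scale */
        v = -32768;
        *clipped = 1;
    }
    return (int16_t)v;
}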
For me, three possibilities exist:
a) The offending mp3 does not contain clipped samples but MAD misreports them (rather improbable; I do trust the programming skills of MAD's author)
b) The offending mp3 does contain clipped samples which were produced by bad encoding (therefore MAD is correct)
c) The offending mp3 does not contain clipped samples, but a decoder (either because of bad programming or because of the nature of mp3 lossy compression) *could* misinterpret them, so a decoder should take corrective action (i.e. attenuation like MAD does; I try to illustrate what I mean right after this list).
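By "corrective action" in (c) I mean something like the following toy sketch (entirely my own invention, not necessarily how MAD's attenuation actually works): once a sample would clip, lower the playback gain so that later peaks fit into 16 bits instead of being hard-clipped.

#include <stdint.h>

/* Toy illustration of corrective attenuation: when a decoded sample would
 * exceed full scale, reduce the gain applied to everything that follows
 * just enough to bring such peaks back within range. */
static double gain = 1.0;

static int16_t attenuate_and_quantize(double sample)
{
    double v = sample * gain;

    if (v > 1.0 || v < -1.0) {
        gain /= (v > 0.0) ? v : -v;   /* shrink the gain by the overshoot */
        v = (v > 0.0) ? 1.0 : -1.0;   /* this peak itself ends at full scale */
    }
    return (int16_t)(v * 32767.0);
}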
I cannot easily find a way to produce convincing evidence as to which of the above is the case. At first sight one could propose: "just take an mp3, decode it with a standard decoder, and see if the result contains clipped samples". However, this is not good enough for me, for the following reason: if I use 10 standard decoders I'll get 10 WAVs which will almost certainly differ. The next step would be to *visually* inspect the WAVs for clipping, because I do not think that any WAV editor is capable of detecting and reporting (and possibly taking action on, like MAD's attenuation) levels above 100% when opening a WAV or importing an mp3 (am I wrong here?).
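The best I could come up with myself is a brute-force scan of the decoded output, something along these lines (it assumes the WAV has been stripped to raw 16-bit little-endian PCM and is run on a little-endian machine; the idea is that runs of consecutive full-scale samples hint at clipping):

#include <stdio.h>
#include <stdint.h>

/* Crude clipping check over headerless 16-bit PCM: count samples sitting
 * at full scale and the longest run of consecutive full-scale samples. */
int main(int argc, char **argv)
{
    FILE *f;
    int16_t s;
    long total = 0, at_full_scale = 0, run = 0, longest_run = 0;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;

    while (fread(&s, sizeof s, 1, f) == 1) {
        total++;
        if (s == 32767 || s == -32768) {
            at_full_scale++;
            if (++run > longest_run)
                longest_run = run;
        } else {
            run = 0;
        }
    }
    fclose(f);

    printf("%ld samples, %ld at full scale, longest run %ld\n",
           total, at_full_scale, longest_run);
    return 0;
}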
In conclusion, to rephrase my (naive?) questions: how is it in principle possible for MAD to detect clipping, how does MAD do it, and where exactly does MAD detect it: a) is clipping stored inside the mp3 file content? b) is it an artifact of decoding programs?
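My own guess, after glancing at the minimad.c example that ships with libmad, is that the decoder works internally with fixed-point samples much wider than 16 bits, so values beyond full scale survive up to the final conversion to PCM and can be counted there; something like the following (the clip counter is my own addition, and I may well be misreading the code):

#include <mad.h>

/* Conversion of libmad's high-resolution fixed-point samples to 16-bit PCM,
 * modelled on the scale() routine in the minimad.c example; out-of-range
 * values become visible here, before they are quantized away. */
static signed int scale(mad_fixed_t sample, unsigned long *clipped)
{
    /* round */
    sample += (1L << (MAD_F_FRACBITS - 16));

    /* clip: anything at or beyond full scale is out of range */
    if (sample >= MAD_F_ONE) {
        sample = MAD_F_ONE - 1;
        ++*clipped;
    } else if (sample < -MAD_F_ONE) {
        sample = -MAD_F_ONE;
        ++*clipped;
    }

    /* quantize to 16 bits */
    return sample >> (MAD_F_FRACBITS + 1 - 16);
}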
In addition, can you suggest another reliable means of verifying whether or not an mp3 contains clipped content?
Excuse me for the lengthy post, but I think the issue I bring forward is an important one. For one, I would hate the idea of being forced to first normalise all my WAVs to a safe level, e.g. 95%, before encoding, just to prevent eventual clipping upon decoding... Moreover, if my worries are justified, we should definitely propose "response to mp3 clipping" as a very important criterion for the evaluation of mp3 decoders (and encoders?) on that famous www page that does "objective" mp3 software comparisons.
Best regards, Mits