Again, I'd be very interested in a case where this actually happens. The amount of data read here is fixed and predetermined by the frame parameters (bitrate, mode, and sample rate) which also determine the length of the frame.
All it takes is corrupted frame parameters! Any length that is derived from the data content should be checked more stringently.
One solution to the above problem would be to create a bufend pointer in struct mad_bitptr (i.e. mad_bit_init() should be passed both buffer and bufend). Or rationalize much of the code by joining buffer/bufend into a new struct and using that struct everywhere. A rough sketch of the first option follows.
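This sketch assumes the existing mad_bitptr fields (byte, cache, left); the bufend member and the extra mad_bit_init() argument are the proposed additions, not the current libmad API:

    #include <limits.h>   /* CHAR_BIT */

    struct mad_bitptr {
      unsigned char const *byte;    /* current read position */
      unsigned char const *bufend;  /* proposed: one past the last valid byte */
      unsigned short cache;
      unsigned short left;
    };

    void mad_bit_init(struct mad_bitptr *bitptr,
                      unsigned char const *buffer,
                      unsigned char const *bufend)
    {
      bitptr->byte   = buffer;
      bitptr->bufend = bufend;
      bitptr->cache  = 0;
      bitptr->left   = CHAR_BIT;
    }

Every place that currently advances bitptr->byte would then have the end of the buffer available for a bounds check.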
While certainly this is possible, there is always a tradeoff with performance. Having already performed buffer length checks on a macro scale, performing them again on a micro scale tends to gain very little.
The macro-scale checks are not adequate; the segmentation faults do occur, and that is proof enough.
The required checks are quite lightweight, along these lines:
if (bitptr->byte < bitptr->bufend) bitptr->byte++;
The cost is close to zero.
Or, if you were genuinely concerned about pointers going beyond the end of the buffer, you could put some more assertions in your code.
The only downside to this type of check is that it prevents the segmentation fault but doesn't return an error to the caller; instead, it replicates the final byte.
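A minimal sketch of such a bounded fetch, assuming the bufend convention above (one past the last valid byte); the helper name is only illustrative:

    /* bounded byte fetch: the pointer stops advancing at the last valid
       byte, so further reads simply replicate it instead of running off
       the end of the buffer */
    static unsigned char bit_read_byte(struct mad_bitptr *bitptr)
    {
      unsigned char value = *bitptr->byte;

      if (bitptr->byte + 1 < bitptr->bufend)
        bitptr->byte++;

      return value;
    }

Setting a flag when the clamp triggers would be one way to let the decoder report an error to the caller instead of silently repeating data.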
Also, as I said, if you want examples, you would be better off taking a good packet, randomly corrupting it, and feeding it to your test engine (the randomness can be made repeatable by choosing a fixed seed).
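A repeatable corruption harness needs very little code; the function below is only a sketch (the packet buffer and flip count come from whatever test driver feeds the decoder):

    #include <stdlib.h>

    /* flip a few randomly chosen bits of a known-good packet; using a
       fixed seed makes the same corruption reproducible on every run */
    void corrupt_packet(unsigned char *packet, size_t len,
                        unsigned int seed, int nflips)
    {
      srand(seed);
      while (nflips-- > 0)
        packet[(size_t) rand() % len] ^= (unsigned char) (1 << (rand() % 8));
    }

Replaying a failing (seed, nflips) pair against the decoder then gives exactly the kind of concrete crash case asked for above.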