Hi, I saw a brief mention of this in the archives from over a year ago, comparing MAD with ARM's decoder, but nothing definitive.
I realize this is rather difficult to quantify, but does anyone have data or an estimate of the processing time of a MAD decode on a known CPU, clock speed, and architecture, with stream I/O overhead (read()/mmap()/memory access) either factored out or characterized?
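In case it helps frame the question, here's a rough sketch of the kind of measurement I have in mind: read the whole file into memory up front so read()/mmap() overhead drops out, then time only libmad's frame decode plus subband synthesis. The file name and the use of clock() are just placeholders, not a claim about the right methodology.

/* Minimal sketch: time MAD decode with I/O excluded. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mad.h>

int main(int argc, char **argv)
{
    /* Load the entire stream into memory so file I/O is excluded from timing. */
    FILE *fp = fopen(argc > 1 ? argv[1] : "test.mp3", "rb");
    if (!fp) { perror("fopen"); return 1; }
    fseek(fp, 0, SEEK_END);
    long len = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    unsigned char *buf = malloc(len);
    if (!buf || fread(buf, 1, len, fp) != (size_t)len) return 1;
    fclose(fp);

    struct mad_stream stream;
    struct mad_frame  frame;
    struct mad_synth  synth;
    mad_stream_init(&stream);
    mad_frame_init(&frame);
    mad_synth_init(&synth);
    mad_stream_buffer(&stream, buf, len);

    unsigned long frames = 0;
    clock_t t0 = clock();                     /* CPU time, not wall time */
    for (;;) {
        if (mad_frame_decode(&frame, &stream) == -1) {
            if (MAD_RECOVERABLE(stream.error))
                continue;                     /* skip a bad frame */
            break;                            /* e.g. MAD_ERROR_BUFLEN: end of buffer */
        }
        mad_synth_frame(&synth, &frame);      /* include subband synthesis in the cost */
        frames++;
    }
    clock_t t1 = clock();

    printf("%lu frames decoded in %.3f s CPU\n",
           frames, (double)(t1 - t0) / CLOCKS_PER_SEC);

    mad_synth_finish(&synth);
    mad_frame_finish(&frame);
    mad_stream_finish(&stream);
    free(buf);
    return 0;
}

Numbers from something along these lines, on a known CPU and clock speed, are exactly what I'm after.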
Hoping someone has already beaten this path...
Thanks,
-john