Order of the Butterfly
Posts: 419 from 2003/8/18
@GK_LKA: From the eaudio.device readme
Quote:
2) AHI performance
AHI is an audio layer: every time a task wants to play a sample, the AHI library is called before the sound data is sent to the audio board's chips. That is how it works. For this reason, code should send longer audio streams a few times per second rather than short audio streams many times per second. The old Paula chip performed better in the second case. The narrator.device is an example: to reproduce speech it sends very short stereo samples to Paula (through the audio.device) many times per second. AHI, even through its low-level API, cannot reproduce such audio at a reasonable speed, so in most cases no sound is output at all. This limit can only be fixed by faster AHI versions, or perhaps on faster audio hardware. From the AHI developer documentation (note that the sample-unrolling technique is not possible for the audio.device case, since it is a real-time audio processing layer!):
"Also note that playing very short sounds will be very CPU intensive, since there are many tasks that must be done each time a sound has reached its end (like starting the next one, calling the SoundFunc, etc.). Therefore, it is recommended that you "unroll" short sounds a couple of times before you play them. How many times you should unroll? Well, it depends on the situation, of course, but try making the sound a thousand samples long if you can. Naturally, if you need your SoundFunc to be called, you cannot unroll."
see: http://xoomer.alice.it/nexusdev/nexusdev/eaudio.device/eaudio.device_readme.html
Leo.
[ Edited by Leo on 2006/6/26 16:54 ]
Nothing hurts a project more than developers not taking the time to let their community know what is going on.