Audio effects in Android Jelly Bean
Each new version of the Android operating system brings new features and functions. With the launch of Jelly Bean (version 4.1 and up), new audio functions were added. This article looks at two of these features – voice effects accessible from the application layer, and the fast mixer – and examines the implications for device manufacturers and Android application developers.
Voice effects API
For audio playback, Android 2.3 (Gingerbread) made a small set of audio playback effects available to the application layer, including bass boost, reverb, equalizer and virtualizer.
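As a brief sketch, these playback effects live in the `android.media.audiofx` package and attach to an audio session ID; the helper method and the effect settings below are illustrative choices, not prescribed values:

```java
import android.media.MediaPlayer;
import android.media.audiofx.BassBoost;
import android.media.audiofx.Equalizer;

// Attach two of the Gingerbread-era playback effects to a player's audio session.
void applyPlaybackEffects(MediaPlayer player) {
    int sessionId = player.getAudioSessionId();

    // The first constructor argument is the priority relative to other
    // controllers of the same effect engine (0 = normal).
    Equalizer equalizer = new Equalizer(0, sessionId);
    equalizer.setEnabled(true);

    BassBoost bassBoost = new BassBoost(0, sessionId);
    bassBoost.setStrength((short) 500); // strength on a 0-1000 scale
    bassBoost.setEnabled(true);
}
```

The same pattern applies to the reverb and virtualizer effects, which take the identical priority/session-ID constructor arguments.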
In addition, Android 4.0 (Ice Cream Sandwich) included a set of audio capture effects for speech processing: Acoustic Echo Cancellation (AEC), Noise Suppression (NS) and Automatic Gain Control (AGC). In ICS, these are not accessible to applications but are only available to the device’s hardware platform. They are part of the hardware abstraction layer (HAL) and are applied automatically, independently of the application.
In Jelly Bean, the set of application-accessible effects has been extended to include the audio capturing effects for AEC, NS and AGC.
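A minimal sketch of how an application might request these capture effects on a recording session in Jelly Bean (API level 16) – the sample rate and audio source below are illustrative choices:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.media.audiofx.AcousticEchoCanceler;
import android.media.audiofx.AutomaticGainControl;
import android.media.audiofx.NoiseSuppressor;

// Create a recorder and attach whichever capture effects the device exposes.
// Each create() returns null if the platform has no embedded implementation.
AudioRecord createVoiceRecorder() {
    int sampleRate = 16000; // illustrative; use what your use case needs
    int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(
            MediaRecorder.AudioSource.VOICE_COMMUNICATION,
            sampleRate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, bufferSize);

    int sessionId = recorder.getAudioSessionId();
    if (AcousticEchoCanceler.isAvailable()) {
        AcousticEchoCanceler aec = AcousticEchoCanceler.create(sessionId);
        if (aec != null) aec.setEnabled(true);
    }
    if (NoiseSuppressor.isAvailable()) {
        NoiseSuppressor ns = NoiseSuppressor.create(sessionId);
        if (ns != null) ns.setEnabled(true);
    }
    if (AutomaticGainControl.isAvailable()) {
        AutomaticGainControl agc = AutomaticGainControl.create(sessionId);
        if (agc != null) agc.setEnabled(true);
    }
    return recorder;
}
```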
Better voice quality
In Jelly Bean, access to these effects is provided via the audio effects configuration file in AudioFlinger. This allows applications to detect the presence of the embedded effects and control them. More importantly, it allows an application to turn off its own audio effects if it finds the embedded effects present. This results in three major benefits.
Firstly, only a single set of effects is applied to the streams. Applying both the application’s effects and the embedded effects (i.e. double processing) can actually be detrimental to sound quality.
Secondly, using the embedded effects delivers superior audio quality. Unlike the device-agnostic effects incorporated into applications, the embedded effects are tailored to the specific hardware of the device.
Finally, using the embedded AEC solution has an important additional benefit. Due to the non-real-time nature of the Android OS, performing echo cancellation can be a significant challenge. Delays between the speaker output and microphone input, plus delays introduced by internal circuitry such as buffers, must be known accurately. To achieve the best results, the AEC algorithm needs to be ‘close’ to the microphone and speaker hardware. The AEC can then account for any delays and ensure echo cancellation is always synced perfectly. When an application uses its own AEC, buffer delay complexities can break echo synchronization, resulting in glitches and drops in sound quality.
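The decision described above might be sketched as follows; `MyAppEchoCanceller` is a hypothetical stand-in for an application's own software AEC, not a platform class:

```java
import android.media.AudioRecord;
import android.media.audiofx.AcousticEchoCanceler;

// Prefer the embedded AEC; fall back to the app's own canceller only when
// the platform provides none. MyAppEchoCanceller is a hypothetical app class.
void chooseEchoCanceller(AudioRecord recorder, MyAppEchoCanceller appAec) {
    AcousticEchoCanceler embedded = null;
    if (AcousticEchoCanceler.isAvailable()) {
        embedded = AcousticEchoCanceler.create(recorder.getAudioSessionId());
    }
    if (embedded != null) {
        embedded.setEnabled(true);
        appAec.setEnabled(false); // avoid double processing
    } else {
        appAec.setEnabled(true);  // no embedded AEC on this device
    }
}
```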
Optimal AEC integration is a balance between sitting close to the hardware and running centrally on the application processor, which provides the flexibility required to support different use cases (audio recording, cellular and VoIP calling, and speech recognition). The updated audio effects framework in Jelly Bean provides the best of both worlds.
By providing access to and control of these additional audio effects in the application layer, Jelly Bean provides application developers with a suite of audio effects for Acoustic Echo Cancellation, Noise Suppression and Automatic Gain Control, tailored to each individual device. This can save time as developers can focus on the application functionality without having to worry about developing audio effects themselves. And for the device manufacturer and user, using the embedded audio effects guarantees the best sound quality from their device. In addition, the framework allows device manufacturers to replace the list of audio effects to meet their specific requirements.
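For illustration, the configuration file in question, `audio_effects.conf`, registers effect libraries and the effects they expose. The excerpt below is a hedged sketch: the library path follows the AOSP pre-processing bundle convention, but the exact paths and effect UUIDs are implementation-specific and defined by each vendor:

```
# Illustrative excerpt of a vendor audio_effects.conf.
# Paths and UUIDs are device-specific; the UUID below is a placeholder.
libraries {
  pre_processing {
    path /system/lib/soundfx/libaudiopreprocessing.so
  }
}
effects {
  aec {
    library pre_processing
    uuid <implementation-specific-UUID>
  }
}
```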
The fast mixer
Acoustic effects are crucial for the correct delivery of audio such as speech and music, ensuring voices are clear and sounds are sharp. In ICS, all streams (up to 32) can have audio effects applied and are then mixed into a single output channel for playback. However, not all audio needs to be treated with acoustic effects – for example, keypress tones and other audio signals where low latency is more important than audio effects. For these types of audio signal, the latest Jelly Bean release offers seven additional ‘fast track’ streams.
While applying effects to an audio stream does not take very much time, it does introduce a slight delay. The seven ‘fast track’ streams supported by Jelly Bean enable applications to potentially avoid this delay for sounds that do not need audio effects processing.
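Short UI sounds of this kind are typically played through `SoundPool`, which on a Jelly Bean device can be served by a fast-track stream when the sample format allows it. In this sketch, `context` and the `R.raw.keypress` resource are assumptions:

```java
import android.content.Context;
import android.media.AudioManager;
import android.media.SoundPool;

// Load a short keypress tone and play it once decoding completes.
void playKeypressTone(Context context) {
    // maxStreams = 1, system stream type, default quality (Jelly Bean-era API)
    SoundPool pool = new SoundPool(1, AudioManager.STREAM_SYSTEM, 0);
    pool.load(context, R.raw.keypress, 1); // R.raw.keypress is an assumption
    pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
        @Override
        public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
            if (status == 0) {
                // volumes 1.0, priority 0, no loop, normal playback rate
                soundPool.play(sampleId, 1.0f, 1.0f, 0, 0, 1.0f);
            }
        }
    });
}
```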
This could mean a saving of approximately 20 ms, or possibly more. However, the actual latency introduced by the mixer depends on the hardware platform. The fast mixer resides in the HAL and runs on the defined minimum block size. If the minimum block size is large – comparable, say, to the normal mixer block size – then fast-track streams will take roughly as long as normal streams.
More detailed information about the Jelly Bean Audio Flinger can be found here: http://blog.csdn.net/guoguodaern/article/details/7984136