How Android L will improve real-time audio processing
When we’re talking about audio on smartphones, most of the time our conversation turns to speakers: do they offer stereo output, good frequency response, and are they placed wisely? That comes up so often because those details affect so many tasks, from games to video playback and even speakerphone calls. But what if your concern is more about audio processing: taking a signal in, acting upon it, and outputting the result? Some platforms have managed to shine at this task, but Android has often struggled with latency – the delay introduced in this process. That has drastically limited apps’ ability to do real-time processing, but the good news is that it looks like things are about to get a lot better, as we get word of some big latency improvements coming in Android L.
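That in-process-out chain can be sketched in a few lines. Below is a minimal, hypothetical example – a plain gain effect on 16-bit PCM samples – just to make the idea concrete; on a real device the input buffer would come from the microphone and the result would go back out to the speaker, and the whole round trip is where latency accumulates.

```java
// A minimal sketch of the "signal in -> act on it -> output" chain,
// using a simple gain effect on 16-bit PCM samples. The sample data
// here is hypothetical test input, not real microphone audio.
public class GainEffect {
    // Multiply each sample by a gain factor, clamping to the 16-bit range
    // so loud input doesn't wrap around and distort.
    static short[] process(short[] input, double gain) {
        short[] output = new short[input.length];
        for (int i = 0; i < input.length; i++) {
            long scaled = Math.round(input[i] * gain);
            if (scaled > Short.MAX_VALUE) scaled = Short.MAX_VALUE;
            if (scaled < Short.MIN_VALUE) scaled = Short.MIN_VALUE;
            output[i] = (short) scaled;
        }
        return output;
    }

    public static void main(String[] args) {
        short[] in = {1000, -2000, 30000};
        short[] out = process(in, 2.0);
        // 30000 * 2 overflows 16 bits, so it clamps to 32767
        System.out.println(out[0] + " " + out[1] + " " + out[2]); // 2000 -4000 32767
    }
}
```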
Now, there’s always going to be some latency – it takes time to sample audio, process it, and output it again – but the trick is to get it below the point at which our ears and brains can notice the delay. Currently, in some cases on Android, audio latency can be as high as 200 milliseconds, and while one-fifth of a second may not sound that long, it’s nowhere near low enough to be perceived as real time.
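The arithmetic behind those figures is simple: a buffer of N frames at a given sample rate holds N divided by the rate seconds of audio, and every buffer in the chain adds that much delay. The buffer sizes below are illustrative, not Android’s actual internal values.

```java
// Back-of-the-envelope latency math: how long a buffer of a given
// size "holds" audio at a given sample rate. Buffer sizes here are
// illustrative assumptions, not Android's real configuration.
public class LatencyBudget {
    // Delay contributed by one buffer, in milliseconds.
    static double bufferLatencyMs(int frames, int sampleRateHz) {
        return 1000.0 * frames / sampleRateHz;
    }

    public static void main(String[] args) {
        // A 9600-frame buffer at 48 kHz holds 200 ms of audio
        System.out.println(bufferLatencyMs(9600, 48000)); // 200.0
        // Hitting a 10 ms target at 48 kHz means buffers of ~480 frames
        System.out.println(bufferLatencyMs(480, 48000));  // 10.0
    }
}
```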
According to Google’s Glenn Kasten, the rewritten audio APIs in Android L are designed to minimize both input and output latency, and while the team hasn’t quite hit the 10-millisecond mark it’s been shooting for, the improvements should be large enough to make things like real-time voice effects or karaoke apps possible.