Editor of PAR
While pro audio remains respectably monogamous to iOS—its first-love mobile platform—the world overwhelmingly runs on Android. Worldwide, Android accounts for 84.7 percent of smartphones1 and at least 60 percent of tablets, and rising.2
Android OS—with thousands of available apps—currently offers its users no reasonable tools to, for example, capture multi-microphone/multi-track performance sessions on a tablet, record overdubs, effect/edit tracks, mix, and distribute audio productions without crippling audio I/O latency. Yet there’s an iOS app for that—dozens, actually—and they perform spectacularly.
Meanwhile, Ardour and Harrison MixBus are both fine DAWs available on Linux, the open-architecture OS (with the cute penguin mascot) that shares its kernel with Android (branded with the little green robot with antennas).
The Latency Hurdle
Overcoming a best-case 20 ms full round-trip I/O latency in tablets seems easy enough—that is, if the best case were consistently that low. Yet the fact is, audio latency in Android devices varies terribly, based on each manufacturer’s own added code and its accompanying variances; expect latencies of 250 to 300 ms, regularly and randomly. This is simply unacceptable for audio production tools and an issue that Google, Android’s current developer, will have to make a greater priority, with an updated API* as well as some form of enforced standards. After all, some tablets process audio at 48 kHz, other “smart” devices at 32 kHz, while games often use 24 kHz and streaming audio is widely variable (QuickTime, for example, uses rates ranging from 8 kHz to 48 kHz). Thus Android audio requires latency-inducing resampling code.3 Alternatively, tablet manufacturers must work around current Android code by other means—or, possibly, third-party pro audio industry Android advocates could provide some interesting workarounds.
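To see how those latency figures accumulate, here is a minimal back-of-the-envelope sketch. The stage names and buffer sizes below are illustrative assumptions, not measurements from any real device; the point is simply that each buffering stage (capture, resampling, framework mixing, playback) adds its own buffer-length delay, and those delays sum over the round trip.

```python
def stage_latency_ms(frames: int, rate_hz: int) -> float:
    """Latency contributed by one buffer of `frames` samples at `rate_hz`."""
    return 1000.0 * frames / rate_hz

# Hypothetical round trip: the buffer sizes here are assumptions for
# illustration only, not figures from any shipping Android device.
stages = [
    (256, 48000),   # capture buffer
    (512, 48000),   # resampler / format-conversion window
    (2048, 48000),  # framework mixing queue
    (256, 48000),   # playback buffer
]
total = sum(stage_latency_ms(f, r) for f, r in stages)
print(f"round-trip estimate: {total:.1f} ms")  # prints 64.0 ms for these buffers
```

Even this generous sketch lands at tens of milliseconds; once vendor code adds deeper queues, the 250–300 ms figures quoted above are easy to reach.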
Google now manages the most popular mobile OS in history. To be fair, pro audio’s specific needs are obviously far down Android’s long list of development priorities, but that’s not to say Google isn’t concerned about audio I/O latency. Who opens the Android gate to such pro audio development? Is Android’s market dominance even that attractive to pro audio product developers, or—perhaps more pointedly—is pro-quality audio production an attractive market to tablet manufacturers?
I recently spoke with two experts also interested in this topic: Paul Davis, developer/programmer of Ardour and the JACK Audio Connection Kit**, and the second-earliest programmer at Jeff Bezos’ Amazon.com; and Ben Loftis, designer/engineer and developer of MixBus for Harrison Consoles.
My conversation with Davis on the subject started with this disclaimer. “Let me first qualify anything I am about to say: I’m not an expert with Android audio. In fact, at this point, I wouldn’t even say that I’m an expert on audio inside of the Linux kernel.”
Yet over the last few years, Davis has continued to develop his Ardour project with investment from and collaboration with Harrison Consoles and, later, development and promotional support from Solid State Logic, the latter an arrangement publicly announced in late 2006 at the San Francisco AES Convention. Davis is a DAW expert with quite an interesting and scenic viewpoint.
“There’s a historical background to this, and that problem goes back to the guys that worked on the APIs and libraries that application developers use,” begins Davis. “They didn’t really care about low latency audio. I’m not even sure that they knew about it, understood it or even thought that it was a problem to pay specific attention to. The specific difference is, in a world that’s derived from UNIX-type operating systems and that didn’t mutate the way Apple did, there are essentially two models for interacting with audio hardware. One is a push model; one is a pull model.”
“In the push model, the application is responsible for deciding, ‘hey, I have a big chunk of samples and I want to send it out to the device right now and it’s going to get played.’ It’s a nice, simple design that works really well for all the traditional UNIX audio applications where you just want to make a beat, play one audio file, etc. Just grab the samples, queue up a bunch or all of it, push it toward the device, and some layer in the software will just take care of the rest. This is not so good for low latency in general because of loads of buffering between the application and the hardware.”
“The pull model actually wasn’t pioneered by Apple but they were the first company to impose it on application developers when they switched to Core Audio with OS X. You had to deal with the new model. The device is going to wake up when it has audio to record or needs more audio for playback, and the application handling that has to deal with it right now. With this model, it’s easy to get very low latency. Of course, you can lay other things on top; it’s easy to write high latency stuff on top of the pull model. All the low-latency systems—ASIO, Core Audio, and systems like JACK, which sit on top of those—use a pull model.”
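Davis’ push/pull distinction can be sketched in a few lines. The toy “device” below is an assumption for illustration—neither Android’s nor Core Audio’s actual API—but it captures the structural difference: in the push model the app enqueues audio ahead of the hardware and every queued buffer becomes latency, while in the pull model the device requests exactly one buffer at the moment it needs it.

```python
import queue

BUFFER = 64   # frames per device period (illustrative size)

# Push model: the app enqueues audio whenever it likes; intermediate
# layers buffer it, so worst-case latency is the whole queue depth.
push_queue = queue.Queue()
for _ in range(16):                       # the app gets 16 buffers ahead
    push_queue.put([0.0] * BUFFER)
queued_frames = push_queue.qsize() * BUFFER
print("push model, frames in flight:", queued_frames)   # 16 * 64 = 1024

# Pull model: the device calls back for each buffer exactly when it
# needs it, so the app is never more than one buffer ahead of hardware.
def render_callback(nframes):
    return [0.0] * nframes                # must be filled right now

def fake_device(callback, periods=3):
    out = []
    for _ in range(periods):
        out.extend(callback(BUFFER))      # device pulls on its own schedule
    return out

rendered = fake_device(render_callback)
print("pull model, frames in flight:", BUFFER)          # just one buffer: 64
```

This is why ASIO, Core Audio and JACK all impose the callback: the latency floor drops from “everything queued” to a single period.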
“In developing Android, they either didn’t know this, or didn’t care. So, all of the audio APIs in Android became push-based. The idea is, ‘I’m an application. I have some audio playback. I just need a pathway through to the device.’ In Android, historically, the model was Java—high-level languages, applications that need boatloads of buffering. Up until 2-3 years ago, that’s the reason why Android sucked. They just weren’t in that headspace at all, and there was nothing available for developers. Recently, this has begun to change.
“Google has a relatively small team focused on trying to redesign the core audio layers inside Android, and they have made some significant progress so far. They have also added the OpenSL ES API, which does provide a pull-model option for application developers, although it comes with a lot of additional stuff that pro audio apps don’t want or need.”
“Apple said to their developers, ‘We don’t care how much this is going to disrupt your other application designs. With Core Audio, this is how audio applications have to be written, end of story.’ My impression is that Android management, whatever that means, really doesn’t want to turn around and announce this kind of change.”
“My sense is that there’s an unwillingness to commit to, ‘we have to have a low latency path.’ There is also the problem that Android itself is not a product in the way iOS is. Getting this stuff to work the way we need it for pro audio requires the whole Android ecosystem to agree on some basic goals and rules, and that seems quite unlike how Android has worked so far.”
Loftis’ MixBus DAW is built on the codebase Paul Davis developed for Ardour. On that foundation, Loftis gave MixBus an aesthetic reflecting what you’d hope to see from an engineer at a classic, large-format console manufacturer like Harrison.
Loftis dove into the Linux pool 12 years ago. “We first started with Linux when we changed our automation system from Apple to a system called BeOS, which marketed itself as the multimedia operating system,” recalls Loftis. “TASCAM, RADAR (iZ Technology), Harrison, Level Control Systems, Roland and several others bought into the idea that BeOS was going to be the next big multimedia platform—the replacement for Mac. That didn’t happen. BeOS changed focus to consumer things and we were stuck with an automation system that we had written that had to be ported away to something different. We looked around at Windows and Mac and decided that we couldn’t afford to have another company shift their focus and make us re-write our software every few years, so we got into Linux development.”
Upon Android’s burgeoning success just a couple of years ago, Linux audio folks like Loftis were stoked. “The thought was, ‘If we can put JACK on Android, then all this cool software we’ve written—metering and recording, MIDI, virtual instruments and synthesizers—could get immediately ported over.’ They found that Android’s infrastructure was incapable, raised a fuss, started a mailing list and hammered on Google to address the issue. Google has tried to address some of these things, so that users can use Android for ‘semi-pro audio,’ but there’s a lot of existing hardware and things that make it hard to backtrack. The fact of the matter is, the underlying protocol is unidirectional only; it’s not a record/playback system. You have separate record and playback systems with separate clocks. So, when Google announced that this was what you’re going to get, everyone (in pro audio) just said, ‘forget it, then.’”
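Why do separate record and playback clocks matter so much? Here is a rough sketch of the arithmetic, assuming a hypothetical 100 ppm mismatch between two nominally 48 kHz converters (the tolerance figure is an assumption for illustration, not a spec for any particular device). Without a shared clock or active drift compensation, the two streams slide apart steadily.

```python
# Two nominally 48 kHz converters that actually differ by 100 ppm.
REC_RATE = 48000.0
PLAY_RATE = 48000.0 * (1 + 100e-6)   # +100 ppm, an illustrative tolerance

def drift_samples(seconds: float) -> float:
    """Samples the playback clock gains on the record clock over `seconds`."""
    return (PLAY_RATE - REC_RATE) * seconds

for t in (60, 600, 3600):
    print(f"after {t:5d} s: {drift_samples(t):8.1f} samples of drift")
```

At 100 ppm the streams drift apart by 4.8 samples every second—over a tenth of a second per hour—which is why overdubbing against a playback track on such a system falls apart without resampling or a common clock.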
Loftis poses that the pro audio industry may ultimately regret “hitching wagons” to consumer OS systems. “Don’t get me wrong, there are great iOS products,” disclaims Loftis, “but the problem is that we, as the audio industry, are dependent on consumer electronics to make the bigger version, the smaller version, the faster version or the cheaper version to meet our needs. We are not the tail that wags the dog; they don’t make larger iPads because the audio industry wants it. They may do that anyway. But they’re not going to do it because the audio industry wants it.”
But wouldn’t “audio-optimized” Android tablets open up music production to a much larger audience, further nurturing and growing next generation audio content creators? Possibly, agrees Loftis, but “there’s a bit of a chicken-or-egg situation there, too. Samsung has promoted a high-performance audio tablet, porting JACK over to Android, and apparently it works. But they’re still held back by a couple of things: Samsung didn’t rewrite anything in Android; they’re still using what Android provides them. No one can write software for a really good audio back-end until there is one. If you make a very good audio back-end and there are no apps out there, it’s not a sellable product. I’m just looking at this from the developers’ standpoint. It’s hard for me to make a really cool app because even with Samsung’s changes, the performance of Android isn’t as good as iOS. I would be motivated if it was as good, but I would still have to consider that only those with the latest Samsung tablet could run it.”
Rather than apps, perhaps the future of Android in pro audio is Android-enhanced hardware. “A USB front-end is already costing more than a tablet or computer,” poses Loftis. “The next step is, you buy your USB I/O device with Android on it, which costs nothing to license. Suddenly you can do everything any other tablet can do: drag and drop, wi-fi, use a keyboard, etc. You wouldn’t buy a tablet like an iPad; you actually buy a tablet with four XLR ins, four XLR outs, etc. That could happen—an I/O device featuring Android. It’s definitely an interesting prospect for hardware manufacturers.”
Meanwhile, it’s amazingly quiet in pro audio circles on the topic of Android. Why? “It’s not a solution to a problem we have,” offers Loftis. “It’s just that people want to use their iPads. I’ve heard the sentiment, ‘Put an iPad in it and it’ll sell.’ If you can afford a $700 iPad, you can afford a $200 guitar input dongle. And eventually, you’ll go around looking at mixers. ‘What kind?’ ‘The kind that works with my iPad, of course!’ Really, that’s a lot of what’s going on.”
“If it wasn’t just about the iPad, [pro audio companies] would stick an Android tablet in there, one that they could have manufactured specifically for their needs,” he continues. “If Amazon can sell a cheap tablet for, say, $50, they’re being made for $20. A company could have them made and put their software in; users could pull it out of the slot, it would be company branded and the company would have a lot more control over it—that is, if it isn’t just about the iPad. And as controllers, Android tablets are already superior because they’re cheaper, customizable with better battery life, Wi-Fi range, or any number of features.”
Filling The Void
It’s arguable that Android, the most accessible OS in consumer electronics history, could allow for more pro audio products to, for example, assist budding producers who already have an Android tablet, or for those who could more easily afford an Android machine. And despite how much we all love iOS, a little competition is never a bad thing. But for now, the competition is still in the locker room.
1 “Android, iOS Gobble Up Even More Global Smartphone Share,” PC World (http://www.pcworld.com/article/2465045/android-ios-gobble-up-even-more-global-smartphone-share.html)
2 “Gartner Analysts: 2.5B PCs, Tablets And Mobiles Will Be Shipped in 2014, 1.1B Of Them On Android,” TechCrunch (http://techcrunch.com/2014/01/07/gartner-2-5b-pcs-tablets-and-mobiles-will-be-shipped-in-2014-1-1b-of-them-on-android/)
3 “Sample rates and resampling: Why can’t we all just agree?” Google I/O 2014 (https://www.google.com/events/io/io14videos/17fb53da-42e0-e311-b297-00155d5066d7)
* Application programming interface, specifically audio I/O software: http://en.wikipedia.org/wiki/Application_programming_interface
** The JACK Audio Connection Kit is audio/MIDI routing software—a sound server for Linux, iOS/OS X and Windows: http://en.wikipedia.org/wiki/JACK_Audio_Connection_Kit. Meanwhile, the iOS-exclusive Audiobus is in v2.