Audio-Focused APIs Mark Promising Direction for Android
Aside from Android, I’ve maintained a healthy interest in writing and producing music over the years, and unfortunately I’ve never quite found a compelling use for my smartphone as part of my audio set-up. Annoyingly for me, Apple have always had a bit of a head start here, or rather a greater focus on the world of music production, which makes sense considering how ubiquitous Macs are in studios, especially since Apple’s purchase of the Logic DAW back in 2002. But with Android M, all this could change.
Google announced a huge number of API changes at I/O, but the ones I’d like to focus on in this article are purely in the audio department. Some, like the added support for playback and encoding of 32-bit/96 kHz files, are relatively minor. These are the kind of files you’d be creating in a home or professional studio set-up, or the kind of hi-res tracks that some audiophiles collect. Combined with the existing support for USB DACs, this could encourage a good number of power users to switch platforms, especially considering how cheap the storage options in some Android smartphones are compared with Apple’s equivalents.

In a similar vein, there has been some existing support for other audio-related USB host peripherals (microphones, speakers, and so on), but with proper audio capture support, Android M should make it far easier for developers to create DAW-style apps for recording on the move. Such apps have existed in the past thanks to the sheer tenacity of talented developers, but that has meant writing code to handle all of the data conversions and formats that Android didn’t support natively. The new multi-channel stream support over USB should not only help with these portable audio workstation implementations; I’m also eagerly awaiting the day I can hook an Android phone or tablet full of stored movies into my AV receiver and enjoy Dolby or DTS sound through my home cinema set-up. That would be excellent, relieving the strain on my laptop and external hard drives, and making high-quality films available wherever I am, at the best quality I can enjoy at the time.
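To put the 32-bit/96 kHz support in perspective, here’s a rough back-of-envelope sketch of the data rates involved. This is plain Java, not an Android API; `bytesPerSecond` is a made-up helper for illustration, but the arithmetic (sample rate × bit depth × channels) is just how uncompressed PCM works:

```java
// Back-of-envelope numbers for uncompressed PCM audio.
// Throughput = sampleRate * bitDepth * channels, in bits per second.
public class HiResMath {
    // Bytes per second for an uncompressed PCM stream (hypothetical helper).
    static long bytesPerSecond(int sampleRate, int bitDepth, int channels) {
        return (long) sampleRate * bitDepth * channels / 8;
    }

    public static void main(String[] args) {
        long hiRes = bytesPerSecond(96_000, 32, 2); // 32-bit/96 kHz stereo
        long cd    = bytesPerSecond(44_100, 16, 2); // CD quality, for comparison
        System.out.println("Hi-res: " + hiRes + " B/s"); // 768000 B/s
        System.out.println("CD:     " + cd + " B/s");    // 176400 B/s
        // A five-minute hi-res track is roughly 768000 * 300 bytes, about
        // 220 MB uncompressed, which is why cheap expandable storage on
        // Android is a genuine advantage for this audience.
    }
}
```

That works out to a hi-res stream carrying over four times the data of CD audio, so the storage-cost point above isn’t academic.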
But I believe one of the best additions this year is the set of APIs for full MIDI support, and specifically the fact that they process signals in both directions. MIDI has been around since the 1980s and has been inextricably linked to the composition of countless songs since then. It’s still extremely popular today because the MIDI standard is easy to work with and can be used to control almost anything, from live guitar pedal set-ups to recording sampled drums. Accepting MIDI signals into your smartphone is one thing, and pretty essential for any kind of sequencer or DAW solution, but the ability to output standard MIDI is the really important aspect here. Your smartphone or tablet is essentially one big screen, and the ability to interpret multiple touch events and convert them into useful data is incredibly versatile. There are endless applications in the audio-production arena for this kind of MIDI controller; once linked to a synthesizer or similar plugin on your computer, the developer (or user) could map different controls to areas of your device’s screen, allowing it to become an instrument in its own right. You can play your smartphone and then write a song with it.
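A minimal sketch of what “outputting standard MIDI” means at the byte level. The packing below is plain MIDI 1.0 and runs as ordinary Java; on Android M you would open a device through `android.media.midi.MidiManager` and push these bytes out via `MidiInputPort.send()` (the port name and the exact wiring are the Android-specific parts, noted here only in comments):

```java
// Sketch of the raw bytes behind the new MIDI support. MIDI 1.0 channel
// messages are three bytes: a status byte, then two 7-bit data bytes.
public class MidiNotes {
    // Build a Note On message: status 0x9n (n = channel), key, velocity.
    static byte[] noteOn(int channel, int note, int velocity) {
        return new byte[] { (byte) (0x90 | (channel & 0x0F)),
                            (byte) (note & 0x7F),
                            (byte) (velocity & 0x7F) };
    }

    // Note Off uses status 0x8n; velocity is commonly zero.
    static byte[] noteOff(int channel, int note) {
        return new byte[] { (byte) (0x80 | (channel & 0x0F)),
                            (byte) (note & 0x7F), 0 };
    }

    public static void main(String[] args) {
        byte[] msg = noteOn(0, 60, 100); // middle C on channel 1
        // On Android M (API 23) you'd send this through a port obtained from
        // MidiManager.openDevice(): inputPort.send(msg, 0, msg.length);
        System.out.printf("%02X %02X %02X%n", msg[0], msg[1], msg[2]);
    }
}
```

Mapping a touch event to a `noteOn()`/`noteOff()` pair is essentially all a screen-as-instrument app has to do, which is why the new APIs open the door so wide.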
The element that could cripple this functionality is latency. The time it takes for a signal to enter a device, be processed, and come back out is of the utmost importance in the audio-production world, and Android hasn’t had a particularly good reputation in this regard so far. This counts for a lot if you’ve integrated a tablet into your recording workstation and are attempting to lay down a track where the notes you input reach your computer half a second after you play them. It counts for even more when you’re trying to do it live, in front of a room full of people. With the Android M preview on Nexus devices the results are a little mixed; some users are reporting a decrease in latency, whilst others aren’t seeing any improvement. However, we’re still on the first iteration of this software, we know it’ll be updated regularly before it is released into the wild, and now that the APIs are in place we should start to see some improvements.
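Where does that latency actually come from? Audio is processed in fixed-size buffers, and every buffer in the chain adds a delay of its length divided by the sample rate. A quick hypothetical sketch (plain Java, not an Android API) shows how quickly those stages stack up:

```java
// Each buffer stage delays the signal by bufferFrames / sampleRate seconds.
// A round trip (capture + playback, each typically double-buffered) stacks
// several of these, which is how devices end up tens of milliseconds behind.
public class LatencyMath {
    // One-way delay in milliseconds contributed by a single buffer stage.
    static double bufferLatencyMs(int bufferFrames, int sampleRate) {
        return 1000.0 * bufferFrames / sampleRate;
    }

    public static void main(String[] args) {
        // A 256-frame buffer at 48 kHz adds ~5.3 ms per stage; four such
        // stages on a round trip is already over 20 ms, and pro players
        // start to notice anywhere beyond roughly 10 ms.
        System.out.println(bufferLatencyMs(256, 48_000));
        System.out.println(bufferLatencyMs(480, 48_000)); // 10.0 ms
    }
}
```

This is why shrinking and reducing the number of buffers in the audio pipeline, rather than raw CPU speed, is what the platform work has to target.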
This brings me to my main point, and it’s good news: Google is clearly aware of this situation. Android M is looking more and more like a huge maintenance release as it gets closer, so where Lollipop brought forth abundant features and a massive redesign, M brings some glue to patch up the holes and a rag to polish the corners. This is especially important when you’re in a permanent race against a company like Apple, and with Android now widely distributed across the globe and all of the most important boxes ticked, the smaller demographics can start to be catered for. Sure, adapters exist for plugging your guitar into your Android device just as on iOS, but until now they’ve only been available for a handful of mostly Samsung devices, limiting your options significantly. It’s this kind of barrier that makes Android less attractive to the bedroom producer, and makes Apple’s wide offering of third-party products look far more professional. Once Android M makes its way to the most popular flagships, which of course may take a little while, developers will have a much easier time creating audio applications that work seamlessly for the user, and most importantly they will know that any device running the latest version of the OS should work. Android has always been playing catch-up in this specific area, but with the help of some creative developers it will soon be a viable option for those who work with audio, and I think Google have made an excellent decision in tackling this absence head-on.
So, how will you use the new audio-focused APIs in Android M? Have you been jealous of any apps that haven’t made their way to Android because of these missing features? Let us know in the comments!