Biggest Bottlenecks When Building Android From Source

Update 4/19 12 pm CT: Clarified build times are ccache build times.
Update 4/20 9:17 am CT: Build 3 was most certainly not RAID 1. Corrected that error.

In 2012 I started building kernels, relying on my trusty Core 2 Quad Q9550 to do it. If that isn't cringeworthy enough on its own, the fact that I did it in a VM inside of Windows probably will be for most folks who build Android from source.

A virtualized Ubuntu environment does not perform as well as a native one, and how painful that is became apparent when a kernel took over 2 hours to build. Since I wanted to start building Android from source the following year, I knew my current hardware wouldn't cut it, and so began a long and still-continuing journey to reduce that ever-growing build time.


In the years since then I've been fortunate enough to test on multiple form factors and platforms. This matters because build configurations are not one-size-fits-all with Android. An application developer may not need the same configuration as a game developer, and someone who only builds kernels may not need to spend as much as someone who needs to build a full Android ROM from source in a very short amount of time. And what about OS selection: what can (and can't) be used right now? I hope to explore this as well, especially with Microsoft and Canonical working to bring a full-fledged Bash to Windows 10.

To kick this series off right, we have to find the biggest potential bottlenecks in building AOSP projects from source. You don't often go shopping for a PC or upgrades without knowing where to put your money. So, based on 3 years of research and quantifiable results, I'm ready to share what I've found. Now the expected disclaimer: these findings are based on personal experiences and can't possibly factor in all combinations. Those of you with your own build configurations, sound off and let us know how your builds are faring! All times refer to builds with ccache enabled and populated; builds were usually twice as long when ccache was not yet populated.
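For context, enabling ccache in an AOSP/CyanogenMod tree comes down to a couple of environment variables. A minimal sketch follows; the cache directory and the 50GB cache size are illustrative choices, not requirements:

```shell
# Tell the AOSP build system to wrap compiler calls in ccache.
export USE_CCACHE=1
# Put the cache on your fastest disk; ~/.ccache is the common default.
export CCACHE_DIR="$HOME/.ccache"
# CM 13.0-era trees ship their own ccache binary for setting the cache size,
# run from the source tree root, e.g.:
#   prebuilts/misc/linux-x86/ccache/ccache -M 50G
echo "ccache enabled, cache at $CCACHE_DIR"
```

The first, uncached build still compiles everything; the payoff shows up on subsequent builds, which is why the times quoted here assume a populated cache.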

Disk I/O: I have to tip my hat to Cyanogen's Tom Marshall – also a member of Team Kang – for pointing me in this direction last year. I honestly didn't believe him when he told me this would be the bottleneck over the CPU. But over the past 6 months I've been able to back this up with quantifiable data. With higher-end CPUs (such as most desktop Intel Core i7 models) this is the top bottleneck your system will experience.

Let's take four build configurations that I have tested this on. For each, I'll highlight the CPU, RAM, and storage:

  • Build 1, my “un-upgraded” PC: an Intel i7-4790K with 32GB of DDR3-2400 RAM, a Samsung 840 EVO 250GB as the primary drive and an older Micron P400E 100GB.
  • Build 2, the upgraded version of Build 1: an Intel i7-5960X overclocked to 4.0 GHz, 32GB of DDR4-3200 RAM, and a Samsung SM951 512GB AHCI M.2 SSD alongside the two previous SSDs. Full build specs are on PCPartPicker.
  • Build 3, a recent user build: an Intel i7-5820K overclocked to 4.2 GHz, 16GB of DDR4-2400, and two Samsung 840 EVO 120GB SSDs in a RAID 0 (striped) configuration.
  • Build 4, a recent server build: an Intel Xeon E3-1270 v5 at stock speeds, 32GB of DDR4-2133, and a Samsung 950 Pro 512GB NVMe M.2 SSD along with four SATA Samsung enterprise SSDs in a RAID 5 array.

Just looking at those, which would you guess achieved the lowest build time? Which came second? To my shock the lowest build time didn't come from the second configuration but from the third, at just under 14 minutes for building CyanogenMod 13.0. So surely the dominating CPU would take second place, right? Wrong again. Build 4, which I just finished testing, took just over 25 minutes! Only then comes my current build, 2 minutes slower than a system with half the cores and threads but with a multi-SSD array, whereas my SSDs were standalone drives. The SM951 is also known to throttle if it gets too hot, which could be a very real factor here. The first and slowest build took about 30 minutes, one of the only times I built CM 13.0 on it; I have heard of similar configurations doing it in 27.

SSDs also used to be difficult to get, so there was very little discussion on the topic. However, prices have dropped dramatically in both retail and secondhand markets over the last year. With 120GB SSDs now under $50, adding one to a system is not the barrier it once was. Traditional hard drives will do the job as well, but users are more likely to hit this bottleneck before others if they're not using SSDs.
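If you're unsure whether your disk is the limiter, a rough sequential-write check gives a first impression. A real benchmark such as fio is far more representative; the 64MB size below is only to keep the sketch quick:

```shell
# dd prints the achieved write speed when it finishes; conv=fdatasync
# flushes the data to disk so the page cache doesn't inflate the number.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=64 conv=fdatasync
rm -f "$tmpfile"
```

Note that an Android build is mostly small random reads and writes across millions of files, so sequential numbers are only a coarse proxy for how a drive will behave mid-build.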

CPU: When I said above that the top bottleneck is disk I/O, it baked in an assumption that may not always hold: each of those builds featured an Intel Core i7. As I found with the Xeon server, the disk keeps up, but the build then keeps all 8 CPU threads at high utilization through the heaviest build stages. And try as I might, without a RAID array like the one above my Haswell-E is nowhere close to fully utilized for most of the build. So if you're looking for the best bang for your building buck, consider the Intel i7-5820K.

True, it's X99, so the motherboard may be more expensive than a Z97 board; but we're also still in year one of the X99 cycle. Broadwell-E prices are also expected to remain similar to Haswell-E at release, meaning you should be able to buy into the enthusiast segment for almost the same price as an i7-4790K or i7-6700K.

On Intel there isn't much reason to go beyond the 5820K at the moment, as you can already get impressive build times with it. For the most part, a higher core/thread count, along with higher clock speeds, will get you a faster build time. An i7-4770R in a GIGABYTE BRIX last year averaged me a 42-minute build. While not the fastest, it suited my needs and gave me a dedicated low-power configuration. You'll find the same with AMD APUs: while they may not currently perform as well as their Intel counterparts, they will easily get the job done, usually at a lower price point than Intel. This is a situation I'm watching closely, because if the rumors are true, Zen-based APUs may close that gap significantly.
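Core and thread counts matter because the build is heavily parallel: the usual rule of thumb is one make job per hardware thread. A minimal sketch follows; "bacon" as the full-ROM target is CyanogenMod convention, and your tree may use a different target:

```shell
# nproc (coreutils) reports the number of available hardware threads.
jobs=$(nproc)
# In a CM tree you would run something like: make -j"$jobs" bacon
echo "Would build with make -j${jobs}"
```

This is also why a 6-core 5820K can beat pricier parts once the disk keeps up: with enough jobs in flight, every extra thread gets used.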

There is an upshot for those of you who choose to remove these bottlenecks, one that applies to home users more than the office: general system performance will increase as well. Gamers in particular will find that upgrading to address these bottlenecks will in almost all cases also improve game performance. While it may not have won the fastest build time, that second build delivered an unexpected surprise: a 30-second load time in Just Cause 3 when many others were complaining about load times measured in minutes. In the end these build configurations are quite high-end and may be overkill for many… but at least the argument that more cores will mean faster builds has finally been put to rest.

Since this is only the beginning we hope readers will chime in and share their build experiences on various configurations. As a reader do you want to see more discussions on these types of topics? Sound off in the comments below!

About author

Daniel Moran

Former PC Hardware Editor for XDA.
