
Posts Tagged: security


Earlier today, we talked about how the Replicant team found a potential backdoor in Samsung’s proprietary radio software. As demonstrated in a proof-of-concept attack, this allowed certain baseband code to gain access to a device’s storage under a specific set of circumstances. But upon closer inspection, this backdoor is most likely not as bad as it was initially made out to be.

A few hours after posting our previous article on the alleged backdoor, a highly respected security expert who wishes to remain anonymous approached us, stating that the way the proof-of-concept attack was framed by the Replicant team was a bit misleading. Essentially, it boils down to the POC requiring a modified firmware with security features disabled. Thus, if a user is running an updated version of the official firmware, this attack will not work. To that end, the Replicant team even states in their write-up that SELinux would considerably restrict the potential files that the modem can access, such as those on the /sdcard partition.

Now, another highly trusted security researcher (XDA Recognized Developer djrbliss) has gone on record with Ars, stating that there’s “virtually no evidence” that this is indeed a true backdoor, although his reasons are a bit different. There is absolutely no indication at this time that the baseband file access can be controlled remotely. Rather, this is only a “possibility,” since the baseband software is proprietary. Instead, it’s far more likely that this was only ever intended to write radio diagnostic files to the /efs/root directory, as that is the radio user’s home directory.

In summary, we shouldn’t rush to replace our Samsung phones just yet. There is absolutely no evidence that this can be controlled remotely. And even if it were possible, SELinux, which is set to Enforcing in stock firmware, would restrict the radio user’s access.


Google has been on a roll with a few high profile acquisitions and sales in the past month. Not too long ago, we talked about how the company had acquired the smart thermostat and carbon monoxide detector manufacturer Nest for $3.2 billion, and how this could signal the coming of future home automation products from the Mountain View company. Then, we were all relatively surprised when we saw Lenovo take money pit Motorola from their hands for a cool $2.91 billion. Now, Google has gone ahead and acquired the SlickLogin team.

For the unaware, Israeli-based SlickLogin pioneered a unique authentication method designed to make traditional security measures a thing of the past. Rather than using traditional passwords or identifiable biometrics, SlickLogin’s method utilized ultrasonic sounds emitted from a user’s computer, which were then used to verify that the person trying to gain access is indeed you. Needless to say, this falls in line with Google’s continual push to encourage stronger passwords and two-factor authentication.
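SlickLogin has never published its protocol, but the general idea can be sketched: a server-issued one-time token is modulated into near-ultrasonic tones played by the computer and picked up by the phone’s microphone. Below is a deliberately simplified illustration of such a scheme; the frequencies, token size, and encoding are all my own assumptions, not SlickLogin’s:

```python
# Conceptual sketch of sound-based login (hypothetical; SlickLogin's actual
# protocol is proprietary). Each bit of a one-time token is mapped to one of
# two near-ultrasonic tones, which most laptop speakers can emit but humans
# barely hear.

F0, F1 = 18_000, 19_000  # Hz: tone for a 0 bit and a 1 bit (assumed values)

def encode_token(token: int, n_bits: int = 16) -> list[int]:
    """Map each bit of the token to a tone frequency, MSB first."""
    return [F1 if (token >> i) & 1 else F0 for i in range(n_bits - 1, -1, -1)]

def decode_tones(tones: list[int]) -> int:
    """Recover the token from a sequence of detected tone frequencies."""
    token = 0
    for f in tones:
        token = (token << 1) | (1 if f == F1 else 0)
    return token
```

A real implementation would also need error correction and protection against replayed recordings, which is presumably where the proprietary part of SlickLogin’s work lies.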

The details surrounding the acquisition are unfortunately sparse at present, but we will update this article as soon as more information is known. Let us know your thoughts in the comments below!

[Source: SlickLogin | Via TechCrunch]


Don’t you hate it when you are stuck in a crowd and you need to unlock your mobile device? Sure, the vast majority of the time, nobody’s genuinely trying to sneak a peek at your lock screen code—but you never truly know who’s watching. Because of the potential danger of having others learn our lock screen codes, we all try various “techniques” to thwart would-be prying eyes. But let’s face it—if somebody really wants to stealthily learn your lock screen code, there’s a good chance that they’ll find it.

Rather than using a single, predefined unlock code, wouldn’t it be nice if you could have a time-based PIN that changes so that a password that works one minute won’t work the next? And wouldn’t it be nice if this PIN relied on something like the time of day so that you could never accidentally forget? Well, that’s exactly what XDA Senior Recognized Developer jcase has done with his new application TimePIN.

TimePIN does exactly as its name states. It allows you to enter a 4-digit PIN based on the current time of day to unlock your device. And if somebody happens to sneak a peek at your unlock code, it won’t do them any good unless they somehow figure out that the PIN changes based on the time of day. Obviously for this to work, you must grant the application Device Administrator status. However, the process is painless, and you will be up and running in no time.

What if you want to make things a bit more complicated for those would-be hackers? Never fear, as jcase has you covered. Through a series of modifiers, you can obfuscate the original time from the generated PIN code. For example, if the time is 12:34 and you enable the “reverse” modifier, the code will become 4321. And for an even greater degree of security, there are also other modifiers available such as mirror (12344321), double (12341234), and offset (add a predefined offset to the PIN). What’s more, these additional modifiers can be stacked together for a seriously complicated password that only you could ever know.
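To make the scheme concrete, here is a rough sketch of how the base PIN and the modifiers described above could be computed. This is my own illustration built from the descriptions, not TimePIN’s actual code:

```python
# Sketch of time-derived PIN modifiers (illustrative only, not TimePIN's code).
# The base PIN is the current time rendered as four digits, HHMM.

def base_pin(hour: int, minute: int) -> str:
    return f"{hour:02d}{minute:02d}"

def reverse(pin: str) -> str:          # 1234 -> 4321
    return pin[::-1]

def mirror(pin: str) -> str:           # 1234 -> 12344321
    return pin + pin[::-1]

def double(pin: str) -> str:           # 1234 -> 12341234
    return pin + pin

def offset(pin: str, n: int) -> str:   # add a fixed offset, keeping 4 digits
    return f"{(int(pin) + n) % 10000:04d}"
```

Stacking modifiers, as the app allows, would simply be function composition, e.g. `double(reverse(base_pin(12, 34)))`.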

TimePIN is available for free and comes with the time-based PIN functionality, as well as the reverse modifier. However, a small $1.99 in-app purchase unlocks all of the additional modifiers for life. Fend off those pesky password snoopers by heading over to the application thread to get started.

Those who’d rather see it in action before jumping on the bandwagon should check out the demo video made by jcase himself below:


Remember all those times when we here at the XDA Portal have told you that privacy is important? Despite many people thinking that we are all just a bunch of nerds wearing tinfoil hats, we do have our reasons to be somewhat paranoid. After all, we’re quite sure that you wouldn’t like the idea of having somebody snoop around your cell phone for all the naughty pictures and messages sent to and from your significant other. If you couldn’t care less about who reads the information on your device, then you might as well just go ahead and install Facebook. Yes, the Facebook app for Android. Yes, the free one from the Play Store. But, wait… Why would this app even be highlighted here? If this caught your attention, you will be glad to know that Facebook now has access to yet another part of your mobile life: your SMS and MMS messages.

Those of you in the US (and many abroad who are avid supporters of organically grown food) will likely remember an episode last year when the Monsanto Bill was passed along with a massive binder of bills and amendments by the US Government. What the bill actually does is of no relevance here; what matters is how it became law. As it turns out, someone “slipped” the bill into the massive group of fixes and proposals and “no one” noticed. Why in the world am I bringing this up? Because Facebook decided that it would be a good idea to try and do the same thing with Android users.

According to an article that went up on reddit yesterday, a blogger found out that the Facebook Android app was about to automatically update itself when he was prompted to accept some new permissions that had just been added to the app. The very first one: “Read your text messages (SMS or MMS).” It is no secret that Facebook makes a living out of collecting information and using it for targeted advertising. They already have access to all the garbage that we post on our walls, all the times we “check in” anywhere, and all the pictures and videos that we upload, not to mention our contacts and what we like and dislike. It is not like they don’t already have access to pretty much every part of our lives, so why do they need more?

The comments in the reddit thread linked above seem to suggest that the permissions have been around since the end of December, so it is entirely possible that the upgrade came in the form of a staged rollout to prevent everyone from noticing the change at the same time and causing a fuss. As you would guess, Facebook does have somewhat of an official explanation for this permission: scanning for SMS confirmation codes. That said, the permission is far too broad to grant to an app whose main function is to collect information about you. This is where selective permissions tools such as App Ops and XPrivacy come in handy. And based on some of the other permissions in the screenshot—well, let’s just say that if you use any app on your device, Facebook will know about it. Looking at those permissions, I have gotten rid of Trojans on my PC that used fewer permissions.

So, what options do you have as a user? For starters, if you truly do enjoy using the Facebook official app (not entirely sure why anyone would), you could either stay with it and live with the fact that Facebook will know everything about you or simply try to uninstall it and install a previous version that does not require these permissions. You could also try to block some of these permissions with various privacy suites. And last but not least, you could simply opt for a different Facebook client such as Fast. On the flip side, uninstalling Facebook might give you a nice boost in productivity, as well as battery life.

Being social on the web should not have to be something that we’re afraid of. But companies like Facebook are making it harder every single day, as we cannot even have privacy in the confines of our own virtual domain. Perhaps the best way to be social and share your experiences without being virtually cavity searched is to host your own blog without a “Facebook” in the domain. Then again, you will still have the NSA to look out for, but that is a rant for another day. You can find the original blog at the following link.


About a month ago, we talked about a recent study (PDF) stating that most security vulnerabilities on Android are ultimately due to OEM customizations. And surprise, surprise—this can even happen on devices with technologies designed to protect users.

Late last month, security researchers at Israel’s Ben-Gurion University of the Negev discovered a security vulnerability that allowed a user-installed application to intercept unencrypted network traffic. Rather than describing this as a flaw or bug, Samsung labels the vulnerability a classic Man in the Middle (MitM) attack, which could be launched at any point on the network.

Samsung was also quick to state that this type of attack can be thwarted using existing KNOX technology (or the device-wide VPN support in stock Android):

Android development practices encourage that this be done by each application using SSL/TLS. Where that’s not possible (for example, to support standards-based unencrypted protocols, such as HTTP), Android provides built-in VPN and support for third-party VPN solutions to protect data. Use of either of those standard security technologies would have prevented an attack based on a user-installed local application.

KNOX offers additional protections against MitM attacks. Below is a more detailed description of the mechanisms that can be configured on Samsung KNOX devices to protect against them:

1.    Mobile Device Management — MDM is a feature that ensures that a device containing sensitive information is set up correctly according to an enterprise-specified policy and is available in the standard Android platform. KNOX enhances the platform by adding many additional policy settings, including the ability to lock down security-sensitive device settings.  With an MDM configured device, when the attack tries to change these settings, the MDM agent running on the device would have blocked them. In that case, the exploit would not have worked.

2.    Per-App VPN — The per-app VPN feature of KNOX allows traffic only from a designated and secured application to be sent through the VPN tunnel. This feature can be selectively applied to applications in containers, allowing fine-grained control over the tradeoff between communication overhead and security.

3.    FIPS 140-2 — KNOX implements a FIPS 140-2 Level 1 certified VPN client, a NIST standard for data-in-transit protection along with NSA suite B cryptography. The FIPS 140-2 standard applies to all federal agencies that use cryptographically strong security systems to protect sensitive information in computer and telecommunication systems.  Many enterprises today deploy this cryptographically strong VPN support to protect against data-in-transit attacks.

Now before we start bashing Samsung’s KNOX technology more than necessary, let’s remember that these kinds of attacks can affect non-KNOX devices as well. Furthermore, sending personal data in unencrypted form is simply asking for trouble. If anything, this should serve as a reminder to use encrypted transfers and connections whenever possible and to be wary about where we store and input our data.

[Source: Samsung KNOX Blog | Via AndroidPolice]


Here at XDA, we focus on bringing you news about what developers are up to on the forums or significant changes in the mobile industry. Today though, I bring an analysis of some recent news about goings-on in the security world in relation to a particular mobile application you may or may not have heard of: Snapchat.

Snapchat is best described as a gimmick application, widely used by teens to send each other photos and short videos, which “self destruct” after viewing, preventing copies from being made. Before the security world tries to spear me on a stick and roast me, allow me to point out that Snapchat is an entirely flawed application. It’s not possible to achieve what they are trying to do, as they are trusting a device you control (your phone) to prevent you from copying data they send to it. As such, Snapchat has been broken. Many. Times. Over. On iPhone, and Android, and even via HTTP interception.

Four months ago, a group of security researchers known as Gibson Security identified a flaw in the Snapchat server API (the interface through which the Snapchat application communicates with the server), stemming from the feature that allows users to find other users based on their mobile phone numbers. As the intention was for the application to upload a user’s contact list in order to find friends using the service, the API permitted a rapid rate of phone number queries. This allowed anyone to rapidly query the Snapchat service with phone numbers, asking if each number was in use by a user of the service, and if so, the associated username of that user.

Gibson Security found the original flaw in July 2013 and disclosed the issue to Snapchat. Four months later, there was still no response from Snapchat. The researchers even tried applying for one of the jobs Snapchat was advertising! (source) On December 24th, Gibson Security released full documentation of the Snapchat API. The API, while not documented, is not in any way hidden from a competent user, as the Snapchat application simply sends requests to the Snapchat servers using a particular format. Unfortunately though, Snapchat seem to be great believers in “security through obscurity,” sending unfounded takedown requests against people working to understand their API. That suggests Snapchat has something to hide. After all, reliable, robust, and professional services make their APIs available freely and openly for people to use.

What followed was Snapchat’s somewhat lackluster statement on the matter, which amounted to saying, “they were right, but we don’t think it’s a big deal, so we won’t really do anything about it, short of hiding behind some words about API query limits.” As anyone competent in security can tell you, putting limits on this API is a short-term stop-gap (if done correctly), but it isn’t a proper solution. The proper solution is to redesign this functionality to prevent attackers from gaining any information about users by simply guessing phone numbers. It’s a shame that Snapchat’s team have probably never even seen the word “security,” let alone used it with any meaning.
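For reference, the kind of query limit Snapchat hinted at is usually implemented as a token bucket, which is straightforward to build but, as noted, only slows enumeration down rather than preventing it. A minimal sketch (all parameters are illustrative):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the usual shape of an API query
    limit: each client gets `rate` lookups per second, with short bursts
    allowed up to `capacity`. A stop-gap against enumeration, not a fix."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Even with such a limiter per account or per IP, an attacker with many accounts or addresses can still grind through the phone number space; only removing the number-to-username lookup (or requiring mutual contact) actually closes the hole.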

They also make some really rather bold statements, such as:

“We are grateful for the assistance of professionals who practice responsible disclosure and we’ve generally worked well with those who have contacted us.”

Given that this case indicates the opposite (more than four months without a response to Gibson Security), I refuse to believe this, and I implore you to do so as well. In fact, I would love to hear from any security researcher who has had any kind of positive interaction with Snapchat, at any time. I genuinely would, as it would prove that perhaps Snapchat are not lying through their teeth in a moment of self-preservation.

On the 1st of January, a website appeared, offering for download 4.6 million Snapchat users’ phone numbers and associated usernames. While the resulting database had censored the final 2 characters of each phone number, those releasing the data said they would give access to the full, uncensored data, if approached with reasonable requests. That means there are now 4.6 million users of Snapchat with their phone numbers available to the world. While some naive and technically inept news sources report that the files and associated website have been “taken down,” as we all know, nothing is ever fully deleted from the Internet, and the files remain easily accessible for those seeking them. Unfortunately for anyone whose details were in that database, the damage has now been done.
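It is worth spelling out why censoring the last two digits offers so little protection: each censored entry expands to just 100 candidate numbers, which is trivial to enumerate and cross-check against other data sources. A sketch (the “XX” placeholder format is my assumption for illustration; the leaked file’s exact format may differ):

```python
# Each censored entry like "555010XX" hides only its final two digits, so an
# attacker can simply regenerate every possibility and test them elsewhere.

def candidates(censored: str) -> list[str]:
    """Expand a phone number whose last two digits were censored with 'XX'."""
    assert censored.endswith("XX")
    prefix = censored[:-2]
    return [f"{prefix}{i:02d}" for i in range(100)]
```

One hundred guesses per victim is nothing; combined with the still-open lookup API or a public phone directory, the censorship is effectively cosmetic.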

Two to three days later (depending on timezones and the precise time of the release of the data), Snapchat finally raised their heads from the sand to make a somewhat pointless blog post. They did not apologise for the data breach, nor did they apologise for being naive or mishandling user data. In fact, they didn’t apologise for anything. They were rather quick to apportion blame though:

“On Christmas Eve, that same group publicly documented our API, making it easier for individuals to abuse our service and violate our Terms of Use”

Unfortunately for Snapchat, designing a broken and insecure web service and then calling the API that interfaces with it “private” is not going to help. No serious hacker (who has bad intentions for your users’ data) is going to read your terms of use and say, “I won’t hack them then… They asked me nicely not to do it.” Your terms of service should never be a front-line protection. Imagine, for instance, if Snapchat stated in their terms of service that “you must not view messages intended for other users,” and then simply made every message publicly visible to everyone. I know it sounds far-fetched and silly, but that perhaps puts Snapchat’s naive approach to security into perspective.

Indeed, Sophos appear to concur with my thinking here. Ultimately though, if users want to be protected from these kinds of attacks, I have two key pieces of advice. Firstly, give out less information. There is no reason for Snapchat to require or ask for your phone number, other than to enhance their user base and get you using Snapchat more. Mobile phone numbers are personal information, and you should really stop handing them out to services (sometimes without your knowledge). Take a look at XPrivacy by XDA Senior Member M66B to control access to this kind of data.

Secondly, and arguably more importantly, companies need to protect your data. I would say they should protect it as much as their own data, though given Evan Spiegel’s (Snapchat founder and CEO) own phone number and username were in the data breach, I suggest they don’t take enough care of their own information either. Users should have the expectation that ANY service being actively marketed and encouraging users is secure, and that this security has been tested through the company employing security experts, or at least getting suitable levels of peer review on their source code. Just think—if Snapchat’s web service was open source, this would have been fixed months ago, if that bug had even got through the scrutiny of the open source community in the first place.

To close, I offer you the following questions:

  1. How incompetent and complacent must a company be to ignore a security advisory of any kind, for 4 months?
  2. Why would a company such as Snapchat, in dire need of security knowledge, ignore a job application from a group of security researchers?
  3. Do Snapchat seriously believe that a malicious attacker (who wouldn’t tell anyone they obtained this information) will avoid taking advantage of their own security weaknesses, just because they ask politely for people not to? (Imagine asking another country nicely to not invade you – it doesn’t work)
  4. What can Snapchat do to regain user trust? Aside from working with the security community (of which, full disclosure, I am a member) in an open manner and fixing issues, Snapchat need to apologise to their users and show humility here. Evan Spiegel is a college student, and he needs to bring in people who know about security. And I know plenty of college students who are experts and could have prevented this.

Further vulnerabilities were found in Snapchat even while this article was being written. It appears that the original issue only scratches the surface of the problems with their service. I do hope they take this opportunity to get a competent security review of all their services and code carried out, so that they can protect their users and ensure their data is properly protected in the future.


While secure text messaging systems have been available on Android for quite some time, many users (even power users) have failed to set them up on their devices. This isn’t because privacy isn’t important; rather, it’s often one of those things you don’t think of until it’s too late.

Now, CyanogenMod is taking a great first step by incorporating an existing and open source secure text messaging platform into CyanogenMod. The integration comes in the form of TextSecure, which is maintained by Open WhisperSystems and lead engineer Moxie Marlinspike. Moxie is also in charge of the CM integration of the app, ensuring functionality and a degree of security. New to the CM implementation is SMS middleware functionality. This functions similarly to the Google Voice integration in CyanogenMod.

The way it will work for end users is simple: If you are running CM and send a message to another CM or TextSecure user, your messages will be automatically encrypted and secured. However, if your messages are sent to recipients without either, a standard unencrypted text message will be sent.
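In pseudocode terms, the routing decision described above amounts to something like the following. This is purely illustrative Python with made-up names; the real middleware lives in CM’s telephony stack, and TextSecure’s key exchange is far more involved:

```python
# Toy model of the CM/TextSecure transparent fallback: encrypt when the
# recipient is known to support TextSecure, otherwise send a plain SMS.
# (Names and the "encrypted:"/"plaintext:" tags are illustrative stand-ins.)

def send_message(recipient: str, text: str,
                 supports_textsecure: set[str]) -> str:
    if recipient in supports_textsecure:
        # Both ends run CM or TextSecure: key exchange happens transparently,
        # so the payload goes out encrypted end to end.
        return f"encrypted:{text}"
    # Recipient has neither: fall back to a standard unencrypted SMS.
    return f"plaintext:{text}"
```

The appeal of the design is exactly this transparency: the user sends messages the same way regardless, and encryption is applied whenever both endpoints can support it.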

Now, you might be wondering when you can get your hands on these goods. Luckily, you just have to make your way over to GitHub (12) if you’re a developer looking to incorporate the code into your own work, or if you simply want to snoop around. And if you’re an end user, rest assured that the latest CM10.2 nightlies already feature TextSecure integration. Integration into CM11 is coming soon as well, depending on how things go with the CM10.2 integration.

[Source: GitHub (12) | Via CyanogenMod Blog]


Not too long ago, we talked about the Flash SMS (class 0) DoS vulnerability affecting the current lineup of Nexus devices. Discovered by Romanian security researcher Bogdan Alecu, the vulnerability was such that Flash SMS (class 0) messages sent in rapid succession would cause unexpected behavior on various Nexus devices. Curiously, the bug only affected Nexus devices.

Luckily, the vulnerability was never all that damaging. After all, the worst outcome that has been seen so far is data loss due to a device reboot. That said, the vulnerability certainly opens up users to annoying pranks and spam that can get in the way of essential productivity.

Now, the vulnerability has claimed its first major conquest, though in a somewhat unexpected way. No, there wasn’t a malicious attack based on the vulnerability. Rather, HushSMS by app developer Michael Mueller has been removed from the Google Play Store for being in “violation of the dangerous products provision of the Content Policy and sections 4.3 and 4.4 of the Developer Distribution Agreement.” This is for an application that has been available in the Play Store for roughly ten months, and one that, “can send messages in accordance to the 3GPP Specification 23.040 ‘Technical realization of the Short Message Service,’ and some other specifications like OMA WAP,” as stated by Mueller himself.

While many of us are anticipating an official fix to come in the forthcoming Android 4.4.1, we can’t help but think that this is a rather curious “solution” to the problem by Google. For reference, the Google Cached Page for the HushSMS Play Store Listing is still available. More information from the developer can be found in the source link below.

[Source: Softpedia]


Due to their expedient updates and lack of potentially vulnerable carrier and OEM addons, Nexus devices are considered to be among the safest Android devices. Being certified by Google means a lot, but everything has some vulnerabilities, and the newest Nexus devices are no exception.

According to Romanian security researcher Bogdan Alecu, the Nexus lineup is vulnerable to a denial-of-service attack based on a special type of SMS. The attack relies on Flash SMS: short messages displayed directly on the screen without being stored in the inbox. These are most often seen on pre-paid plans, where a carrier uses them to display recent charges.

As it turns out, Flash SMS messages sent in rapid succession can cause unexpected behavior like freezing, crashing, or even rebooting. The newest Nexus phones will reboot after approximately 30 messages sent in a short time. Users won’t realize that their device was attacked unless they happen to be looking at the screen, and since data loss sometimes occurs, important calls can be missed as a result.

Alecu claims that Google was alerted to this problem about a year ago and promised to fix it in Android 4.3. Unfortunately, they didn’t fulfill that promise, and the issue is still present in KitKat on the Nexus 5. The situation is all the more unusual because non-Nexus devices are unaffected: Alecu claims to have tested nearly 20 different devices, and only Nexus devices were vulnerable.

The Google Play Store offers plenty of apps that can send Flash SMS messages, including one made by Bogdan Alecu himself. Luckily, Alecu was kind enough to release a proof of concept application that protects Nexus devices from these attacks as well.

The DoS attacks described by Bogdan Alecu are not the most malicious or dangerous out there, as an attacker can’t control your device. However, the potential for data loss, pranking, and even stalking makes this a rather annoying glitch. Hopefully, Google will look into this issue and fix it as soon as possible.

[Thanks to XDA Recognized Contributor D™ for the tip]


My mother always told me that security matters. And she was right. Security is important, as devices today can be hacked, phished, or scammed in multiple ways. That’s why protections are so important, especially in public areas. Security certificates were invented, and are widely used, to prevent thieves from stealing our data.

It appears that security matters to XDA Forum Member forceu as well, as he wrote a guide on installing a custom security certificate to bypass the “Your network could be monitored” message that appears when connecting to certain networks in KitKat. This pop-up can be annoying, and repeatedly dismissing it conditions you to ignore the message in situations where it could actually matter.

Forceu discovered that a certificate can be pushed to the /system/etc/security/cacerts/ folder, where the device will interpret it as a trusted certificate. As a result, that little annoyance will be disabled for good for the specific sites of your choosing. The certificate file must be saved in PEM format and edited as suggested in the guide, and the device must be rooted to allow copying the file to the /system partition. Once this process is done, the newly created certificate can be freely enabled or disabled from the trusted certificates list.
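As a rough sketch of the naming convention involved: the cacerts directory expects each file to be named after the certificate subject’s OpenSSL hash (on Android, the legacy `-subject_hash_old` variant, obtainable via `openssl x509 -subject_hash_old -in cert.pem -noout`) plus a numeric suffix. The hash value used below is a made-up example:

```python
# Destination path for a custom CA in Android's system trust store.
# The subject hash is assumed to be computed externally with OpenSSL;
# 0x5ed36f99 below is a fabricated example value.

CACERTS_DIR = "/system/etc/security/cacerts"

def cert_destination(subject_hash: int, collision: int = 0) -> str:
    """Build the cacerts filename: 8 hex digits of the subject hash, then a
    suffix that is incremented when two certificates share a subject hash."""
    return f"{CACERTS_DIR}/{subject_hash:08x}.{collision}"
```

After pushing the file there (with the /system partition remounted read-write on a rooted device), the certificate shows up in the trusted credentials list, as the guide describes.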

Visit the guide thread to learn more.


It should come as no surprise that here at XDA, we are always calling on the OEMs to do a better job of removing the bloat of their custom UIs (Samsung – we’re looking at you and your now insane TouchWiz size) and improving the overall user experience. What may come as a shock to some, though, is that a recent study by researchers at North Carolina State University says that those same OEMs, and their incessant need to have a custom UI as some sort of “branding,” are directly responsible for most of the security issues found with Android. Cue Home Alone face.

In all honesty, we really shouldn’t be all that surprised. XDA Elite Recognized Developer jcase gave a great talk at XDA:DevCon13 where he discussed “Android Security Vulnerabilities and Exploits.” There, he identified how OEMs (LG was his main example) are directly responsible for many of the vulnerabilities and exploits he finds.

The researchers at NC State found that 60% of the security issues were directly tied to changes OEMs had made to stock Android, specifically related to apps requesting more permissions than necessary. They looked at two devices from each of four OEMs (Sony, Samsung, LG, and HTC), one running a version of Android 2.x and another running 4.x from each OEM, along with the Nexus S and Nexus 4 from Google.

Here are a few of the findings:

  • 86% of preloaded apps asked for more permissions than were necessary, with most coming from OEMs.
  • 65-85% of the security issues on Samsung, HTC, and LG devices came from their customizations, while only 38% of the issues found on Sony devices came from Sony’s customizations.
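The “overprivilege” measurement behind these findings boils down to a set difference between what an app’s manifest requests and what its code actually uses. A toy illustration with shortened permission names (a real analysis requires static analysis of the APK, mapping API calls to the permissions they need):

```python
# Toy model of the study's overprivilege check: permissions a preloaded app
# declares in its manifest minus those any of its code paths actually use.

def overprivileged(requested: set[str], used: set[str]) -> set[str]:
    """Permissions declared but never exercised by the app's code."""
    return requested - used

# A hypothetical preloaded app for illustration:
extra = overprivileged(
    requested={"INTERNET", "CAMERA", "READ_SMS", "READ_CONTACTS"},
    used={"INTERNET", "CAMERA"},
)
```

Every permission left in `extra` is attack surface: if the app is compromised or leaks an interface, those unused grants become available to the attacker for free.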

For the user, this should be a warning to pay attention to the permissions requested when you install an app and to take steps to protect yourself, such as with the Xposed module XPrivacy. For the OEMs, shame on you. Consumers place trust in you, no matter how unfounded and risky that is. For you to break that trust by not being responsible and open in your dealings and development is just plain careless.

The full study, presented yesterday at the ACM Conference on Computer and Communications Security in Berlin, is definitely a good read, with specific case studies done on the Samsung Galaxy S3 and LG Optimus P880.

Source: MIT Technology Review

[Thanks to XDA Elite Recognized Developer toastcfh for the tip.]


Along with the various user-facing features added in Android 4.4 KitKat, Google significantly bolstered the overall security of the platform with a number of key changes. Among them is a change to SELinux, which was introduced in Android 4.3: Android 4.4 shifts SELinux from Permissive to Enforcing mode.

To quote our security expert Pulser_G2 on the matter:

SELinux in Enforce Mode

In Android 4.4, SELinux has moved from running in permissive mode (which simply logs failures), into enforcing mode. SELinux, which was introduced in Android 4.3, is a mandatory access control system built into the Linux kernel, in order to help enforce the existing access control rights (i.e. permissions), and to attempt to prevent privilege escalation attacks (i.e. an app trying to gain root access on your device).

While this is largely a good thing for the general population, this security enhancement hasn’t been without its own share of issues. For example, it has broken some root-enabled applications such as the previously covered Ultimate Dynamic Navbar.

In order to allow users to easily toggle between SELinux modes, XDA Senior Member MrBIMC created the aptly titled SELinuxModeChanger app. The application (obviously) requires root access. Once given, the app allows you to toggle the SELinux status with but a single click. Once you’ve made your choice, a script will execute on boot to change the mode to what you have selected.

Naturally, the app only works on devices with SELinux. In other words, this is only meant for devices running Android 4.3 Jelly Bean or 4.4 KitKat. Note, however, that this does not yet work on Samsung KNOX-enabled devices, though support is currently in the works.

If you wish to easily change your SELinux mode and you’re not running a KNOX-enabled ROM, make your way over to the application thread and give this app a try.

Android KitKat

In addition to the many user-facing improvements in the latest incarnation of Android announced yesterday, there are a number of interesting security improvements, which seem to indicate that Google have not totally neglected platform security in this new release. This article will run through what’s new, and what it means for you.

SELinux in Enforce Mode

In Android 4.4, SELinux has moved from running in permissive mode (which simply logs failures), into enforcing mode. SELinux, which was introduced in Android 4.3, is a mandatory access control system built into the Linux kernel, in order to help enforce the existing access control rights (i.e. permissions), and to attempt to prevent privilege escalation attacks (i.e. an app trying to gain root access on your device).

Support for Elliptic Curve Cryptography (ECDSA) Signing keys in AndroidKeyStore

The integrated Android keystore provider now includes support for Elliptic Curve signing keys. While Elliptic Curve Cryptography may have received some (unwarranted) bad publicity lately, ECC is a viable form of public key cryptography that can provide a good alternative to RSA and other such algorithms. While asymmetric cryptography will not withstand quantum computing developments, it is good to see that Android 4.4 is introducing more options for developers. For long-term data storage, symmetric encryption remains the best method.
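To give a feel for what ECDSA signing looks like in practice, here is a minimal sketch using the standard Java Cryptography Architecture, which runs on any JVM. The class name `EcdsaDemo` is our own; on an actual device you would direct the key into the `AndroidKeyStore` provider so the private key never leaves hardware-backed storage, which this desktop-friendly sketch does not do.

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class EcdsaDemo {
    // Generate a P-256 key pair, sign a message with SHA256withECDSA,
    // and verify the signature with the matching public key.
    public static boolean signAndVerify(byte[] message) throws GeneralSecurityException {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256); // 256-bit curve (NIST P-256)
        KeyPair pair = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("signature valid: " + signAndVerify("hello".getBytes()));
    }
}
```

Note how much shorter the key is compared to RSA at a comparable security level: a 256-bit EC key is roughly equivalent to a 3072-bit RSA key, which matters on battery- and CPU-constrained handsets.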

SSL CA Certificate Warnings

Many corporate IT environments include SSL monitoring software, which adds a Certificate Authority (CA) to your computer and/or browser, to permit the corporate web filtering software to carry out a “man in the middle” attack on your HTTPS sessions for security and monitoring purposes. This has been possible with Android by adding an additional CA key to the device (which permits your company’s gateway server to “pretend” to be any website it chooses). Android 4.4 will warn users if their device has had such a CA certificate added, such that they are aware of the possibility of this happening.
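The CA list that such monitoring software modifies is something you can inspect yourself. As a rough illustration (using standard JSSE rather than Android-specific APIs, and with the class name `CaAudit` invented for this sketch), enumerating the accepted issuers of the default trust managers shows every CA your runtime will trust, which is exactly the list where an unexpected corporate entry would show up:

```java
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class CaAudit {
    // Print and count every CA certificate the default trust store accepts.
    public static int countTrustedCas() throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null = use the platform default trust store
        int count = 0;
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                for (X509Certificate ca : ((X509TrustManager) tm).getAcceptedIssuers()) {
                    System.out.println(ca.getSubjectX500Principal().getName());
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("trusted CAs: " + countTrustedCas());
    }
}
```

Android 4.4's warning essentially automates this audit for the user: if a CA appears in the device store that was added locally rather than shipped by the platform, the user is told about it.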

Automated Buffer Overflow Detection

Android 4.4 now compiles with FORTIFY_SOURCE running at level 2, and ensures all C code is compiled with this protection. Code compiled with clang is also covered by this. FORTIFY_SOURCE is a security feature of the compiler, which attempts to identify some buffer overflow opportunities (which can be exploited by malicious software or users to gain arbitrary code execution on a device). While FORTIFY_SOURCE doesn’t eliminate all possibilities of buffer overflows, it certainly is better used than unused, to avoid any obvious oversights when allocating buffers.

Google Certificate Pinning

Expanding on the support for certificate pinning in earlier versions of Jelly Bean, Android 4.4 adds protection against certificate substitution for Google certificates. Certificate Pinning is the act of permitting only certain whitelisted SSL certificates to be used against a certain domain. This protects you from your provider substituting (for example) a certificate provided to it under an order by the government of your country. Without certificate pinning, your device would accept this valid SSL certificate (as SSL allows any trusted CA to issue any certificate). With certificate pinning, only the hard-coded valid certificate will be accepted by your phone, protecting you from a man-in-the-middle attack.
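The core of a pinning check is simple: hash the public key the server presents and compare it against a hard-coded whitelist of known-good hashes. The sketch below illustrates the idea under our own assumptions (the class `PinChecker` is hypothetical, and we simulate the "server" keys locally with freshly generated RSA pairs rather than a real TLS handshake):

```java
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.util.Base64;
import java.util.Set;

public class PinChecker {
    // Accept the presented key only if the SHA-256 hash of its encoded form
    // matches one of the hard-coded (pinned) hashes for the domain.
    public static boolean isPinned(PublicKey presentedKey, Set<String> pinnedHashes)
            throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String hash = Base64.getEncoder()
                .encodeToString(sha256.digest(presentedKey.getEncoded()));
        return pinnedHashes.contains(hash);
    }

    public static void main(String[] args) throws Exception {
        // Simulate: pin one key, then present the pinned key and an impostor key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        PublicKey pinned = kpg.generateKeyPair().getPublic();
        PublicKey impostor = kpg.generateKeyPair().getPublic();

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        Set<String> pins = Set.of(
            Base64.getEncoder().encodeToString(sha256.digest(pinned.getEncoded())));

        System.out.println("pinned key accepted:   " + isPinned(pinned, pins));
        System.out.println("impostor key accepted: " + isPinned(impostor, pins));
    }
}
```

This is why pinning defeats a CA-level compromise: the impostor certificate can be perfectly valid in the eyes of the CA system, but its public key hash will never match the pinned value.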

It certainly appears that Google have not been resting on their laurels with Android security. This is in addition to the inclusion of dm-verity, which could possibly have serious consequences for people who like to root and modify their devices with locked bootloaders (i.e. which enforce kernel signatures).

