
Posts Tagged: security


Here at XDA, we focus on bringing you news about what developers are up to on the forums or significant changes in the mobile industry. Today though, I bring an analysis of some recent news about goings-on in the security world in relation to a particular mobile application you may or may not have heard of: Snapchat.

Snapchat is best described as a gimmick application, widely used by teens to send each other photos and short videos, which “self-destruct” after viewing, supposedly preventing copies from being made. Before the security world tries to spear me on a stick and roast me, allow me to point out that Snapchat is an entirely flawed application. It’s not possible to achieve what they are trying to do, as they are trusting a device you control (your phone) to prevent you from copying data sent to it. As such, Snapchat has been broken. Many. Times. Over. On iPhone, and Android, and even via HTTP interception.

Four months ago, a group of security researchers known as Gibson Security identified a flaw in the Snapchat server API (the interface through which the Snapchat application communicates with the server), in the feature allowing users to find other users based on their mobile phone numbers. As the intention was for the application to upload a user’s contact list in order to find friends using the service, the API permitted a rapid rate of phone number queries. This allowed anyone to rapidly query the Snapchat service with phone numbers, asking if those numbers were in use by any user of the service, and if so, the associated username of that user.

Gibson Security found the original flaw in July 2013 and disclosed the issue to Snapchat. Four months later, there was still no response from Snapchat. They even tried applying for one of the jobs Snapchat was advertising! (source) On December 24th, Gibson Security released full documentation of the Snapchat API. The Snapchat API, while not documented, is not in any way hidden from a competent user, as the Snapchat application simply sends requests to the Snapchat servers in a particular format. Unfortunately though, Snapchat seem to be great believers in “security through obscurity,” sending unfounded takedown requests against people working to understand their API. That shows Snapchat has something to hide. After all, reliable, robust, and professional services make their APIs available freely and openly for people to use.

What followed was Snapchat’s somewhat lackluster statement on the matter, which amounted to saying “they were right, but we don’t think it’s a big deal, so we won’t really do anything about it, short of hiding behind some words about API query limits.” As anyone competent in security can tell you, putting limits on this API is a short-term stop-gap (if done correctly), but it isn’t a proper solution. The proper solution is to redesign this functionality so that attackers cannot gain any information about users by simply guessing phone numbers. A shame Snapchat’s team have probably never even seen the word “security,” let alone used it with any meaning.
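
For context, the “query limits” Snapchat mention usually amount to nothing more than a per-client counter on the lookup endpoint. The sketch below is purely illustrative (the class, the threshold, and the client identifier are hypothetical, not anything from Snapchat’s code), and it shows why such a limit is only a stop-gap: an attacker who rotates accounts or source addresses simply gets a fresh counter.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a fixed-window cap on phone-number lookups per client.
// The class, threshold, and identifiers are hypothetical.
public class LookupRateLimiter {
    private static final int MAX_LOOKUPS_PER_HOUR = 30;        // arbitrary example threshold
    private static final long WINDOW_MS = 60 * 60 * 1000L;

    private static class Window {
        long windowStart;
        int count;
    }

    private final Map<String, Window> perClient = new ConcurrentHashMap<>();

    /** Returns true if this client may perform another phone-number lookup. */
    public synchronized boolean allowLookup(String clientId) {
        long now = System.currentTimeMillis();
        Window w = perClient.computeIfAbsent(clientId, k -> new Window());
        if (now - w.windowStart > WINDOW_MS) {                  // window expired: reset the counter
            w.windowStart = now;
            w.count = 0;
        }
        if (w.count >= MAX_LOOKUPS_PER_HOUR) {
            return false;                                        // throttle this client
        }
        w.count++;
        return true;
    }
}
```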

They also make some really rather bold statements, such as:

“We are grateful for the assistance of professionals who practice responsible disclosure and we’ve generally worked well with those who have contacted us.”

Given this case indicates the opposite (more than four months without a response to Gibson Security), I refuse to believe this, and implore you to do the same. In fact, I would love to hear from any security researcher who has had any kind of positive interaction with Snapchat, at any time. I genuinely would, as it would suggest that perhaps Snapchat are not lying through their teeth in a moment of self-preservation at this point.

On the 1st of January, a website appeared, offering for download 4.6 million Snapchat users’ phone numbers and associated usernames. While the released database had the final two digits of each phone number censored, those releasing the data said they would give access to the full, uncensored data if approached with reasonable requests. That means there are now 4.6 million users of Snapchat whose phone numbers are available to the world. While some naive and technically inept news sources report that the files and associated website have been “taken down,” as we all know, nothing is ever fully deleted from the Internet, and the files remain easily accessible for those seeking them. Unfortunately for anyone whose details were in that database, the damage has now been done.

Two to three days later (depending on timezones and the precise time of the data’s release), Snapchat finally raised their heads from the sand to make a somewhat pointless blog post. They did not apologise for the data breach, nor did they apologise for being naive or mishandling user data. In fact, they didn’t actually apologise for anything. They were rather quick to apportion blame, though:

“On Christmas Eve, that same group publicly documented our API, making it easier for individuals to abuse our service and violate our Terms of Use”

Unfortunately for Snapchat, designing a broken and insecure web service and then calling the API that interfaces with it “private” is not going to help here. No serious hacker (who has bad intentions for your users’ data) is going to read your terms of use and say “I won’t hack them then… They asked me nicely not to do it.” Your terms of service should never be a front-line protection. Imagine, for instance, if Snapchat stated in their terms of service, “you must not view messages intended for other users,” and then simply made every message publicly visible to everyone. I know it sounds far-fetched and silly, but that perhaps puts Snapchat’s naive approach to security into perspective.

Indeed, Sophos appear to concur with my thinking here. Ultimately though, if users want to be protected from these kinds of attacks, I have two key pieces of advice. Firstly, give out less information. There is no reason for Snapchat to require or ask for your phone number, other than to enhance their user base and get you using Snapchat more. Mobile phone numbers are personal information, and you should really stop handing them out to services (sometimes without your knowledge). Take a look at XPrivacy by XDA Senior Member M66B to control access to this kind of data.

Secondly, and arguably more importantly, companies need to protect your data. I would say they should protect it as much as their own data, though given that Evan Spiegel’s (Snapchat founder and CEO) own phone number and username were in the breach, I suspect they don’t take enough care of their own information either. Users should be able to expect that ANY service being actively marketed and courting new users is secure, and that this security has been tested by the company employing security experts, or at least obtaining a suitable level of peer review of their source code. Just think—if Snapchat’s web service were open source, this would have been fixed months ago, if the bug had even made it past the scrutiny of the open source community in the first place.

To close, I offer you the following questions:

  1. How incompetent and complacent must a company be to ignore a security advisory of any kind for four months?
  2. Why would a company such as Snapchat, in dire need of security knowledge, ignore a job application from a group of security researchers?
  3. Do Snapchat seriously believe that a malicious attacker (who wouldn’t tell anyone they had obtained this information) will avoid taking advantage of Snapchat’s security weaknesses just because they are politely asked not to? (Imagine asking another country nicely not to invade you – it doesn’t work.)
  4. What can Snapchat do to regain user trust? Aside from working openly with the security community (of which, full disclosure, I am a member) and fixing the issues, Snapchat need to apologise to their users and show some humility. Evan Spiegel is a college student, and he needs to bring in people who know about security. I know plenty of college students who are experts and could have prevented this.

While I was writing this article, further vulnerabilities were found in Snapchat. It appears that the original issue only scratches the surface of the problems with their service. I do hope they take this opportunity to have a competent security review of all their services and code carried out, so that they can protect their users and ensure their data is properly protected in future.


While secure text messaging systems have been available on Android for quite some time, many users (even power users) have failed to set them up on their devices. This isn’t because privacy isn’t important, but because it’s often one of those things you don’t think about until it’s too late.

Now, CyanogenMod is taking a great first step by incorporating an existing, open source secure text messaging platform into the ROM. The integration comes in the form of TextSecure, which is maintained by Open WhisperSystems and lead engineer Moxie Marlinspike. Moxie is also in charge of the CM integration of the app, ensuring functionality and a degree of security. New to the CM implementation is SMS middleware functionality, which works similarly to the Google Voice integration in CyanogenMod.

The way it will work for end users is simple: If you are running CM and send a message to another CM or TextSecure user, your messages will be automatically encrypted and secured. However, if your messages are sent to recipients without either, a standard unencrypted text message will be sent.
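
As a rough sketch of that decision logic (the interfaces and method names below are hypothetical and not taken from the CyanogenMod or TextSecure source), the routing boils down to a single capability check per recipient:

```java
// Hypothetical sketch of the send-path decision described above; none of these
// names come from the actual CyanogenMod or TextSecure code.
public class OutgoingMessageRouter {
    interface SecureTransport {
        boolean recipientSupportsTextSecure(String address);
        void sendEncrypted(String address, String body);
    }

    interface PlainSmsTransport {
        void sendPlainText(String address, String body);
    }

    private final SecureTransport secure;
    private final PlainSmsTransport plain;

    OutgoingMessageRouter(SecureTransport secure, PlainSmsTransport plain) {
        this.secure = secure;
        this.plain = plain;
    }

    void send(String address, String body) {
        if (secure.recipientSupportsTextSecure(address)) {
            secure.sendEncrypted(address, body);    // end-to-end encrypted path
        } else {
            plain.sendPlainText(address, body);     // fall back to a standard SMS
        }
    }
}
```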

Now, you might be wondering when you can get your hands on these goods. Luckily, you just have to make your way over to GitHub (12) if you’re a developer looking to incorporate the code into your own work, or if you simply want to snoop around. And if you’re an end user, rest assured that the latest CM10.2 nightlies already feature TextSecure integration. Integration into CM11 is coming soon as well, depending on how things go with the CM10.2 integration.

[Source GitHub (12) | Via CyanogenMod Blog]


Not too long ago, we talked about the Flash SMS (class 0) DoS vulnerability affecting the current lineup of Nexus devices. Discovered by Romanian security researcher Bogdan Alecu, the vulnerability was such that Flash SMS (class 0) messages sent in rapid succession would cause unexpected behavior on various Nexus devices. Curiously, the bug only affected Nexus device owners.

Luckily, the vulnerability was never all that damaging. After all, the worst outcome that has been seen so far is data loss due to a device reboot. That said, the vulnerability certainly opens up users to annoying pranks and spam that can get in the way of essential productivity.

Now, the vulnerability has claimed its first major conquest, though in a somewhat unexpected way. No, there wasn’t a malicious attack based on the vulnerability. HushSMS by app developer Michael Mueller has been removed from the Google Play Store for being in “violation of the dangerous products provision of the Content Policy and sections 4.3 and 4.4 of the Developer Distribution Agreement.” This is for an application that has been available in the Play Store for roughly ten months, and one that, “can send messages in accordance to the 3GPP Specification 23.040 ‘Technical realization of the Short Message Service,’ and some other specifications like OMA WAP,” as stated by Mueller himself.

While many of us are anticipating an official fix to come in the forthcoming Android 4.4.1, we can’t help but think that this is a rather curious “solution” to the problem by Google. For reference, the Google Cached Page for the HushSMS Play Store Listing is still available. More information from the developer can be found in the source link below.

[Source: Softpedia]


Due to their expedient updates and lack of potentially vulnerable carrier and OEM addons, Nexus devices are considered to be among the safest Android devices. Being certified by Google means a lot, but everything has vulnerabilities, and the newest Nexus devices are no exception.

According to Romanian security researcher Bogdan Alecu, the Nexus lineup is vulnerable to a denial-of-service attack based on a special type of SMS. The attack relies on Flash SMS, short messages displayed on the screen without being stored in the inbox. These are most often seen on pre-paid plans, where carriers use them to show recent charges.

As it turns out, Flash SMS messages sent in rapid succession can cause some unexpected behavior like freezing, crashing, or even rebooting. The newest Nexus phones will reboot after approximately 30 messages sent in a short time. Users won’t realize that their device was attacked unless they are looking at the screen. Sometimes data loss occurs, and important calls can be missed as a result.

Alecu claims that Google was alerted about this problem about a year ago and promised to fix it in Android 4.3. Unfortunately, they didn’t fulfill that promise, and the issue is still present in KitKat on the Nexus 5. The situation is even stranger given that non-Nexus devices are unaffected: the researcher claims to have tested nearly 20 different devices, and only Nexus devices were vulnerable.

The Google Play Store offers plenty of apps that can send Flash SMS messages, including one made by Bogdan Alecu himself. Luckily, Alecu was kind enough to release a proof of concept application that protects Nexus devices from these attacks as well.

The DoS attacks described by Bogdan Alecu are not especially malicious or dangerous, as an attacker can’t control your device. However, the potential for data loss, pranking, and even stalking makes this a rather annoying glitch. Hopefully, Google will look into this issue and fix it as soon as possible.

[Thanks to XDA Recognized Contributor D™ for the tip]


My mother always told me that security matters, and she was right. Devices can be hacked, phished, or scammed in multiple ways, which is why protections are so important, especially on public networks. Security certificates were invented, and are widely used, to prevent thieves from stealing our data.

It appears that security matters to XDA Forum Member forceu as well, as he wrote a guide on installing a custom security certificate to bypass the “Your network could be monitored” message when connecting to certain networks in KitKat. This pop-up can be annoying, and it conditions you to dismiss the message even when it could actually matter.

Forceu discovered that a certificate can be pushed to the /system/etc/security/cacerts/ folder, where the device will treat it as a trusted certificate. As a result, that little annoyance will be disabled for good for the specific sites of your choosing. The certificate file must be saved in PEM format and edited as suggested in the guide, and the device must be rooted to allow copying the file to the /system partition. Once this is done, the newly created certificate can be freely enabled or disabled from the trusted certificates list.

Visit the guide thread to learn more.


It should come as no surprise that here at XDA, we are always calling on the OEMs to do a better job of removing the bloat of their custom UIs (Samsung – we’re looking at you and your now insane TouchWiz size) and improving the overall user experience. What may come as a shock to some, though, is that a recent study by researchers at North Carolina State University says that those same OEMs, and their incessant need to have a custom UI as some sort of “branding,” are directly responsible for most of the security issues found with Android. Cue Home Alone face.

In all honesty, we really shouldn’t be all that surprised. XDA Elite Recognized Developer jcase gave a great talk at XDA:DevCon13 where he discussed “Android Security Vulnerabilities and Exploits.” There, he identified how OEMs (LG was his main example) are directly responsible for many of the vulnerabilities and exploits he finds.

The researchers at NC State found that 60% of the security issues were directly tied to changes OEMs had made to stock Android, specifically related to apps requesting more permissions than were necessary. They looked at two devices from each of four OEMs (Sony, Samsung, LG, and HTC), one running a version of Android 2.x and the other running 4.x, along with the Nexus S and Nexus 4 from Google.

Here are a few of the findings:

  • 86% of preloaded apps asked for more permissions than were necessary, with most coming from OEMs.
  • 65-85% of the security issues on Samsung, HTC, and LG devices come from their customizations, while only 38% of the issues found on Sony devices came from them.

For the user, this should be a warning to pay attention to the permissions requested when you install an app and to take steps to protect yourself, such as with the Xposed module XPrivacy. For OEMs, shame on you. Consumers place trust in you, no matter how unfounded and risky that is. For you to break that trust by not being responsible and open in your dealings and development is just plain careless.

The full study, presented yesterday at the ACM Conference on Computer and Communications Security in Berlin, is definitely a good read, with specific case studies done on the Samsung Galaxy S3 and LG Optimus P880.

Source: MIT Technology Review

[Thanks to XDA Elite Recognized Developer toastcfh for the tip.]


Along with the various user-facing features added in Android 4.4 KitKat, Google significantly bolstered the overall security of the platform with a number of key changes. One of the key changes relates to SELinux, which was introduced in Android 4.3. Android 4.4, however, shifts SELinux from Permissive to Enforcing mode.

To quote our security expert Pulser_G2 on the matter:

SELinux in Enforce Mode

In Android 4.4, SELinux has moved from running in permissive mode (which simply logs failures), into enforcing mode. SELinux, which was introduced in Android 4.3, is a mandatory access control system built into the Linux kernel, in order to help enforce the existing access control rights (i.e. permissions), and to attempt to prevent privilege escalation attacks (i.e. an app trying to gain root access on your device).

While this is largely a good thing for the general population, this security enhancement hasn’t been without its own share of issues. For example, it has broken some root-enabled applications such as the previously covered Ultimate Dynamic Navbar.

In order to allow users to easily toggle between SELinux modes, XDA Senior Member MrBIMC created the aptly titled SELinuxModeChanger app. The application (obviously) requires root access. Once given, the app allows you to toggle the SELinux status with but a single click. Once you’ve made your choice, a script will execute on boot to change the mode to what you have selected.
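
For illustration, toggling the mode at runtime boils down to running the stock setenforce command as root; the sketch below shows that idea under that assumption and is not MrBIMC’s actual code. A boot-time script would simply re-run the same command after every restart.

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative only: switch SELinux between Enforcing (1) and Permissive (0)
// by running the stock "setenforce" command in a root shell.
public class SELinuxToggle {
    public static void setEnforcing(boolean enforcing) throws IOException, InterruptedException {
        Process su = Runtime.getRuntime().exec("su");
        try (DataOutputStream shell = new DataOutputStream(su.getOutputStream())) {
            shell.writeBytes("setenforce " + (enforcing ? "1" : "0") + "\n");
            shell.writeBytes("exit\n");
            shell.flush();
        }
        su.waitFor();   // a boot-time script would re-apply the chosen mode on every restart
    }
}
```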

Naturally, the app only works on devices with SELinux. In other words, it is only meant for devices running Android 4.3 Jelly Bean or 4.4 KitKat. Of note, however, the app does not yet work on Samsung KNOX-enabled devices, though support is currently being worked on.

If you wish to easily change your SELinux mode and you’re not running a KNOX-enabled ROM, make your way over to the application thread and give this app a try.


In addition to the many user-facing improvements in the latest incarnation of Android announced yesterday, there are a number of interesting security improvements, which seem to indicate that Google have not totally neglected platform security in this new release. This article will run through what’s new, and what it means for you.

SELinux in Enforce Mode

In Android 4.4, SELinux has moved from running in permissive mode (which simply logs failures), into enforcing mode. SELinux, which was introduced in Android 4.3, is a mandatory access control system built into the Linux kernel, in order to help enforce the existing access control rights (i.e. permissions), and to attempt to prevent privilege escalation attacks (i.e. an app trying to gain root access on your device).

Support for Elliptic Curve Cryptography (ECDSA) Signing keys in AndroidKeyStore

The integrated Android keystore provider now includes support for Elliptic Curve signing keys. While Elliptic Curve Cryptography may have received some (unwarranted) bad publicity lately, ECC is a viable form of public key cryptography that can provide a good alternative to RSA and other such algorithms. While asymmetric cryptography will not withstand quantum computing developments, it is good to see that Android 4.4 is introducing more options for developers. For long-term data storage, symmetric encryption remains the best method.
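
For readers unfamiliar with ECDSA, signing and verification with the standard JCA classes look like this. Note that this simple example does not use the new AndroidKeyStore-backed keys, which additionally keep the private key out of the app’s own memory:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class EcdsaExample {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit elliptic curve key pair.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair keyPair = kpg.generateKeyPair();

        byte[] message = "data to be signed".getBytes(StandardCharsets.UTF_8);

        // Sign with the private key...
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // ...and verify with the public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```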

SSL CA Certificate Warnings

Many corporate IT environments include SSL monitoring software, which adds a Certificate Authority (CA) to your computer and/or browser, to permit the corporate web filtering software to carry out a “man in the middle” attack on your HTTPS sessions for security and monitoring purposes. This has been possible with Android by adding an additional CA key to the device (which permits your company’s gateway server to “pretend” to be any website it chooses). Android 4.4 will warn users if their device has had such a CA certificate added, such that they are aware of the possibility of this happening.
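
The warning itself is built into the platform, but an app can perform a similar check on its own. The rough sketch below assumes Android’s “AndroidCAStore” KeyStore type and its “user:” alias prefix for user-installed certificates; treat both as assumptions rather than a description of how the 4.4 warning is implemented.

```java
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

// Rough sketch: list user-added CAs, the kind of certificate the new warning targets.
// Assumes the "AndroidCAStore" KeyStore type and its "user:" alias prefix.
public class CaAudit {
    public static void listUserAddedCas() throws Exception {
        KeyStore ks = KeyStore.getInstance("AndroidCAStore");
        ks.load(null, null);
        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (alias.startsWith("user:")) {    // user-installed, not shipped with Android
                X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
                System.out.println("User-added CA: " + cert.getSubjectDN());
            }
        }
    }
}
```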

Automated Buffer Overflow Detection

Android 4.4 now compiles with FORTIFY_SOURCE running at level 2, and ensures all C code is compiled with this protection. Code compiled with clang is also covered by this. FORTIFY_SOURCE is a security feature of the compiler, which attempts to identify some buffer overflow opportunities (which can be exploited by malicious software or users to gain arbitrary code execution on a device). While FORTIFY_SOURCE doesn’t eliminate all possibilities of buffer overflows, it certainly is better used than unused, to avoid any obvious oversights when allocating buffers.

Google Certificate Pinning

Expanding on the support for certificate pinning in earlier versions of Jellybean, Android 4.4 adds protection against certificate substitution for Google certificates. Certificate Pinning is the act of permitting only certain whitelisted SSL certificates to be used against a certain domain. This protects you from your provider substituting (for example) a certificate provided to it under an order by the government of your country. Without certificate pinning, your device would accept this valid SSL certificate (as SSL allows any trusted CA to issue any certificate). With certificate pinning, only the hard-coded valid certificate will be accepted by your phone, protecting you from a man-in-the-middle attack.
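
Android’s pinning for Google domains lives in the platform itself, but the underlying idea is easy to show at the application level. The sketch below is a simplified illustration only: the pin value is a placeholder, and a real implementation would pin hashes for several keys across the certificate chain to survive key rotation.

```java
import java.io.InputStream;
import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

// Minimal illustration of certificate pinning in application code.
// EXPECTED_PIN_B64 is a placeholder, not a real key hash.
public class PinnedConnection {
    private static final String EXPECTED_PIN_B64 = "REPLACE_WITH_BASE64_SHA256_OF_PUBLIC_KEY";

    public static InputStream openPinned(String url) throws Exception {
        HttpsURLConnection conn = (HttpsURLConnection) new URL(url).openConnection();
        conn.connect();
        Certificate[] chain = conn.getServerCertificates();
        byte[] publicKey = chain[0].getPublicKey().getEncoded();    // leaf certificate's public key
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(publicKey);
        String pin = Base64.getEncoder().encodeToString(hash);
        if (!pin.equals(EXPECTED_PIN_B64)) {                        // reject substituted certificates
            conn.disconnect();
            throw new SecurityException("Certificate pin mismatch for " + url);
        }
        return conn.getInputStream();
    }
}
```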

It certainly appears that Google have not been resting on their laurels with Android security. This is in addition to the inclusion of dm-verity, which could possibly have serious consequences for people who like to root and modify their devices with locked bootloaders (i.e. which enforce kernel signatures).


Android 4.4 introduces a number of changes intended to reduce the risks of rootkits on the platform. In addition to SELinux, the dm-verity kernel feature is also used on boot. The dm-verity feature is used to verify the filesystem storage, and detect modifications to the device at block level (rather than file level). In essence, dm-verity aims to prevent root software from modifying the device file system. This is done by detecting the modifications made to the filesystem, which will no longer match the expected configuration.

In dm-verity, each block of the storage device has a SHA-256 hash associated with it. (For reference, a block is simply a unit of address for storage, typically around 4 KB on flash devices.) A tree of hashes is formed across pages, such that only the “top” hash in the tree (known as the root hash) needs to be trusted, in order for the entire filesystem to be trusted. If any block is modified, this will change the hash, breaking the chain.
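
To make the hash tree concrete, here is a toy construction of a root hash over a list of blocks. Real dm-verity lays the intermediate hashes out in on-disk pages and verifies them lazily; this sketch only shows why changing any single block changes the root hash.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the dm-verity hash tree: hash every block, then hash the
// hashes pairwise until a single root hash remains.
public class HashTreeSketch {
    static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    static byte[] rootHash(List<byte[]> blocks) throws Exception {
        List<byte[]> level = new ArrayList<>();
        for (byte[] block : blocks) {
            level.add(sha256(block));                   // leaf: hash of one storage block
        }
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] left = level.get(i);
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                byte[] pair = new byte[left.length + right.length];
                System.arraycopy(left, 0, pair, 0, left.length);
                System.arraycopy(right, 0, pair, left.length, right.length);
                next.add(sha256(pair));                 // parent node covers two children
            }
            level = next;
        }
        return level.get(0);                            // the trusted root hash
    }
}
```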

The boot partition of the device will contain a public key, which the OEM is expected to externally verify (perhaps via the bootloader or low-level CPU features). This public key is used to ensure the signature of the hash on the file system is valid and unmodified.

In order to reduce the time taken to verify the filesystem, blocks are only verified when they are accessed, and are verified in parallel with the regular read operation (to essentially eliminate any latency in accessing the storage). If verification fails (i.e. files have changed on the system partition), a read error is generated. Depending on the application accessing the data, it may proceed if it’s not a critical action, but it is also possible for applications to decline to operate under these conditions.

While nobody can predict the future with 100% accuracy, I think it’s fair to say that “rooting” and modifying devices running Android 4.4 with locked bootloaders (i.e. where root exploits are required, as the OEM will not permit custom kernels) may well be considerably more difficult than in previous Android versions. It seems that Android 4.4 is taking a few leaves out of the Chrome OS book, as these changes essentially implement “verified boot,” as found on Chrome OS.

To reiterate, if you are able to change the kernel your device uses, this feature will not be a concern. It’s possible either to disable dm-verity in the kernel, or to set it up to use your own keys to authenticate the system hash. For users who choose to buy carrier-branded devices and accept a locked bootloader, but find a way to root the device, take heed of this warning. It’s not at all unlikely (in my technical opinion) for this to happen on future devices. If you want the ability to modify the software on your phone, I’d avoid anything with a locked bootloader, and ensure you can modify the kernel (to disable dm-verity or modify its signatures).

Right now, little is known about what this will actually mean, but aside from greater security for users on stock ROMs, I suspect there will be some noticeable impact on casual users wishing to make small changes to Android. Until we see devices from other OEMs shipping with 4.4, it’s difficult to really assess how (or if) this will change things. But take note, and bear it in mind.

Source: Android Source Code Documentation


If you’ve ever handed your phone to someone, whether to show them a funny picture or because they asked to check it out, you know the terror that runs through your mind thinking of what they could stumble upon: your usernames and passwords for different sites, your special ‘recipe,’ your mistress’s phone number, anything.

Well, XDA Forum Member msappz offers a new way to keep your secret life private. In this video, XDA Developer TV Producer Walter White TK reviews Safe N Secure Notepad. TK shows off the application and gives his thoughts, so check out this app review.



The answer to the question above, as security researcher Philip Marquardt demonstrated, is “yes.” However, it’s not all that likely in practice, and there are several simple ways to protect yourself.

Data security is a rapidly growing concern in our increasingly digital world. In order to help bring these concerns to light, we recently launched a Security forum specifically for discussion of various security-related topics. Not too long ago, we also talked about malware on Android and how this is largely an overstated problem for those running relatively recent builds of the OS. However, when most people think of mobile security, they think of protecting their own device from intrusions. What many people haven’t considered is the possibility of using a mobile device’s built-in sensors to spy on unsuspecting victims.

This is exactly what Philip did while at Georgia Tech, as part of a proof-of-concept keystroke logger. This keylogger works in a rather unconventional way. Rather than using a physical connection to intercept data between a target computer and its keyboard, or malicious software stored on the target computer, Philip demonstrated that an iPhone 4’s accelerometer could be used to determine the keystrokes pressed on a nearby physical keyboard. Using two neural networks (one for horizontal distance, and one for vertical distance), the software was able to correlate vibrations picked up by the phone’s accelerometer with their associated keystrokes.
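
The research used an iPhone, but the data-acquisition side is just as straightforward on Android, where reading the accelerometer requires no permission at all. The sketch below shows only the sampling; the neural-network classification that maps vibration patterns to key positions is the genuinely hard part and is not shown.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Samples the accelerometer at the highest rate the platform will provide.
// This is only the data-collection half of such an attack.
public class VibrationSampler implements SensorEventListener {
    private final SensorManager sensorManager;

    public VibrationSampler(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_FASTEST);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // A real logger would timestamp and buffer x/y/z for later analysis.
        float x = event.values[0], y = event.values[1], z = event.values[2];
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```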

Naturally, there are some major limitations preventing this from becoming the next big security scare. First, a rather precise and sensitive accelerometer must be used. For example, in Philip’s testing, the iPhone 3GS was not sensitive enough to work properly, but the iPhone 4 was. Second, the mobile device doing the data acquisition must be relatively close to the physical keyboard being used, because as we all know, unfocused vibrational energy falls off with the square of the distance through most transmission media. Furthermore, even with extended learning time, individual keypress recognition was impossible, and whole-word recognition was only 46% accurate—with shorter words being correspondingly less accurate.

Despite the low accuracy for individual key presses and individual words, however, reliability increased dramatically to 73% if second-choice words were also counted. Thus, semantic analysis clearly has a powerful effect in tuning word detection in context. That said, this reliance on context would render detection of passwords and other non-semantic data impossible.

So while this is extremely unlikely to be used in the wild in its current form, and the current detection accuracy limits its use to dictionary words, I know that I’ll be a bit more careful if I notice some unknown mystery object on my desk. After all, the sensors in our mobile devices are only becoming more and more accurate. Furthermore, more purpose-built sensors could conceivably be used to achieve higher detection accuracy.

Ultimately, there are many more likely ways in which your data will be stolen, so this is nothing to lose sleep over. And if you really wish to protect yourself from the possibility of accelerometer-based spying, just make sure there are no hidden devices on your desk next to your keyboard. Now, acoustic emanation word detection (PDF)… That’s something far more worrying and far more difficult to thwart. I guess it’s time to listen to loud, bass-heavy music whenever I type sensitive information. It may go well with my tinfoil hat. :P

You can learn more about Philip’s research by viewing his security research paper (warning: PDF).

Via I Programmer.

[Thanks to security researcher John Doyle for the heads up.]


A little over a year ago, we took a look at Anti Spy Mobile, an application by XDA Senior Member pandata000 aimed at helping users make sure that their applications’ permissions were in check. The app worked by figuring out which applications are installed, searching for well-known spyware, analyzing permissions and Android intents, and giving the user an easily understandable list of potential trouble spots. Unfortunately, Anti Spy Mobile is not able to track the actual connections made by spyware.

In response to user requests, pandata000 has now created a new application aimed at showing all of your current connections. Aptly titled Network Connections, it monitors and logs connections from every network-connected app, so that you know exactly where your data is going. Similar to the netstat command, it works for both inbound and outbound traffic, and it displays the output on a per-app basis. Network Connections is even compatible with non-rooted phones. In other words, you have no excuse for not at least checking periodically.
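
How does a netstat-style view work without root? One common approach is to parse the kernel’s /proc/net/tcp table, which lists every TCP socket along with the UID of the app that owns it. The sketch below illustrates that general technique; it is an assumption for illustration, not a description of the Network Connections source.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Rough sketch: enumerate TCP connections by parsing /proc/net/tcp, which any
// app could read on Android releases of this era without root.
public class ProcNetReader {
    public static void dumpTcpConnections() throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/net/tcp"))) {
            String line = reader.readLine();                  // skip the header row
            while ((line = reader.readLine()) != null) {
                String[] fields = line.trim().split("\\s+");
                String remote = fields[2];                    // remote address as "HEXADDR:HEXPORT"
                String uid = fields[7];                       // Linux UID of the owning app
                String[] hostPort = remote.split(":");
                int port = Integer.parseInt(hostPort[1], 16);
                System.out.println("uid=" + uid + " remote=" + decodeIPv4(hostPort[0]) + ":" + port);
            }
        }
    }

    // /proc/net/tcp stores IPv4 addresses as little-endian hex.
    private static String decodeIPv4(String hex) {
        long value = Long.parseLong(hex, 16);
        return (value & 0xFF) + "." + ((value >> 8) & 0xFF) + "."
                + ((value >> 16) & 0xFF) + "." + ((value >> 24) & 0xFF);
    }
}
```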

Head over to the application thread to get started.

Please note: Network Connections normally comes in two forms: lite and premium. There are a few minor restrictions in the lite version, such as an unobtrusive nag screen and a limit on how long you can leave continuous capture enabled. The application can be relaunched indefinitely, allowing users to keep capturing connection information after the limit has been reached. However, as a special offering to the XDA community, the developer has made an unlocker available from now until next Saturday (10/19/2013) that will give you a permanently free copy of the premium app.


We’ve all heard about the Android malware problem. After all, proponents of other mobile operating systems love to spread FUD stating that Android’s malware situation is out of control. Further, there are various entities such as antivirus firms with vested interests in demonstrating that there is indeed an issue.

Who can blame the companies using these unscrupulous tactics? After all, it’s simply good business to undermine your mobile OS competitors or, in the case of security solution providers, to create demand for your product. And up until very recently, Google unfortunately lacked a reliable way of determining and tracking the scope of the problem. That changed recently, however, when Google introduced its current multiple layers of defense, which Google has laid out in an infographic.

According to a presentation by Android Security Chief Adrian Ludwig, it is estimated that less than 0.001% of application installs are able to evade the platform’s multi-layered defense system—a system which includes sandboxed permissions, application verification, trusted sources, and runtime defenses. This figure includes both applications installed through Google Play, as well as the 1.5 billion applications installed through other means (side-loaded or alternate app stores).

So what does the data show? When installing from non-Google sources, under 0.5% of applications are flagged by the application verification system. Of these, under 0.13% end up being installed by the user, and under 0.001% attempt to evade Android’s runtime defenses. The actual number able to cause harm and evade these defense mechanisms is unclear, but if the data is to be believed, it stands to reason that this number is smaller than 0.001% of the applications that users attempt to install.

The next major question becomes which apps are most frequently flagged by the application verification system. Research presented by Ludwig demonstrated that nearly 40% of these applications are “fraudware” apps that make premium phone calls and text messages. Another 40% are rooting apps, which are “potentially harmful,” but not malicious per se. Then, 15% of the apps are commercial spyware, which track things such as Internet behavior or collect other personal information. The remainder is a diverse group of truly malicious apps.

In the grand scheme of things, 0.001% is a very small number. That’s 1 in 100,000, which anyone would be hard pressed to label as significant. It’s not 0, but it’s unrealistic to expect it to be 0. That said, it’s close enough to zero that the vast majority of users should be relatively safe by employing good security practices: installing applications only from trusted sources and reading their permissions.

It is important to keep in mind, however, that just like the security providers and proponents of other mobile operating systems that can profit from Android security FUD, Google also has a dog in the fight. No group is truly impartial here. And unfortunately, it is up to the user to decide with his or her own personal data who is to be believed. That said, I know my mobile operating system of choice, despite the FUD. But since I care about my data (and that of my friends and family), I won’t be turning off Verify Apps or installing from untrusted sources any time soon.

Have you or any of your friends ever fallen victim to malware on Android? Let us know your thoughts on the Android malware situation in the comment box below. You can learn more by viewing the full presentation.

Source: Quartz

[via Google+]
