
Posts Tagged: security

Heml.is

In light of all the recent panic over surveillance and Internet monitoring, there are a plethora of “secure” communication programs being announced and launched. These tend to make bold promises of being secure, protecting users from surveillance, and being better than equivalent services.

Yesterday, three notable personalities in the web-o-sphere lost much credibility in my view (and in the view of anyone interested in security). Why? For using pseudo-security, and trying to market it as security. They clearly do not have a strong background in cryptography or security theory, and appear to be out to make money rather than to create a well-designed, well-architected, resilient, and decentralised service. I’m not against someone making a commercial service, but hey, at least design it well, and make it open source.

Open source doesn’t prevent commercial success. Take a look at, say, Android, or Red Hat Linux, or SUSE, or indeed any open source project with a company behind it that doesn’t run at a loss (and hey, a company that runs at a loss won’t last long).

Without further ado, let’s just take a look at what they say about their service.

From their own FAQ:

Will it be Open Source?
We have all intentions of opening up the source as much as possible for scrutiny and help! What we really want people to understand however, is that Open Source in itself does not guarantee any privacy or safety. It sure helps with transparency, but technology by itself is not enough. The fundamental benefits of Heml.is will be the app together with our infrastructure, which is what really makes the system interesting and secure.

While it is true that being open source alone is no guarantee of security, they only want to open the source “as much as possible”, and they remain intent on offering a closed platform. History offers lessons about poor security here: Cryptocat, for instance, is open source and has had many security holes. Would these holes (which are critical to the security of the service) have been found if it were not open source? Arguably not, as they were discovered through review of the source code.

Is it really secure?
Yes and no. Nothing is ever 100% secure. There will not be any way for someone without access to your phone to read anything, but with access to your phone they can of course read the messages. Just as they can use any other app you have installed.

This suggests the decryption keys are stored unprotected on the device, meaning a rooted device permits trivial key retrieval. This can easily be avoided by encrypting the key with a strong, password-derived key. Every rooted user should be aware of this, but likely won’t be, as the makers appear unwilling to mention the downsides of their system.
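
To illustrate what “encrypting the key with a password-derived key” could look like in practice, here is a minimal sketch using only standard javax.crypto classes. This is my own illustration, not anything Heml.is has published; the class name and parameter choices are assumptions.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyWrapper {
    // Derive a wrapping key from the user's passphrase (PBKDF2), then
    // encrypt the long-term message key with it. Only the salt, IV, and
    // ciphertext hit the disk; the wrapping key exists only in memory.
    public static byte[] wrap(char[] passphrase, byte[] messageKey,
                              byte[] salt, byte[] iv) throws Exception {
        SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        SecretKey derived = kdf.generateSecret(
                new PBEKeySpec(passphrase, salt, 100000, 256));
        SecretKeySpec aesKey = new SecretKeySpec(derived.getEncoded(), "AES");
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new IvParameterSpec(iv));
        return cipher.doFinal(messageKey);
    }
}
```

With something like this in place, root access alone yields only ciphertext; an attacker still has to brute-force the passphrase.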

Your server only?
Yes! The way to make the system secure is that we can control the infrastructure. Distributing to other servers makes it impossible to give any guarantees about the security. We’ll have audits from trusted third parties on our platforms regularly, in cooperation with our community.

How is this ANY better than iMessage, or any other closed-source rival from a large corporation? If they control the infrastructure, and nobody can freely review the source code running on it, this is just as bad as iMessage, which is simply not secure. They mention third-party audits in cooperation with the community, but what community will there be when the software is closed source and entirely centralized?

Will you provide an API and/or allow third party clients?
At this point we don’t see how that would be possible without compromising the security, so for now the answer is no.

In security, the mantra is “trust nothing”. In particular, it is NEVER safe to trust the end user’s device or client software. The answer above says that they, the experts, cannot see a way to permit an API or third-party clients without compromising security. As everyone familiar with XDA knows, it is trivially easy to modify apps and their underlying code, as seen with tools like smali and apktool, as well as the Xposed Framework. If the security of the system depends on the official client behaving honestly, then anyone with those tools can break it.

It is clear that this Heml.is system is placing far too much trust in its clients. While alternative systems trust nobody at all (Bitcoin, for example), this is a step backwards, towards a dependent situation where users are forced to trust a closed network over which they have no control, and which can only run on the servers of the service provider.

Does Heml.is save every message on a server?
Messages will only be stored on our end until they have been delivered to the recipient. We might add support for optional expiry times to messages, in which case messages would be stored until they had been delivered or they expire. Whichever comes first.

Frankly, this answer actually made me laugh out loud, and get many a strange look. This is the kind of answer that public relations (PR) people dream of. The answer here is “yes”, and the people behind Heml.is even admit it. But they fail to recognize that with an untrusted central server, you are forced to go simply on THEIR WORD that they actually do remove these messages. What makes you certain they do remove them? And that they always will? Do you trust the NSA to remove what they store about you? Do you trust iMessage to? Why should you trust Heml.is to?

I honestly cannot understand why they have brought this kind of product to the public at this stage; they have proposed nothing in any way better than ANY other service on the market. There is as much guaranteed security in this system as there is in standing on a pedestal in a crowded city with a megaphone and shouting your correspondence to the world. This is a real shame, as I really hoped that Heml.is would be different. I expected more from its developers, who have reputations for being sensible and privacy/security conscious.

What is on offer here is a closed-box security system. Statistically speaking, any project of this magnitude will have at least one major flaw in its cryptographic implementation. And I can already predict that flaw, having no access to the software, or indeed any information beyond what is available to us all from their website. So I will make that prediction now, and in public. This system will, in my professional opinion, rely on trusting their centralized server for the identification and authentication of users to each other. Meaning that if the central server is compromised, or its operators are placed under duress, the server could be modified so that a request for Bob’s public key returns a public key under the control of the attacker. The only way to alleviate this is to have an entirely open, distributed, decentralized back-end, where the server never trusts a client not to lie, and the client never trusts the server.

It is not currently possible to achieve perfect security in this sense (of being assured the key you receive is from the person it claims to be), short of in-person verification of key fingerprints. But it is possible to at least not rely on a centralized server being trustworthy, when that server cannot be inspected or run independently by yourself. This was a major opportunity for a totally distributed network facilitating free and secure private communications, spoiled by its designers’ lack of experience with security.
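
For the curious, a key fingerprint is nothing more than a digest of the public key, which two people can compare over another channel. A minimal sketch (my own illustration, not anything from Heml.is):

```java
import java.security.MessageDigest;
import java.security.PublicKey;

public class Fingerprint {
    // Hash the encoded public key. If Alice and Bob compare this short
    // digest in person, a lying key server cannot substitute its own
    // key without the mismatch being obvious.
    public static String of(PublicKey key) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(key.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```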

I see no way that even the proposal can withstand scrutiny from those in the security field, such as myself. I would love for the guys behind it to get in touch and see if they are willing to address some of these issues. Perhaps it’s all a big set of misunderstandings, but from the wording here, this system is wholly insecure, and relies entirely upon their “service in the middle” to honestly relay keys to users. And if they’re going for “all-out convenience”, that will be the easiest way to build it. But there are plenty of changes they could make to improve the system, and if they are willing to discuss it, I am more than happy to make a few suggestions that would eliminate the issues with their central server and their “no third-party clients” stance (which effectively means they wouldn’t properly open-source the resulting application).

Peter, Leif or Linus, drop me an email (pulser _at_ xda-developers.com) and we can have a constructive chat about this; if you want to make a response here, add one. As it stands, though, this whole project comes across as an exercise in producing money from the “masses” for the promise of secure communications. And there’s nothing wrong with that. But at least make it use proper, well-designed, robust security principles, which will stand up to users running third-party clients, and which let users verify they are placing no trust in your server for key distribution. If you rely on trusting the user’s client to behave, or on your server never being compromised (and your staff never being placed under legal or physical threats), then one day the walls will crumble down. And it’s all achievable while being fully open-source, open-standard, and open-platform.

XDA:DevCon

At XDA, we get downright giddy when we see a heavily locked-down device unlocked and rooted. An unlocked bootloader and a rooted device open the door to many custom ROM options. Without root we have no recovery, no ROMs, no kernel optimizations, and very limited other development. Yet most of us are guilty of flashing whatever greater minds tell us to, without ever understanding what it does.

Justin Case, aka XDA Elite Recognized Developer jcase, is a mobile security researcher and the developer of many of these Android exploits. He is one of these great minds, and he will be presenting at XDA:DevCon 2013. Jcase will be discussing vulnerabilities and common security shortfalls in Android applications and firmware. He will also be walking the audience through identification of a vulnerability and development of an Android root exploit.

Being one of the great minds that understands Android security, jcase knows that the very same exploits we use to root our phones expose us and others to malicious activities such as spyware, bots, keyloggers, and other forms of malware. At XDA:DevCon, jcase will discuss past vulnerabilities in applications and firmware, as well as how they are mitigated today. He will teach the audience about some of the tools and methods used in identifying vulnerabilities. Finally, he will be speaking about application and firmware security, citing and explaining common mistakes, and how we can mitigate them. To end the presentation, jcase will publish and discuss a brand new root exploit for the LG Optimus series of phones.

Join us August 9 to 11 in Miami for XDA:DevCon 2013. Register to attend using this link for exclusive savings.

Hi App Lock

With the citizens of the United States debating the Orwellian state of citizen surveillance, security is a hot topic. Perhaps it is a good idea to protect yourself a bit more from spying. Grab a piece of tin foil and fashion a hat, lock your phone, and talk only in short syllables.

XDA Forum Member hiapp has an application to block access to your applications. In this video, XDA Developer TV Producer TK reviews Hi App Lock. TK shows off the application and gives his thoughts, so check out this app review.

DISCLAIMER: Neither XDA nor the app developer guarantees any protection from government surveillance with the use of this or any app / idea presented in this video.



In case you are someone like me who doesn’t follow the annual “update” of iOS, this is where Apple makes it more like Android, adopts features Android has had for years (e.g. the notification pull-down), and announces a few changes and “new” things the rest of the world has done for years.

Before I go any further, the previous sentence is intended as a joke, let’s not turn this into an iOS vs whatever war. This is about something that all platforms need to unite on: user data security.

Apple yesterday announced a new feature, iCloud Keychain, whereby your passwords will be synced between all your devices using the iCloud service. On the face of it, this ought to encourage users to use stronger passwords, as they no longer need to remember each one. Unfortunately, this “user friendly” system appears to have a few fundamental flaws.

Firstly, Apple encourages password re-use. Not in the strict sense of using the one password across different sites, rather in the sense of using one password for secure and nonsecure tasks—an iPhone user must enter his/her Apple Account/iCloud password to install or update an app. They must also enter this same iCloud password to restore their cloud device backup to a new phone. And, no doubt, will use this iCloud password to unlock the iCloud Keychain.

At this point, the security-inclined among us will be boiling up in a nerdrage at the thought of using the same password for a routine, insecure-environment task (installing an app a friend recommends), and then re-using that same password to unlock your entire digital life of passwords and credit card details. To quote from Apple, this service will store website logins, credit card numbers, WiFi networks, and account information. Aside from the fact that I sincerely doubt it stores WiFi networks, and rather stores WiFi passwords, this seems rather unsafe.

I know 3 of my friends’ iCloud passwords. Not through some devious social engineering scam, or through some super-sneaky shoulder surfing. No… They each volunteered it to me. For whatever reason they were showing me something on their phone, and Apple decided it was time to ask for their iCloud password again. I was showing one how to update their apps, and before I could hand the phone back to them to log in on, they had told me their iCloud password. AAARGH… Don’t Apple teach security to their users?

I am more than certain that plenty of iPhone (and other Apple product) users are not aware of the need to keep their iCloud password secure, as Apple shields them from the technical nuances to avoid spoiling the marketing image of everything being sleek and safe. A red warning reading “IF ANYONE FINDS OUT THIS PASSWORD, THEY WILL OWN YOUR ENTIRE LIFE FOREVER MORE” would be justifiable, but there is no such warning.

The product launch also introduced some technical words. “Oh, but it protects them with robust AES 256-bit encryption”, I hear you say, quoting from the announcement. And indeed, that is correct. But AES-256 encryption is not quite so robust when a legitimate user can obtain the key simply by knowing their iCloud password. Or when someone just resets your iCloud password. Do you really think Apple will design this system so securely that if a user forgets his/her password, they forever lose access? Or will they build in a user-friendly backdoor to let the user back into his/her account once they call support? I’ll let you figure that out…

Unfortunately, Apple are in a predicament here: They need users to use super-strong, hyper-complex passwords for their iCloud account, and to understand the technical reasons they must keep this password secure. The problem is that, like most Apple products, the system is designed for ease of use, and therefore the majority of users will pick a simple password.

Which means it will be nice and short so it is convenient for them to type in every time they install or update an app.

Which means it’s not secure.
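
To put rough numbers on that (my own back-of-the-envelope estimate, not Apple’s): an eight-character, all-lowercase password allows 26^8 ≈ 2 × 10^11 possibilities, roughly 38 bits of entropy, while an AES-256 key space is 2^256. The cipher is irrelevant when the key can be reached through the password: the system is only as strong as the weakest route to the key.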

Expect attacks on iCloud accounts to rise in volume and risk, particularly against less technical users. I anticipate a lot of phishing attacks telling Apple device users their account just needs a “little upgrade”, and to just click this link so one of their geniuses will sort it all out automatically. The user-friendly approach works up to a point, but it doesn’t work whatsoever when it comes to the harsh realities of security. This is not secure encryption, as it depends on a user who is constantly shielded from the technical intricacies of the process.


Android, as an operating system, is fairly unique in that it makes users aware of the permissions available to apps in a fairly transparent way, compared to BlackBerry or iOS, which issue granular prompts such as “Can Angry Birds access your location?” or “Can Instagram access your camera to take photos?” There is a somewhat subtle difference here: the rivals give the user a choice about each of these requests.

Jump over to Android where, once you have installed an app, it has free rein to use every permission you agreed to. While this doesn’t sound like an issue, let’s take a look at the Play Store, and at a nice, popular app (for better or for worse): Facebook.

The Facebook app has permissions to:

  • Create accounts and set passwords for those accounts, and add or remove accounts – this allows Facebook to store its account login using the AccountManager, and is good to see
  • Access your accurate (GPS) location and network location – this allows geo-tagging of posts and status updates, which some people might want, but others may be very much against. More on this later.
  • Full network access – Google lacks a way to give an app selective Internet access, so this is needed, much as it might be nice to limit the remote servers an app can communicate with
  • Directly call phone numbers – I confess to not researching this one; I am fairly sure Facebook lets you tap a phone icon to call someone. Would it really be so bad to just let you confirm you want to call, and avoid this permission? How long until we see profiles listing premium-rate numbers as their phone number?
  • Read your phone status and identity – this one is a privacy stealer. While it allows an app to tell if a call is active or not, it is also fairly invasive, allowing an app to obtain your phone number, IMEI, IMSI, device ID, and so on, and, if you are on a call, the phone number you are connected to!
  • Read from and write to USB storage devices – perhaps this is for caching of files, but maybe a tad excessive?
  • Install shortcuts on your homescreen without user interaction – not sure when this is used, but the permission is there, perhaps to self-promote Facebook services?
  • Read your detailed battery statistics – as the permission description notes, this gives low-level battery use data, and can allow the app to work out what other applications you use.
  • See what other apps are running – likely for the Facebook Home feature; again, this allows Facebook to see what apps are running
  • Take photos and videos at any time without prompting – rather concerning, as this allows the Facebook app to take photos or videos whenever it wants, without prompting or alerting you
  • Draw over other applications – likely for the Chat Heads feature of Messenger, although why that’s not a permission in the Messenger app is a good question

Getting tired and out of breath? It’s not over yet! Facebook can also:

  • Write to your call log – why? Just why? This allows for call log erasing and writing
  • Read from your call log – and thus see all the calls you have made, when, for how long, and to whom
  • Read your contacts – while useful if you sync Facebook contacts, many people don’t want Facebook to have free rein over their contacts
  • Write to your contacts – pretty much as above
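
You don’t have to take the Play Store listing’s word for any of this; any installed app’s requested permissions can be enumerated programmatically. A minimal sketch using the standard PackageManager API (illustrative):

```java
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;

public class PermissionAudit {
    // Print every permission an installed package requests - handy for
    // auditing apps like the Facebook example above.
    public static void dump(PackageManager pm, String packageName)
            throws PackageManager.NameNotFoundException {
        PackageInfo info = pm.getPackageInfo(packageName,
                PackageManager.GET_PERMISSIONS);
        if (info.requestedPermissions != null) {
            for (String permission : info.requestedPermissions) {
                System.out.println(permission);
            }
        }
    }
}
```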

What is perhaps most disconcerting is that while Google openly acknowledges the risks in each permission (I suggest you read the detailed descriptions of some of the permissions on a Play Store listing), the company takes no steps to help you act on this. Thus, the entire Android ecosystem is built around you trusting the developer to play fair, and not do anything dodgy.

Unfortunately. This. Doesn’t. Happen. It really seems clear that many app developers just DO NOT UNDERSTAND SECURITY. Full stop.

I might be unique in my recommendation to trust nobody, not even yourself, but I firmly believe it is warranted in this day and age, given recent information revealing the extent of the mass surveillance that is ongoing. For this reason, I suggest the Android permissions system is totally flawed in relying on developers not to abuse permissions, and not to request excessive ones. How many torch apps on Android hold more than the camera permission (needed to enable the camera flash)? I’d suggest most do; feel free to take a look!

You’d think the Android community would rally against such behaviour, but it’s reached a point where it is acceptable for developers to declare a need for gratuitously excessive permissions in order to use their apps. What happened to user choice? I was then pointed towards a post on G+ by Steve Kondik (XDA Recognized Developer cyanogen), which I read with much dismay. While I do not use G+ (a closed platform, requiring far too much data to be disclosed to Google), I would suggest, with respect, that the need for user privacy and security MUST come first, as it’s clear app developers cannot “do” security.

Perhaps if Google introduced zero tolerance for moronic errors in security (plaintext passwords, gathering contacts data, transmitting device IDs that have not been suitably processed with a cryptographic hash, etc.), it might offer an incentive to consider security. Given that many users (wrongly) reuse passwords between services, the sending of plaintext passwords should be sufficient, in this author’s opinion, to justify immediate removal of all of a developer’s apps from the Play Store, forever.
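
As an illustration of the device ID point: rather than transmitting the raw identifier, an app can send a salted hash, so the server can recognise a returning device without ever learning the hardware ID. A minimal sketch (the salt constant is hypothetical; a real app would use its own):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DeviceId {
    // Hypothetical app-specific salt, so the same device hashes
    // differently for different apps.
    private static final String APP_SALT = "example-app-specific-salt";

    // Return a salted SHA-256 digest of the raw identifier (e.g. IMEI),
    // so the hardware ID itself never leaves the device.
    public static String anonymised(String rawId) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(
                (APP_SALT + rawId).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```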

Some people just don’t know how to do security, and for them, I sigh. Users deserve security and privacy, and unless you go and look at the OpenPDroid project on XDA (which I strongly suggest you check out), you are pretty much being abandoned, even by the leader of CyanogenMod. While I appreciate his concerns for app developers, it is simply inexcusable not to look into fixing the glaring hole that is contacts access. This is 2013, the era of social engineering, and I cannot selectively choose which apps see which contacts in my address book? REALLY?

Something needs to happen here, before people wake up and smell the coffee, and realize this isn’t sustainable. It’s time users became more aware about what apps are doing, and the extent of data mining that is ongoing. It’s your data, and it should be entirely your choice who gets it.

You shouldn’t have to avoid an app because you don’t like the look of its permissions; you should be able (whether via a stock Google feature or a custom ROM feature) to selectively decline an app access to your data. And this should be done gracefully, either by providing empty data (for contacts and similar) or null data (i.e. requesting the phone number or IMEI should return the same response as on a tablet lacking these identifiers).
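
A minimal sketch of what such graceful denial could look like, in the spirit of projects like OpenPDroid (entirely illustrative; this is not code from any of those projects):

```java
import android.database.Cursor;
import android.database.MatrixCursor;

public class GracefulDenial {
    // If the user has revoked contacts access for this app, hand back
    // an empty but well-formed result set instead of throwing a
    // SecurityException, so the app keeps working and learns nothing.
    public static Cursor queryContacts(boolean allowed, Cursor real,
                                       String[] projection) {
        return allowed ? real : new MatrixCursor(projection);
    }
}
```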

Is it right to deny your users this choice, to make life “easier” for app developers (arguably, to allow them to capture user data more easily)? I argue it’s not, and it’s time the Android community united to put an end to apps having free rein over YOUR data. If this concerns you, why not check out the aforementioned OpenPDroid (and similar) projects on XDA, and see if you can help out, test, or otherwise contribute to the cause?


The UK newspaper The Guardian revealed today that US CDMA telecommunications provider Verizon is secretly collecting and disclosing the telephone records of a huge number of subscribers (likely in the order of tens of millions of Americans) to the USA’s National Security Agency (NSA), often cynically referred to as “Never Say Anything”. This classified, top-secret court order, whose classification does not expire until April 2038, compels Verizon to provide, and continue to provide on an ongoing basis:

[...] an electronic copy of the following tangible things:

All call detail records or “telephony metadata” created by Verizon for communications

(i) between the United States and abroad; or

(ii) wholly within the United States, including local telephone calls

As if to somewhat diminish this, the order goes on to state that it does not require Verizon to provide details of calls that start and end outside of the United States. This is little comfort, however, for any subscriber using the Verizon network, as the order goes on to detail the definition of the metadata requested. This includes the source and end-point telephone numbers, the IMSI and IMEI numbers, and the trunk identifier, among other things. The significance is that the presence of both the IMEI and IMSI numbers means Verizon is being forced to disclose information that identifies the individual devices and handsets in use (the IMEI permits identification of the handset model, as well as of the individual phone).

Quite why such top-secret blanket surveillance is required is obviously the top question right now. And while the NSA claims this is the equivalent to looking at a traditional letter’s envelope, it seems a somewhat tenuous link since letters do not contain an unchangeable identifier on them (IMEI) that can be tied back to you at the point of purchase.

While the NSA’s aims specifically exclude it from carrying out “spying” or surveillance on non-foreign targets, this is somewhat concerning, no?

Source: The Guardian

OwnCloud

Welcome to Part 2 of our Say Sayonara to Google series, raising awareness of the options for using Android without Google services. Today, we look at alternative “cloud” services that are open source and can be installed on your own server. While there are no doubt many of these available, one that has gained significant attention recently is OwnCloud. OwnCloud is developed totally in the open (you can even clone and run directly from its GitHub repositories if you so desire, though this is obviously not recommended on a production system), in contrast to the “pseudo-open” development carried out on AOSP by Google.

What is OwnCloud About?

OwnCloud aims to offer an extendable online storage system including synchronization, to allow for contacts, calendars, files and bookmarks to be synchronized across multiple devices while retaining control of your data in the process. When using OwnCloud, all of your data is stored on a system within your control, with an Open Source backend (as opposed to a closed system such as Google).

How can I get Started?

You can set up and run your own OwnCloud instance for free on your own existing server by following the instructions on the OwnCloud website. It is strongly advisable to use an SSL certificate with this, though, which may come at a small cost. Additionally, if you trust the third parties, there are a handful of providers offering free hosted OwnCloud installations. Obviously, if you’re doing this at all, you likely don’t “trust” Google with your data, and the same caution applies to any other third party, so I’d suggest you consider these hosted services merely for testing.

OK, so Contact Sync?

Yep. Unfortunately, though, CardDAV isn’t natively supported in Android. It might be supported in your third-party variant of Android (I’m sure I remember seeing this in an older version of TouchWiz), but most likely you’ll need a third-party client to sync your contacts. To get this application (which is free), you’ll unfortunately need to use the Play Store, as the developer has only published the free version there. The free beta version is available here, although the developer has stated he will open-source the application once it is ready for a 1.0 release and the code has been tidied up.

Presuming you have set up OwnCloud (which is fairly straightforward if you have your own server, and which I believe to be outwith the scope of this article, unless enough readers want a guide), you can configure the CardDAV sync client fairly simply by installing the above-linked application and entering the URL of your OwnCloud server (hopefully you are using SSL!), followed by “remote.php/carddav/” (for example, https://cloud.example.com/remote.php/carddav/, with an illustrative hostname; see the developer’s wiki for more details of syncing with OwnCloud).

Once this is done, you can configure syncing. I suggest you disable the “one-way only” sync option, although be aware of the risks of doing this (i.e. if something goes wrong on your phone, it could overwrite server contacts). Presuming you have a backup strategy in place (which you should already have), you should be fine. By enabling two-way contact sync, you should have full contact syncing, like with Google’s own contacts sync service.

Unfortunately, it appears HTC is being deliberately obstructive about third-party contact syncing, so you may have issues on the HTC One using Sense UI. Let us know if you do manage to get it working, though. Apparently the bug is a “feature”… Good one, HTC. One more reason to avoid the One (pun intended).

What now?

Your phone should now upload all your existing contacts to your CardDAV server. Alternatively, if you are setting up your phone from scratch (recommended) to purge Google from it, you can export your Google contacts as a VCF file and import them into OwnCloud’s web interface.
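
For reference, a VCF file is just a series of plain-text vCard entries like this illustrative one (all details made up):

```
BEGIN:VCARD
VERSION:3.0
N:Lovelace;Ada;;;
FN:Ada Lovelace
TEL;TYPE=CELL:+44 20 7946 0000
EMAIL;TYPE=HOME:ada@example.org
END:VCARD
```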

At this point, it’s worth ensuring that you are no longer syncing contacts with Google by going to the Accounts and Sync menu and disabling contact sync for your Google accounts. If you wish to erase your contacts from Google, head over to Gmail in your browser and delete the contacts from the web interface.

Congratulations, you are now syncing your contacts between devices using only your own server. Unfortunately, we do have to use one non-open-source application at present. Hopefully, once Marten Gajda completes his application, it will be open-sourced, offering Android users a way to sync their contacts using entirely open software and server systems.


How do you know if your handset is infected with malware? You might not be able to tell until after it’s triggered. And this particular trigger method is very interesting. You know how Google Now listens for you to say the word “Google” to initiate a voice search? Malware might know the same trick. An infected device could be just waiting to hear the right thing before taking action.

This white paper (PDF) from a group of student researchers envisions an “annoyance attack” in a movie theater: infected phones wait for the sound of one of the movie trailers, at which point they take themselves off silent mode and start ringing. But the traditional tricks used by malware, like botnet-initiated denial-of-service attacks, still ring true.

If you’re not excited about reading research papers, take a look at the article Darlene Storm published on the subject. She references some examples of real-world malware apps and the mayhem they caused. In this research, the thing to focus on is the trigger mechanism. The authors point out that security measures are getting better all the time, making it harder for malicious software to phone home or receive commands from a central server without being detected. By using the array of sensors on a modern smartphone, malware can instead be activated in a multitude of different ways—audio, video (camera or light sensor), vibration, or magnetic—without raising the hackles of security apps. Of course, the answer is to make sure the malware doesn’t make it onto your device in the first place.

Say Sayonara to Google Apps

June 3, 2013


What is freedom? This is a big question being asked by people around the world over the past few years. Many of us believe (and often rightly so) that we are fairly free, and arguably this is correct in many countries throughout the world. You have political freedoms and many more besides. But do you have electronic freedom?

For almost everyone reading this article, it is likely you have a Google Account, which means you have a Gmail account. It is tied deeply into Android via the Google Apps package of proprietary applications, which (unlike the core Android operating system) are not open source and rely on closed back-end systems. The problems with such closed systems are:

  • The authentication process (i.e. the process of you showing you are who you say you are) is not transparent. While you know you type in your username and password for Gmail, and possibly also enter a two-factor authentication code, you have no idea how these are stored and verified. Does Google simply check your password against a plain text representation? Unless you use an open back-end, nobody can say for sure. You would be relying on Google to tell the truth about how it worked, as you can’t verify it.
  • If someone were to compromise this back-end authentication system, you would be none the wiser. It is fairly certain that Google does not encrypt your emails with a per-user key derived from your password, since it also offers a password reset system (which defeats most such security anyway).
  • If someone at Google takes a dislike towards you, they could disable your access to the closed system, and you would be unable to really do anything about it since nobody else can replicate the service and offer you it under alternative terms and conditions. By extension, if Google changes their terms and conditions, you are able to leave, but will be unable to use any of the service without agreeing to the new terms and conditions.

This last part is significant. Even if you decide that you can trust Google (and I remind everyone of the flaws of the concept of trust—it is much wiser to trust no-one), it can change its legal policies such that it is no longer effectively trustworthy. Google’s own terms of service are a long read, and definitely worth taking a look at. Try to decipher them for yourself, and figure out what applies to which services.

At this point, it’s worth being clear: this is not meant to be a “Google is evil” article. Google does make efforts to care about user privacy; take a look at your Google Dashboard. The company is quite transparent about the information retained. The trouble is that there’s no easy way for you to say, “No. I don’t want you to store this.” Google is a company that makes money from knowing everything it can; it’s not in the company’s interest to encourage you to make this more difficult! And while it is commendable that Google lets you see what it knows about you, the company doesn’t really help you manage that information, such as removing Android devices (including their IMEIs and so on) that you no longer want listed as associated with you.

Over the course of this series of articles, we’ll look at ways you can move away from being so heavily reliant upon Google services. At all times, we’ll try to use open source solutions, which are free to use and modify. As a bonus for security, open source code can be scrutinized by anyone who wants to take a look at it. Per the popular open source advocates’ expression, “many eyes make all bugs shallow”, which tends to improve security.

In the upcoming first article of the series, we’ll take a look at how to reduce our reliance on the Google Play Store and why we’d want to do that.

Hi App Lock

Let’s be honest: our phones are probably some of the most personal objects we have. From the memories captured with the camera to the hour-long text conversations, from the bank and credit card details stored within apps to even our calendars, the consequences of a lost phone, or a phone in a stranger’s hands, could be devastating. So in addition to the password-protected lock screen, you may want to add an extra layer of security with Hi App Lock.

Developed by XDA Forum Member hiapp, Hi App Lock allows you to lock any app on your phone with a four- to eight-digit PIN. Rather than requiring you to navigate to a specific location to view protected files, as some protection apps do, Hi App Lock simply prompts for a PIN every time you want to open a protected app, a more efficient way of guarding information from intruders. This method also gives you the flexibility to protect any sort of data on your phone, such as emails, bank details, photos, text messages, and so forth.

Force-closing the app through the settings menu only closes it momentarily, upon which Hi App Lock instantly relaunches itself before anyone has the chance to navigate to a protected app. Other important functions include locking incoming and outgoing calls, preventing the uninstallation of apps, a widget for quick lock and unlock, and a security question in the event that you forget your PIN. Of course, rooted devices with ADB debugging left enabled, and those without password-protected recoveries or encrypted /data storage, can still find themselves at risk, so make sure you turn off ADB and encrypt your device if you’re serious about securing it.

Hi App Lock is compatible with any device running Android 2.1 or newer and is free on the Play Store. If this has gotten your attention, make sure to check out the application thread for more information.


After our earlier article warning users to uninstall the Sky apps from their devices, it’s time to take a look at the technical significance of this attack. The attackers have managed to do two key things here, each of which should be extremely difficult if the Play Store update system is to be considered secure:

  • Gained access to the Play Store Developer Console of Sky, presumably through gaining access to the associated Google Account
  • Obtained access to, or managed to otherwise generate or reproduce, the private RSA keys used to sign the Sky Android app packages

The former is obviously important to security, since without access to the Developer Console, it is not possible to push out an update to an existing app. While a malicious user could obviously publish his own app, he would not be able to push an update to an existing app already installed via the Play Store unless he can do so using the account of the developer who originally published the application.

The latter is an equally (if not more) important security measure: even if an attacker gains access to your Developer Console, they cannot push an update to an existing app unless it is signed with the same keys as before. This check is also enforced on each individual Android device, meaning that even if there were a bug in the Play Store’s implementation of this check, your own device would reject the update! All bets are off, though, if the private signing key is compromised or accessed.
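
That on-device check boils down to comparing signing certificates. Here is a minimal sketch of the same kind of comparison using the standard PackageManager API (illustrative; not Google’s actual implementation):

```java
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import java.security.MessageDigest;

public class SignatureCheck {
    // Compare an installed package's signing certificate against a
    // known-good SHA-256 digest - conceptually what Android does
    // before accepting an update over an existing app.
    public static boolean matches(PackageManager pm, String packageName,
                                  byte[] expectedSha256) throws Exception {
        PackageInfo info = pm.getPackageInfo(packageName,
                PackageManager.GET_SIGNATURES);
        for (Signature sig : info.signatures) {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(sig.toByteArray());
            if (MessageDigest.isEqual(digest, expectedSha256)) return true;
        }
        return false;
    }
}
```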

In the case of the current Sky attack, it seems likely that both the Developer Account and the private signing keys were compromised. At this point, the safest option would arguably be for Google to use its remote uninstall trigger on these packages, if there is any indication the packages themselves were compromised. Sky will no doubt resist this, as they would not want to see their apps disappear from users’ devices. Unfortunately for them, it is too late for that: users need to uninstall the app now, as Sky can no longer continue to use these keys.

And herein lies the ultimate problem in the Android security chain (and indeed in most certificate-based security systems): there is no mechanism for effective, wide-reaching key revocation. If your Android signing keys are compromised, the trust chain ends. There is no way for you to revoke the compromised key so that clients stop trusting app updates signed with it. There is also (less important for security, but more important from the developer’s perspective) no way to securely supersede these keys with new ones, while ensuring an attacker cannot replace the keys with his own.

What does this mean for developers?

Take all reasonable steps to protect your Google Play Store Developer account:

  • Use two-factor authentication on the account.
  • Enable a password or PIN lock on any phone that contains the authenticator app. If you enter a backup phone number, consider the security risk of an attacker socially engineering your telephone provider (or compromising your account with them) into issuing a new SIM, allowing them to obtain a one-time login token.
  • Use a very, very, very, very, very secure password for your Developer account, and do not use this password anywhere else. Do not use this Google account for anything else. Log into it only from your own computer, only over WPA2 (or better) encrypted WiFi that you control. If you can type this password in under a minute, it’s not secure enough. It should also be random, and random is not your partner’s name followed by the year they were born. Unless their name is “r$kGmn9d4Fl9&*sEm.Xs2Fl0_3fGjdk” and they were born in year “hJfMn?32VwmndkD2lsk34Rojks83”. (See the sketch below for what random actually means.)
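
A minimal sketch of generating an actually-random password from a cryptographically secure source (illustrative; adjust the alphabet and length to taste):

```java
import java.security.SecureRandom;

public class Passwords {
    private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
            + "0123456789!$%&*?_-";

    // Sample characters from a CSPRNG - the opposite of
    // "partner's name plus birth year".
    public static String random(int length) {
        SecureRandom rng = new SecureRandom();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }
}
```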

Take immense care of your private code signing key as well:

  • It is called private for a reason. Users who install your applications are trusting you to keep this key safe. Do that, and do so with your life. You should plan on being dead for many years before anyone can gain access to this. Many years would likely be a number greater than the life of the universe. Or at least the expiry date on the certificate.
  • Do not decide to “just backup the key to Dropbox for safekeeping” (or indeed any other cloud or remote storage system).
  • Don’t give anyone else access to the key. If you work in a team, can you perhaps operate a system whereby only one person signs the final updates? If it’s really necessary, then share the key, and ensure the other person takes equal levels of precautions.
  • Attackers will always attack the weakest point in your system, and will password-reset your Dropbox account if it gets them access to the shared folder where you stored your private key (and password to decrypt the key in the corresponding readme file, which you knew to never do, but did anyway), or snoop on your emails if you ever attach the key.
  • Protect your key with a very strong and random password (remember the lecture before about what random means?) – do not use this password anywhere else. Do not store this password somewhere an attacker can gain access to. Do not store it with or near your key.
  • Store your private key on an encrypted USB flash drive, and disconnect the drive from your computer when not in use. Then put this drive in a safe when not in use. Store a second copy of the key in a safe deposit box in your bank. This will obviously be heavily encrypted, using a long, secure, random key.

Finally though, what lessons should we all learn (and perhaps Google start to ponder)?

  • There is currently no way to revoke a compromised application signing key. While arguably this is because anyone can sign an app using their own key, and install it on Android (thus meaning that revoking a key is of limited use), this isn’t the case as Google pushes forward with trying to force automated updates upon unsuspecting users. Automated updates are a huge security risk until there is a rapid and effective key revocation system available to developers.
  • There is no way for developers to recover from a compromised signing key. Perhaps Google ought to review the signing system so that developers create a CA key, which is then used to cross-sign other keys (such as their application signing key); then, if an application key is compromised, they can generate a new one and sign it with their CA key. (This relies upon the developer understanding the scheme, and realising he must guard the CA key with both his and his family’s lives, and the application signing key “simply” with his own life.)
  • The Sky apps in question had fairly generous permissions on the devices they were installed upon. Perhaps developers should stop making their apps attractive targets, and stop bestowing themselves with such wide-reaching permissions on our devices. There is no reason for the majority of apps to use any permissions whatsoever (perhaps aside from access to the SD card, a permission I argue should be sandboxed to an app-specific area anyway).

Hopefully, Google will make use of its remote uninstall ability to remove the app from user devices, and also let users know somehow (say, an email to their Gmail accounts) what happened, and that Sky did not look after its key properly. This is a major embarrassment for Sky, and they should hang their heads firmly in shame for not taking sufficient precautions to ensure that everyone with access to their signing key was suitably competent to prevent it falling into the wrong hands.

At the end of the day, it is the end user whose security is affected by the failings of the developer. And for this reason, security lapses such as these are unforgivable.


Today is Sunday, 26th May, and across the world, many people have woken up following a leisurely lie-in to the small notification of an updated app being available. Nothing unusual there, or so you’d think.

The only difference is that today, some of these app updates may well have been malicious updates pushed to some of Sky UK’s official Android apps. As reported by PC Pro and Android Police, the Sky Go, Sky+, Sky WiFi, and Sky News apps all appear to have been targeted in an attack that involved updates being pushed to the Google Play Store for these applications.

SecDroid for Android Security

Personal information security has been a prime concern for computer users since nearly the beginning of computing itself. Malicious users find exploits and develop viruses, trojans, and rootkits to gain control of our devices and use them to their own advantage. This not only costs us in the form of degraded performance and potential data usage charges, but can also have more dire consequences, such as our financial information being sniffed and used to withdraw money from our accounts, or identity theft that could land us in serious trouble with law enforcement.

Previously, these issues were major concerns primarily on desktop computers, but with the massive popularity of mobile devices, such malicious individuals and groups have now started targeting popular mobile platforms. While Google has included better security measures in the latest versions of Android, and several antivirus vendors have developed solutions to rid our devices of such malware, it’s always a good idea to secure our devices as much as possible to prevent any security breach from happening in the first place. To help you with this, XDA Senior Member x942 has developed SecDroid, an Android app that hardens your device against several intrusion methods.

SecDroid achieves this by disabling several services on your device that most users will not need running all the time. These include SSH, SSHD, Telnet, nc (netcat), and ping, to keep others from gaining access to your device via a remote terminal. SecDroid also disables the package manager so that no apps can be installed remotely on your device (you can still install them from the Play Store or from APK files directly on the device itself). Lastly, it allows you to disable ADBD (the ADB service running on the device that allows you to connect to it through a command line from a remote computer) until the next reboot.

SecDroid is currently in active development, and this is its first alpha release. The developer has also released the source code of SecDroid under the GPLv2 license. You can find more details and the download link in the forum thread.


