Accessing (and hacking) Windows Phone registry

Monday, December 30, 2013

Despite Microsoft’s efforts to secure Windows Phone 8 devices against community hacks, accessing the device’s registry is still possible, with some limitations. Writing to the registry is denied by default, but read permissions are quite lax.

First approach

When trying to read the registry, a first approach might be to invoke a low-level Win32 API, such as the functions declared in winreg.h. However, P/Invoke (DllImport) isn’t available on Windows Phone, so we would have to implement the access from scratch. Needless to say, this breaks Microsoft’s requirements for submitting such an application to the Store.

Doing some research shows that much work has already been done and is publicly available for download in the XDA Developers forum. There is a project called "Native Access" by GoodDayToDie that does exactly this. However, compiling and using it is not straightforward, so we’ll give it a go and show how to do it.


The project’s source code can be downloaded from the thread in the XDA Developers forum. To get the referenced libraries needed to build the project, you have to convert the phone’s DLLs into .lib format (using, for example, dll2lib). The needed libraries live in the system32 directory, but using the emulator’s libraries will not work on an actual phone, so you will need an image from a real device. There are ISO files available "out there", so you can get and extract them easily.

Once done, you need to place the extracted .LIBs in the Libraries folder of the WP8 SDK (typically in Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\Libraries).

Problems compiling

However, if you have trouble compiling the code, there’s a shortcut: reference the .winmd file from an existing project that uses Native Access (WebAccess, for example). Just extract the XAP’s contents (a XAP is simply a ZIP file) and search for "Registry.dll", which is a precompiled version of the project.

Now we are ready to use the library and write code to search for interesting keys in the registry. The class provides all of the necessary methods to access the registry: ReadDWORD, ReadString, ReadMultiString, ReadBinary, ReadQWORD, GetHKey, GetSubKeyNames, GetValues.

A real example

These are the codes used to address the different registry hives:
  • 80000000 -> HKEY_CLASSES_ROOT
  • 80000001 -> HKEY_CURRENT_USER
  • 80000002 -> HKEY_LOCAL_MACHINE
  • 80000003 -> HKEY_USERS
  • 80000005 -> HKEY_CURRENT_CONFIG
Example code to access registry in Windows Phone 8
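As a quick reference, the hive handles above are the standard Win32 predefined-key values, and can be expressed as a small mapping. This is a Python sketch for illustration only; on the phone itself you would pass the numeric value to the Native Access methods listed earlier:

```python
# Predefined registry hive handles (same numeric values as on desktop Windows).
HIVES = {
    0x80000000: "HKEY_CLASSES_ROOT",
    0x80000001: "HKEY_CURRENT_USER",
    0x80000002: "HKEY_LOCAL_MACHINE",
    0x80000003: "HKEY_USERS",
    0x80000005: "HKEY_CURRENT_CONFIG",
}

def hive_name(handle: int) -> str:
    """Return the symbolic name for a hive handle, e.g. 0x80000002 -> HKLM."""
    return HIVES.get(handle, "UNKNOWN")

print(hive_name(0x80000002))  # -> HKEY_LOCAL_MACHINE
```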
For some registry locations that are highly sensitive, or for writing or creating keys, you need to add special Capabilities to your app. This requires an interop-unlock, which so far has been achieved only on Samsung devices, by taking advantage of Samsung’s "Diagnosis" tool.

Tero de la Rosa

FOCA Final Version, the ultimate FOCA

Monday, December 16, 2013

You all know FOCA. Over the years it was very well received and became quite popular. Eleven Paths has retired FOCA to turn it into a professional service, FaasT. But FOCA did not die: FOCA Pro is now a portable version called FOCA Final Version that you can download for free.

FOCA Free vs. FOCA Pro
There used to be a FOCA Free and a FOCA Pro. The Pro version included some extra features, such as reporting, analysis of error messages in response pages, fuzzing of URLs searching for data type conversion errors in PHP, syntax errors in SQL/LDAP queries and integer overflow errors, and more parallelism in its core. It had no ads either.

But now FOCA merges into just one version, based on FOCA Pro, but for free. So here it is: FOCA Final Version. This final version includes all the available plugins and the tools for you to create your own plugins. Some bugs reported by users have been fixed as well.

If you want to know how it works and some secrets, you can buy this new book about pentesting using FOCA.

FOCA Final Version
FOCA is free to download, with no registration, from the Eleven Paths Labs page.

Hope you enjoy it.

Latch, new ElevenPaths' service

Thursday, December 12, 2013

During the time we've been working at ElevenPaths we've experienced many kinds of events internally, but one of the most exciting and long-awaited is the birth of Latch. It's a technology of our own that has been invented, patented and developed by our own team... and, at last, shown to the world. We're proud of the work that has been done and we needed to talk about it. Finally we can. This is Latch.

We believe users do not use their digital services (online banking, email, social networks...) 24 hours a day. Why, then, allow an attacker to try to access them at any time? Latch is a technology that gives the user full control of his online identity, as well as better security for the service providers.

Latch, take control of when it's possible to access your digital services.
Passwords, the oldest authentication system, are a security problem we have to deal with every day. Second authentication factors, biometrics, password managers... We haven't yet found the ultimate solution that frees the user from depending on simple passwords, reusing them, or writing them down on paper. Latch isn't that solution, either. Even advanced users with good password practices are exposed to having their passwords stolen: malware focused on credential theft has been common for a long time, and even the most cautious users may have their passwords stolen if a third party's database is hacked and exposed. Latch isn't a solution for this problem, either.
Latch doesn't replace passwords, but complements them and makes any authentication system stronger.

Latch's approach is different. Preventing authentication credentials from ending up in the wrong hands is very difficult. However, it is possible for users to take control of their digital services and reduce the time they are exposed to attacks. "Turn off" your access to email, credit cards, online transactions... when they're not being used. Block them even when the passwords are known. Latch lets the user decide when his accounts or certain services can be used; not the provider and, of course, not the attacker.
Latch makes it possible to control your services even if an attacker has stolen the user's password, credit card or any other credential for a service that needs authentication, making it impossible for the attacker to use the stolen data in that service outside a defined time interval. In other words, by just pushing a button, it's possible to make the authentication credentials for any service valid only at the very moment the user needs to enter them into the system.

Latch's scheme
Even though we've talked about passwords, Latch is actually a service that protects processes the service provider defines for interacting with the end user. The background and uses these processes may have are independent of the protection layer Latch provides.
The main idea behind this protection is limiting the exposure window an attacker has for taking advantage of any of these processes. The user decides whether his service accounts are turned ON or OFF, and can even specify which actions may be taken from those services. This makes it possible to shrink the window of opportunity for an attack, associating an external control with every operation. The service provider asks Latch for the user-defined status of a certain operation at a given time.
Latch's general work scheme
In this figure, a client that tries to execute an operation on a service provider obtains confirmation of whether the operation has been allowed or denied.

The configuration of an operation's state is made through an alternative channel (considered more "secure" than the regular device), so any attempt to access an operation blocked by the user may be flagged as an anomaly. Such an anomaly could imply that whoever is trying to access the blocked operation is not really who he claims to be, and a possible fraud attempt is identified.

How it works in practice

The user only needs a smartphone to "activate" or "deactivate" the services paired with Latch. To do so, he or she needs to:
  • Create a Latch user account. This account is the one the user employs to configure the state of the operations (setting his service accounts to ON or OFF).
  • Pair the usual account with the service provider (an email account or a blog, for example) that the user wants to control. This step allows Latch to synchronize with the service provider and return the adequate responses (defined by the user) depending on which operation is attempted. The service provider must be compatible with Latch, of course. This lets users decide whether to use Latch or not: Latch is offered, never imposed.
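The pairing-and-check flow described above can be sketched with a toy model. All names here are hypothetical; this is not the real Latch SDK or its API, just an illustration of the ON/OFF idea:

```python
class ToyLatch:
    """Toy model of the Latch idea: the user flips operations ON/OFF,
    and the service provider checks the status before authenticating."""

    def __init__(self):
        self.status = {}  # (account_id, operation) -> "on" / "off"

    def pair(self, account_id, operation):
        # Pairing defaults the operation to ON (service usable).
        self.status[(account_id, operation)] = "on"

    def set_status(self, account_id, operation, value):
        # Called from the user's smartphone app, the alternative channel.
        self.status[(account_id, operation)] = value

    def is_allowed(self, account_id, operation):
        # The service provider denies the operation when it is latched OFF,
        # even if the attacker presents a valid password.
        return self.status.get((account_id, operation)) == "on"

latch = ToyLatch()
latch.pair("alice", "login")
latch.set_status("alice", "login", "off")   # the user "turns off" login
print(latch.is_allowed("alice", "login"))   # -> False
```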
Latch for the service providers

Latch allows users to configure access to their services, and to accomplish this, service providers need to integrate Latch in their systems. We've programmed SDKs in many different languages (.NET, PHP, ASP...) and we've created plugins for existing platforms such as WordPress, PrestaShop, Drupal and Joomla. Webpages using these platforms can offer Latch to their users quite easily... so users who decide to use the service can take advantage of Latch just as easily.

The integration is easy and straightforward, giving the service provider a great opportunity to improve the security offered to its users, and therefore, their online identity.

And that is not all...

Latch offers more ways to protect users, their credentials, online services and online identities. We will introduce them soon. Stay tuned.

EmetRules: The tool to create "Pin Rules" in EMET

Friday, December 6, 2013

EMET, the Microsoft tool, introduced in its 4.0 version the possibility of pinning root certificates to domains, in Internet Explorer only. Although useful and necessary, the ability to associate domains with certificates does not seem to be much used nowadays. It may be hard to set up and use... we have tried to fix that with EmetRules.

To pin a domain with EMET it is necessary to:
  • Check the certificate for that domain
  • Check its root certificate
  • Check its thumbprint
  • Create the rule, locating the certificate in the store
  • Pin the domain to its rule

Steps are summarized in this figure:

It is quite a tedious process, much more so if your target is to pin a large number of domains at once. At Eleven Paths we have studied how EMET works and created EmetRules, a little command-line tool that completes all the work in just one step, and supports batch operation. It will connect to the domain or list indicated, visit port 443, extract the SubjectKey from the root certificate, validate the certificate chain, create the rule in EMET and pin it to the domain. All in one step.
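One of those steps, checking a certificate's thumbprint, is conceptually simple: the conventional thumbprint is just the SHA-1 hash of the DER-encoded certificate. A minimal sketch, using dummy bytes in place of a real certificate:

```python
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    """A certificate 'thumbprint' (as shown by Windows) is the SHA-1 hash
    of the DER encoding, conventionally rendered as uppercase hex."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Dummy stand-in for real DER-encoded certificate bytes.
fake_der = b"\x30\x82\x01\x0a" + b"\x00" * 16
print(thumbprint(fake_der))  # 40 hex characters
```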

EmetRules, by ElevenPaths
The way it works is simple. The tool needs a list of domains, and it will create the corresponding XML file, ready to be imported into EMET, even from the tool itself (command line).

Some options are:

  • "urls.txt" is a file containing the domains, separated by "\n". Domains may include "www" or not. If not, the tool will try both, unless stated otherwise with the "d" option (see below).

  • "output.xml" specifies the path and filename of the output file where the XML configuration EMET needs will be created. If it already exists, the program will ask whether it should overwrite it, unless told otherwise with the "-s" option (see below).

  • t|timeout=X sets the timeout in milliseconds for each request. Between 500 and 1000 is recommended, but it depends on the threads used. 0 (the default) means no timeout; in that case, the program will keep trying the connection until it expires.
  • "s": silent mode. No output is generated and no questions are asked. Once finished, it will not ask whether you wish to import the generated XML into EMET.
  • "e": this option generates a TXT file named "error.txt" listing the domains that produced any errors during connection. This list may be used again as input for the program.
  • "d": this option disables double checking, i.e. trying to connect to both the main domain and the "www" subdomain. If the domain in "urls.txt" already includes "www", only that one is contacted; otherwise both would be, and with this option, only the one given.
  • c|concurrency=X sets the number of threads the program runs with. 8 is recommended. By default, only one is used.
  • "u": every time the program runs, it contacts central servers to check for a new version. This option disables that check.
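The options above can be summarized with a hypothetical re-creation of the flag parsing. The real EmetRules is a .NET command-line program; this Python sketch only mirrors the interface described, it is not the tool's actual code:

```python
import argparse

# Hypothetical mirror of EmetRules' command-line options, for illustration.
parser = argparse.ArgumentParser(prog="emetrules")
parser.add_argument("urls", help="file with one domain per line")
parser.add_argument("output", help="XML rule file to generate for EMET")
parser.add_argument("-t", "--timeout", type=int, default=0,
                    help="request timeout in ms (0 = no timeout)")
parser.add_argument("-s", action="store_true", help="silent mode")
parser.add_argument("-e", action="store_true", help="write error.txt")
parser.add_argument("-d", action="store_true",
                    help="disable the www double check")
parser.add_argument("-c", "--concurrency", type=int, default=1,
                    help="number of threads (8 recommended)")
parser.add_argument("-u", action="store_true", help="skip the update check")

args = parser.parse_args(["urls.txt", "output.xml", "-t", "750", "-c", "8"])
print(args.timeout, args.concurrency)  # -> 750 8
```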

The tool is intended mainly for admins or power users who use Internet Explorer and want to receive an alert when a connection to a domain is suspected of being "altered". The pinning system in EMET is far from perfect, and even the warning displayed is very timid (it still allows the user to proceed to the suspect site), but we think it is the first step towards what will surely be an improved feature in the future.

We encourage you to use it.

December 12th: Innovation Day at Eleven Paths

Thursday, November 28, 2013

On December 12th, 2013, in Madrid, Eleven Paths will make its public debut in an event we have named Innovation Day. In this event Eleven Paths will introduce old and new services, besides some surprises. Registration is necessary to attend, from this web page.

Eleven Paths started working inside Telefónica Digital six months ago. After quite a lot of hard work, it is time to show part of the effort we have been through during this time. Besides Eleven Paths, Telefónica de España and the security vertical of Telefónica Digital will present their products and services as well at this Innovation Day.

We will talk about Telefónica CyberSecurity services, Faast, the MetaShield Protector product family, Saqqara, anti-APT services... and, finally, about a project that has remained secret so far, dubbed "Path 2" internally. From December 12th on, this technology will be revealed step by step. For Eleven Paths, it has been a real challenge to deploy it during this period. But right now, it is a reality: it is already integrated in several sites and patented worldwide.

Clients, security professionals and systems administrators... they are all invited. The event will take place on Thursday, December 12th, during the afternoon (from 16:00), in the central Auditorio building of the Distrito Telefónica campus in Madrid. Besides announcing all this exciting technology, we will enjoy live music concerts. Finally, there will be a great party, thanks to all the security partners of Telefónica.

Registration is limited, so a pre-registration form is available. Once filled in, a confirmation email will be sent (if it is still possible to attend).

The "cryptographic race" between Microsoft and Google

Thursday, November 21, 2013

Google and Microsoft are taking bold steps forward to improve the security of cryptography in general and TLS / SSL in particular, raising standards in protocols and certificates. In a scenario as reactive as the security world, these movements are surprising. Of course, these are not altruistic gestures (they improve their image in the eyes of potential customers, among other things). But in practice, are these movements useful?

Google: what have they done

Google announced months ago that it was going to improve the security of certificates by using 2048-bit RSA keys as a minimum. They have finished earlier than expected. They want to remove 1024-bit certificates from the industry before 2014 and create all new ones with 2048-bit keys from now on. Something quite optimistic, keeping in mind that 1024-bit keys are still widely used. Beginning in 2014, Chrome will warn users when certificates don't meet these requisites. Raising the key length in certificates to 2048 bits means that trying to break the cipher by brute force becomes even less practical with current technology.

Besides, related to this effort towards encrypted communications, Google has encrypted traffic for logged-in users since October 2011. Last September, it started to do so for every single search. Google is also trying to establish "certificate pinning" and HSTS to stop man-in-the-middle certificates when browsing the web. If that wasn't enough, its certificate transparency project goes on.

It seems Google is particularly worried about its users' security and, specifically (although it may sound funny to many of us), about their privacy. In fact, they assert that "the deprecation of 1024-bit RSA is an industry-wide effort that we’re happy to support, particularly in light of concerns about overbroad government surveillance and other forms of unwanted intrusion."

Microsoft: what have they done

In the latest Microsoft update, important measures to improve cryptography in Windows were announced. In the first place, it will no longer support RC4, very weak by now (it was created in 1987) and responsible for quite a lot of attacks. Microsoft is introducing tools to disable it in all their systems and wants to eradicate it soon from every single program. In fact, in Windows 8.1 with Internet Explorer 11, the default TLS version is raised to TLS 1.2 (which is able to use AES-GCM instead of RC4). Besides, this protocol also usually uses SHA2.

Another change in certificates is that Microsoft will no longer allow hashing with SHA1 for certificates used in SSL or code signing. SHA1 is an algorithm that produces a 160-bit output and is used when generating RSA certificates to hash the certificate. This hash is then signed by the Certificate Authority, expressing its trust that way. It has been a while since NIST encouraged everyone to stop using SHA1, but few cared about that claim. It looks like quite a proactive move for Microsoft, which had got us used to exasperatingly reactive behavior.
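The digest sizes involved are easy to check directly. The standard library exposes both algorithm families; sha256 (the usual "sha256RSA" in certificate signatures) doubles SHA1's output length:

```python
import hashlib

# SHA-1 produces a 160-bit (20-byte) digest; the SHA-2 family ranges
# from 224 to 512 bits of output, with SHA-256 (32 bytes) being the
# usual choice in certificate signatures.
sha1 = hashlib.sha1(b"example").digest()
sha256 = hashlib.sha256(b"example").digest()
print(len(sha1) * 8, len(sha256) * 8)  # -> 160 256
```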

Why all this? Is this useful?

Microsoft and Google are determined to improve cryptography in general and TLS/SSL in particular. With the measures adopted between the two of them, the security of the way traffic is encrypted is substantially raised.

A 2048-bit certificate using SHA2
Certificates that identify public keys calculated with 512-bit RSA keys were broken in practice in 2011. In 2010, a 768-bit number (232 digits) was factored with a general-purpose algorithm in a distributed way, the largest known factorization of this kind. So, in practice, using a 1024-bit number is "safe", although it could be discussed whether it represents a threat in the near future. Google is playing it safe.
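The bit-length-to-digit-count relation is simple arithmetic, which makes the jump from 768 to 1024 and 2048 bits easy to visualize:

```python
def digits_of_bits(bits: int) -> int:
    """Decimal digits of the largest integer with the given bit length."""
    return len(str((1 << bits) - 1))

print(digits_of_bits(768))   # -> 232 (the RSA-768 challenge modulus had 232 digits)
print(digits_of_bits(1024))  # -> 309
print(digits_of_bits(2048))  # -> 617
```

Each extra bit multiplies the brute-force search space by two, so the gap between 768 and 2048 bits is astronomically larger than the digit counts suggest.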

But there are other problems to focus on. Using stronger certificates in SSL is not the main obstacle for users. In fact, introducing new warnings (Chrome will warn about 1024-bit certificates) may just make users even more confused: "What does using 1024 bits mean? Is it safe or not? Is this the right place? What decision should I take?". Too many warnings just relax security ("which is the right warning, when I am warned about both safe and unsafe sites?"). The problem with SSL is that it's socially broken and not understood... not from the technical standpoint, but from the users'. Users will be happy that their browser of choice uses stronger cryptography (so the NSA can't spy on them...), but it will be useless if, confused, they accept an invalid certificate when browsing, unaware that they are letting in a man-in-the-middle.

If we adopt the theory that the NSA is able to break into communications because it already has the technology to brute-force 1024-bit certificates, this is very useful. There would be a problem if it wasn't necessary to break or brute-force anything at all, because the companies were already cooperating to give the NSA plain-text traffic... We could dismiss the idea that the NSA already has advanced systems ready to break 2048-bit keys, and that is why they "allow" its standardization... couldn't we? We just have to look back a few years to remember some conspiracy tales like these in the world of SSL.
Self-signed certificate created in Windows 8, using
MD5 and 1024 bits.

The case of Microsoft is funny, too. Obviously, this movement in certificates is motivated by TheFlame. Using MD5 with RSA played a bad trick, allowing the attackers to sign code in Microsoft's name. It can't happen again. This puts Microsoft ahead in deprecating SHA1 for certificates, because the industry will follow. But if RC4 is really broken, SHA1's health is not that bad. We have just started getting rid of MD5 in some certificates, and Microsoft is already claiming the death of SHA1. This leaves us just with the possibility of using SHA2 (normally sha256RSA or sha256withRSAEncryption in certificates, although SHA2 allows outputs from 224 to 512 bits). It's the right moment, because XP is dying, and it didn't even support SHA2 natively (only from Service Pack 3). There is still a lot of work to be done, because SHA1 is very widespread (Windows 7 signs most of its binaries with SHA1; Windows 8, with SHA2); that is why the deadline is 2016 for signing certificates and 2017 for SSL certificates. How Certification Authorities will react... is still unknown.

On the other hand, regarding the use of mandatory TLS 1.2 (related in a way, because it's the protocol supporting SHA2), we have to be aware of the recent attacks against SSL to know what it's really trying to solve. Very briefly:
  • BEAST, in 2011. The problem was based on CBC. It was really solved in TLS 1.1 and 1.2, but both sides (server and browser) have to support these versions.
  • CRIME: this attack allows cookies to be retrieved if TLS compression is used. Disabling TLS compression solves the problem.
  • BREACH: also allows cookies to be retrieved, but is based on HTTP compression, not TLS, so it cannot be "disabled" from the browser. One is vulnerable whatever TLS version is used.
  • Lucky 13: solved mainly in software and in TLS 1.2.
  • TIME: a CRIME evolution. It doesn't require the attacker to be in the middle, just JavaScript. It's a problem in browsers, not in TLS itself.
A still very common certificate, using
SHA1withRSAEncryption and a 1024-bit key
We are not aware of these attacks being used in the wild by attackers. Imposing TLS 1.2 without RC4 is a necessary move, but still risky. Internet Explorer (up to version 10) supports TLS 1.2, but it is disabled by default (only Safari enables it by default; the others have just started to implement it). Version 11 enables it by default. Servers have to support TLS 1.2 too, and we don't know how they will react.
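What "enabling TLS 1.2 by default" means on the client side can be sketched with the standard library: a client can refuse to negotiate anything older. This is a generic illustration, not tied to any particular browser:

```python
import ssl

# A client-side context that refuses to negotiate below TLS 1.2,
# the behavior Internet Explorer 11 ships by default.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # -> True
```

A server that only speaks older protocol versions will simply fail the handshake against such a context, which is why server-side support matters as much as the browser default.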

To summarize, it looks like these measures will bring technical security (at least in the long term). Even if there are self-interests to satisfy (avoiding problems they have already had) and an image to improve (leading the "cryptographic race"), any enhancement is welcome, and this "war" to lead cryptography (which fundamentally means being more proactive than your competitors) will raise the bar.

Sergio de los Santos

Fokirtor, a "sophisticated"? malware for Linux

Monday, November 18, 2013

Symantec has just released some details about how a new piece of malware for Linux works. It is relevant for its relative sophistication. It was discovered in June as a fundamental part of a targeted attack against a hosting provider, but it's only now that technical details about how it works have been disclosed. Although sophisticated for a Linux environment, technically it's not so remarkable if we compare it with malware for Windows.

In May 2013, an important hosting provider was attacked. The attackers knew exactly what they were doing and what errors to avoid. They wanted financial data and user passwords (oddly enough, these were stored encrypted, but it cannot be ruled out that the master key was compromised...). This happens every day, but the difference is the method used: Fokirtor, which is the name Symantec has given to the trojan used as the attack tool.

It was quite an important company, and the attackers needed to evade its security systems, so they tried to go unnoticed by injecting the trojan into a server process such as the SSH daemon. In this way, they disguised their presence both physically (no new processes were needed) and in the traffic (which would be merged with the traffic generated by the SSH service itself). This is a "standard" method in Windows malware, where regular trojans usually inject themselves into the browser and hide their traffic inside HTTP.

Of course, the malware needed connectivity with the outside world to receive commands. In the Windows world, malware usually connects outbound periodically (to elude inbound firewalls) towards a C&C server via HTTP. In the case of Fokirtor, it hooked functions and waited for commands injected into the SSH process, preceded by the characters ":!;." (without quotes). This would indicate that the attacker wanted to perform some action. The method isn't new. Usually, when legitimate software is trojanized in the Linux world, a reaction to a certain pattern is embedded in its code, and the result is published so that future victims download it. What isn't so usual is to do it "on the fly", injecting into a running process. Although the news doesn't make it clear, we understand the attacker had to obtain root privileges on the compromised machine.

The attacker just had to connect via SSH to the servers and send the magic sequence to take over the machine. Received commands were encoded in base64 and encrypted with Blowfish (designed by Bruce Schneier in 1993). This traffic wasn't logged.
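The command framing described above can be illustrated with a short sketch. This is not Fokirtor's actual code: the Blowfish decryption layer is omitted (it is not in the Python standard library), leaving only the magic-marker detection and base64 decoding:

```python
import base64

MAGIC = b":!;."  # the sequence Fokirtor watched for in hooked SSH input

def extract_command(stream):
    """Illustrative only: find the magic marker in traffic and decode the
    base64 payload that follows it. The real trojan additionally decrypted
    the payload with Blowfish, which is omitted here."""
    idx = stream.find(MAGIC)
    if idx == -1:
        return None  # no command hidden in this traffic
    payload = stream[idx + len(MAGIC):].strip()
    return base64.b64decode(payload)

# An attacker's command blended into otherwise normal-looking SSH traffic.
wire = b"normal ssh noise " + MAGIC + base64.b64encode(b"uname -a")
print(extract_command(wire))  # -> b'uname -a'
```

Because the marker arrives inside an already-established SSH session, nothing about it appears as a separate connection or process, which is the whole point of the technique.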


In absolute terms, it is technically below "standard" Windows malware, and light years behind professional malware used as a "cyberweapon" (TheFlame, Stuxnet, etc.). Nevertheless, it does represent an interesting milestone that doesn't happen often: specific malware for Linux servers that actively seeks to go unnoticed.

To recall similar news, we have to go back a year. A user sent an email to the security list "Full Disclosure", stating that he had found his Debian servers infected with what seemed to be a "rootkit working with nginx". It was an administrator who had realized that visitors to his website were being redirected to infected sites. Certain kinds of requests to that web server returned an iframe injected into the page, leading to a site that tried to infect Windows users. The administrator discovered some hidden processes and kernel modules responsible for the problem, and attached them to the email so they could be studied. After they were analyzed, we didn't hear much more about that rootkit.

Some questions without answers

Something that catches the eye but doesn't seem to have an explanation is that Symantec detected this malware in June, under the same name, but hasn't offered technical details about the way it works until now. What happened during these five months? Probably they have been studying it in cooperation with the affected company. Unless they have run into administrative or legal problems, technically it's not necessary to spend so much time analyzing malware like this. And what happened before June? The attack was detected in May, but nothing is said about how long the company had been infected. It would be interesting to know how successful its hiding strategy was during a real infection period. Being a hosting provider, have the webpages of its customers been compromised?

They say nothing about the trojan being able to replicate itself, or about detecting it on any other system. Possibly it was a targeted attack against a specific company, and the attackers didn't add this functionality to their tool: just what was strictly necessary to accomplish their task.

Although we instinctively associate the malware world with Windows systems, when attackers have a clear target, whatever the operating system, there are no barriers. Or the barriers are even weaker. Don't forget that malware, technically speaking, is just a program "like any other", and only the will to program it separates it from becoming a reality on any specific platform.

Sergio de los Santos