December 12th: Innovation Day at Eleven Paths

Thursday, November 28, 2013

On December 12th, 2013, in Madrid, Eleven Paths will make its public debut at an event we have named Innovation Day. At this event Eleven Paths will introduce both existing and new services, plus some surprises. Registration is required to attend, from this website.

Eleven Paths started working inside Telefónica Digital six months ago. After quite a lot of hard work, it is time to show part of the effort we have put in during this time. Besides Eleven Paths, Telefónica de España and the Vertical de Seguridad de Telefónica Digital will also present their products and services at this Innovation Day.


We will talk about Telefónica CyberSecurity services, Faast, the MetaShield Protector family of products, Saqqara, anti-APT services... and, finally, about a project that has remained secret so far, dubbed "Path 2" internally. From December 12th onwards, this technology will be revealed step by step. For Eleven Paths, deploying it in this period has been a real challenge, but right now it is a reality: it is already integrated into several sites and patented worldwide.

Clients, security professionals and systems administrators... everyone is invited. The event will take place on Thursday, December 12th, in the afternoon (from 16:00) in the Auditorio of the central building of the Distrito Telefónica campus in Madrid. Besides announcing all this exciting technology, we will enjoy live music concerts. Finally, there will be a great party, thanks to all of Telefónica's security partners.

Places are limited, so a pre-registration form is available. Once it is filled in, a confirmation email will be sent (if it is still possible to attend).

The "cryptographic race" between Microsoft and Google

Thursday, November 21, 2013

Google and Microsoft are taking bold steps to improve the security of cryptography in general and TLS/SSL in particular, raising the bar in protocols and certificates. In a field as reactive as the security world, these moves are surprising. Of course, they are not altruistic gestures (they improve the companies' image in the eyes of potential customers, among other things). But in practice, are these moves useful?

Google: what they have done

Google announced months ago that it was going to improve the security of its certificates by using RSA keys of at least 2048 bits. They have finished earlier than they expected. They want to remove 1024-bit certificates from the industry before 2014 and create all of them with a 2048-bit key length from now on. That is quite optimistic, keeping in mind that 1024-bit certificates are still widely used. At the beginning of 2014, Chrome will warn users when certificates do not meet these requirements. Raising the key length in certificates to 2048 bits means that trying to break the encryption by brute force becomes even less practical with current technology.
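As a rough idea of what such a check involves, here is a minimal sketch (assuming Python 3 and the third-party cryptography package; the host name is only an example) that fetches a site's certificate and reports its RSA key size:

import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def rsa_key_bits(host, port=443):
    """Return the RSA key size (in bits) of the certificate served by host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der_cert)
    key = cert.public_key()
    return key.key_size if isinstance(key, rsa.RSAPublicKey) else None

print(rsa_key_bits("www.google.com"))  # e.g. 2048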

Besides, as part of this effort towards encrypted communications, Google has encrypted traffic for logged-in users since October 2011. Last September, it started to do so for every single search. Google is also trying to establish "certificate pinning" and HSTS to stop interposed (man-in-the-middle) certificates when browsing the web. If that weren't enough, its Certificate Transparency project goes on.
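To get a feel for what HSTS looks like on the wire, this small standard-library sketch asks a server for its Strict-Transport-Security header (the host is only an example):

import http.client

# Check whether a site sends the HSTS header discussed above.
conn = http.client.HTTPSConnection("accounts.google.com")
conn.request("HEAD", "/")
response = conn.getresponse()
print(response.getheader("Strict-Transport-Security"))
# Typically something like "max-age=31536000; includeSubDomains"
conn.close()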

It seems that Google is particularly worried about its users' security and, specifically (although it may sound funny to many of us), about their privacy. In fact, they assert that "the deprecation of 1024-bit RSA is an industry-wide effort that we're happy to support, particularly in light of concerns about overbroad government surveillance and other forms of unwanted intrusion".

Microsoft: what they have done

In its latest update, Microsoft announced important measures to improve cryptography in Windows. In the first place, it will no longer support RC4, which is very weak by now (it was created in 1987) and responsible for quite a few attacks. Microsoft is introducing tools to disable it in all its systems and wants to eradicate it soon from every single program. In fact, in Windows 8.1 with Internet Explorer 11, the default TLS version is raised to TLS 1.2 (which can use AES-GCM instead of RC4). Besides, this protocol usually uses SHA2 as well.
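Outside Windows, "disabling RC4" boils down to excluding its cipher suites. Here is a hedged sketch with Python's ssl module that rejects RC4 on the client side and prints what the server actually negotiates (the host is only an example):

import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:!RC4")          # exclude any RC4-based suite
with socket.create_connection(("www.microsoft.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.microsoft.com") as tls:
        print(tls.version())             # e.g. "TLSv1.2"
        print(tls.cipher())              # (cipher name, protocol, secret bits)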

Another change regarding certificates is that it will no longer allow SHA1 as the hash for certificates used in SSL or code signing. SHA1 is an algorithm that produces a 160-bit output and is used, when generating RSA certificates, to hash the certificate. This hash is then signed by the Certificate Authority, expressing its trust that way. It has been a while since NIST encouraged everyone to stop using SHA1, but few paid attention to that call. This looks like quite a proactive move for Microsoft, which had got us used to an exasperatingly reactive behavior.
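To check which hash a given certificate was signed with (the property Microsoft is deprecating SHA1 for), here is a short sketch assuming the third-party cryptography package and a local PEM file named cert.pem (a placeholder name):

from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
# Prints the hash used in the CA's signature, e.g. "sha1" or "sha256"
print(cert.signature_hash_algorithm.name)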

Why all this? Is this useful?

Microsoft and Google are determined to improve cryptography in general and TLS/SSL in particular. With the measures adopted between the two of them, the security of the way traffic is encrypted is substantially raised.

2048-bit certificate using SHA2 (SHA256).
Certificates that identify public keys based on 512-bit RSA keys were broken in practice in 2011. In 2010, a 768-bit number (232 digits) was factored with a general-purpose algorithm in a distributed way, the largest such factorization known. So, in practice, using a 1024-bit number is still "safe", although it could be discussed whether it will represent a threat in the near future. Google is playing it safe.
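To put those sizes in perspective, the decimal length of an n-bit number is roughly n * log10(2), as this quick check shows:

import math

# Decimal digits of an n-bit number: about n * log10(2).
for bits in (512, 768, 1024, 2048):
    digits = math.ceil(bits * math.log10(2))
    print(bits, "bits -> about", digits, "decimal digits")
# 512 -> 155, 768 -> 232, 1024 -> 309, 2048 -> 617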

But there are other problems to focus on. Using stronger certificates in SSL is not the main obstacle for users. In fact, introducing new warnings (Chrome will warn about 1024-bit certificates) may just leave users even more confused: "What does using 1024 bits mean? Is it safe or not? Is this the right site? What decision should I take?". Too many warnings just relax security ("which warning matters when I am warned about both safe and unsafe sites?"). The problem with SSL is that it is socially broken and not understood... not from a technical standpoint, but from the users'. Users will be happy that their browser of choice uses stronger cryptography (so the NSA can't spy on them...), but it will be useless if, confused, they accept an invalid certificate while browsing, unaware that they are letting in a man-in-the-middle.

If we accept the theory that the NSA is able to break into communications because it already has the technology to brute-force 1024-bit certificates, this is very useful. There would be a problem if it weren't necessary to break or brute-force anything at all, because the companies were already cooperating to hand the NSA plaintext traffic... And we can surely dismiss the idea that the NSA already has its own advanced systems ready to break 2048-bit keys, and that this is why it "allows" their standardization... can't we? We just have to look back a few years to recall some conspiracy tales like these in the world of SSL.
Self-signed certificate created in Windows 8, using MD5 and 1024 bits.

The case of Microsoft is funny, too. Obviously, this move on certificates is motivated by TheFlame. Using MD5 with RSA played a bad trick on them, allowing the attackers to sign code in Microsoft's name. It can't happen again. This puts Microsoft ahead in deprecating SHA1 for certificates, and the industry will follow. But while RC4 is truly broken, SHA1's health is not that bad. We have only just started getting rid of MD5 in some certificates, and Microsoft is already announcing the death of SHA1. This leaves us with just the possibility of using SHA2 (normally sha256RSA or sha256withRSAEncryption in certificates, although SHA2 allows outputs from 224 to 512 bits). It's the right moment, because XP is dying, and it didn't even support SHA2 natively (only from Service Pack 3 on). There is still a lot of work to be done, because SHA1 is very widespread (Windows 7 signs most of its binaries with SHA1; Windows 8, with SHA2), which is why the deadline is 2016 for code signing certificates and 2017 for SSL certificates. How Certification Authorities will react... is still unknown.
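For reference, the output sizes of the SHA-2 family mentioned above can be listed with the standard library:

import hashlib

# SHA-2 outputs range from 224 to 512 bits, as referred to above.
for name in ("sha224", "sha256", "sha384", "sha512"):
    print(name, hashlib.new(name).digest_size * 8, "bits")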

On the other hand, regarding the use of mandatory TLS 1.2 (related in a way, because it's the protocol that supports SHA2), we have to be aware of the recent attacks against SSL to understand what it's really trying to solve. Very briefly:
  • BEAST, in 2011: the problem was based on CBC mode in TLS 1.0 (switching to RC4 was the usual workaround). It was properly solved in TLS 1.1 and 1.2, but both sides (server and browser) have to support those versions.
  • CRIME: this attack allows cookies to be retrieved if TLS compression is used. Disabling TLS compression solves the problem.
  • BREACH: also allows cookies to be retrieved, but it relies on HTTP compression, not TLS compression, so it cannot be "disabled" from the browser. One is vulnerable whatever TLS version is being used.
  • Lucky 13: solved mainly in software and in TLS 1.2.
  • TIME: an evolution of CRIME. It doesn't require an attacker in the middle, just JavaScript. It's a problem in browsers, not in TLS itself.
A still very common certificate, using SHA1withRSAEncryption and a 1024-bit key.
We are not aware of these attacks being used in the wild by attackers. Imposing TLS 1.2 without RC4 is a necessary move, but still a risky one. Internet Explorer (up to version 10) supports TLS 1.2, but it is disabled by default (only Safari enables it by default, and the other browsers have only just started to implement it). Version 11 will enable it by default. Servers have to support TLS 1.2 too, and we don't know how they will react.
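A rough way to find out whether a given server already negotiates TLS 1.2 is to pin the client to one version per attempt and see whether the handshake succeeds. A minimal sketch with Python's ssl module (the host is only an example, and a modern OpenSSL build may itself refuse the older versions):

import socket
import ssl

def negotiates(host, version, port=443):
    """Return True if the server completes a handshake at exactly this version."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we only care about the handshake
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    print(v.name, negotiates("www.example.com", v))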

To summarize, it looks like these measures will bring technical security (at least in the long term). Even if there are self-interests to satisfy (avoiding problems they have already had) and an image to improve (leading the "cryptographic race"), any enhancement is welcome, and this "war" to lead in cryptography (which fundamentally means being more proactive than your competitors) will raise the bar.

Sergio de los Santos
ssantos@11paths.com

Fokirtor, a sophisticated? malware for Linux

Monday, November 18, 2013

Symantec has just released some details about how a new piece of malware for Linux works. It is relevant for its relative sophistication. It was discovered in June as a fundamental part of a targeted attack against a hosting provider, but only now have they disclosed technical details about how it works. Although sophisticated for a Linux environment, technically it is not that remarkable if we compare it with malware for Windows.


In May 2013, an important hosting provider was attacked. The attackers knew exactly what they were doing and which errors to avoid. They wanted financial data and user passwords (these were, at least, stored encrypted, but the company cannot rule out that the master key was compromised...). This happens every day, but the difference is the method used: Fokirtor, which is the name Symantec has given the trojan used as the attack tool.


It was quite an important company, and the attackers needed to evade its security systems, so they tried to go unnoticed by injecting the trojan into some of the servers' processes, such as the SSH daemon. In this way, they disguised their presence in the system (no new processes were needed) and in the traffic (which would blend in with the traffic generated by the SSH service itself). This is a "standard" method in Windows malware, where regular trojans usually inject themselves into the browser and hide their traffic inside HTTP.

Of course, the malware needed connectivity with the outside world to receive commands. In the Windows world, malware usually connects outbound periodically (to elude inbound firewall rules) towards a C&C server via HTTP. In the case of Fokirtor, what it did was hook functions and wait for commands injected into the SSH process, preceded by the characters ":!;." (without the quotes). This would indicate that the attacker wanted some action performed. This method isn't new. Usually, when some legitimate piece of software is trojanized in the Linux world, code that reacts to a certain pattern is embedded in it, and then it is published so that future victims download it. What isn't so usual is to do it "on the fly" by injecting it into a running process. Although the news doesn't make it clear, we understand that the attacker had to obtain root privileges on the compromised machine.


The attacker just had to connect to the servers via SSH and send the magic sequence to take over the machine. Received commands were encoded in base64 and encrypted with Blowfish (designed by Bruce Schneier in 1993). This traffic wasn't logged.
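As a purely illustrative sketch (not Fokirtor's actual code), this is roughly what a "magic prefix plus base64 plus Blowfish" command channel looks like. The key, cipher mode and padding are assumptions made up for the example, and the Blowfish primitive comes from the third-party pycryptodome package:

import base64

from Crypto.Cipher import Blowfish  # pycryptodome

MAGIC = b":!;."
KEY = b"example-blowfish-key"        # hypothetical key, 4 to 56 bytes

def extract_command(data):
    """Return the decoded command if data carries the magic prefix, else None."""
    if not data.startswith(MAGIC):
        return None                                 # ordinary traffic: ignore it
    payload = base64.b64decode(data[len(MAGIC):])
    cipher = Blowfish.new(KEY, Blowfish.MODE_ECB)   # mode is an assumption
    return cipher.decrypt(payload).rstrip(b"\x00")  # strip assumed zero padding

# Round-trip demo: build a fake command the same way and decode it.
cmd = b"uname -a" + b"\x00" * (8 - len(b"uname -a") % 8)
fake = MAGIC + base64.b64encode(Blowfish.new(KEY, Blowfish.MODE_ECB).encrypt(cmd))
print(extract_command(fake))         # b'uname -a'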

Sophisticated?

In absolute terms, it is technically below "standard" Windows malware, and light years behind professional malware used as a "cyberweapon" (TheFlame, Stuxnet, etc.). Nevertheless, it does represent an interesting milestone that doesn't happen often: finding malware written specifically for Linux servers that actively seeks to go unnoticed.

To recall similar news, we have to go back a year. A user sent an email to the "Full Disclosure" security list, stating that he had found his Debian servers infected with what seemed to be a "rootkit working with nginx". He was an administrator who had realized that visitors to his website were being redirected to infected sites. Certain kinds of requests to that web server returned an iframe injected into the page, which led to a site that tried to infect Windows users. The administrator discovered some hidden processes and kernel modules responsible for the problem, and attached them to the email so they could be studied. After they were analyzed, we didn't hear much more about that rootkit.

Some questions without answers


Something that catches the eye but doesn't seem to have an explanation is that Symantec detected this malware in June, under the same name, but has not offered technical details about the way it works until now. What happened during these five months? They have probably been studying it in cooperation with the affected company. Unless they ran into administrative or legal problems, it is not technically necessary to spend that much time analyzing a piece of malware like this. And what happened before June? The attack was detected in May, but nothing is said about how long the company had been infected. It would be interesting to know how successful its hiding strategy was during a real infection period. Since the victim is a hosting provider, have its customers' webpages been compromised?

They say nothing about the trojan being able to replicate itself, or about it being detected on any other system. Possibly it was a targeted attack against a specific company, and the attackers didn't add this functionality to their tool: just what was strictly necessary to accomplish their task.

Although we instinctively associate the malware world with Windows systems, when attackers have a clear target, whatever the operating system, there are no barriers. Or they are even weaker. Do not forget that malware, technically speaking, is just a program "like any other", and only the will to write it separates it from becoming a reality on a specific platform.

Sergio de los Santos
ssantos@11paths.com