- Properly done encryption is still safe.
- They go after the weak points instead, like passwords, bugs in software, and so on.
- Some software might contain backdoors.
I don't think this should really surprise anybody.
There is also speculation that the NSA might have better algorithms for reducing the complexity of public key cryptography, or that RC4 might be broken. Those things are clearly possible, but there is no public evidence either way. Nobody has suggested that AES has any problems, and I think it's still safe to use.
One of the questions then becomes: what does properly done encryption mean? There are various factors to this, and many applications use various protocols. SSL/TLS is the most widely used, so I'm going to concentrate on the parts used in that.
Bit sizes
As far as I understand it, 80 bits of security is around the current practical upper limit of a brute-force attack. In 2003 NIST said to stop using 80-bit security by 2015; in 2005 they said to stop using it by 2010.
When using something like SSL/TLS you use various algorithms, each with its own key size, so it's important to know which is the weakest part.
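As a rough illustration of seeing which pieces you actually got, here is a sketch using Python's ssl module (the hostname is just a placeholder) that prints the negotiated protocol version and cipher suite:

```python
import socket
import ssl

host = "www.example.org"  # placeholder; replace with the server you want to inspect

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Negotiated protocol version, e.g. "TLSv1.2"
        print(tls.version())
        # Negotiated cipher suite as a (name, protocol, secret bits) tuple
        print(tls.cipher())
```

That only shows the symmetric side of the connection; the certificate and key exchange sizes still have to be checked separately.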
Public key encryption (RSA, DSA, DH) doesn't offer the same amount of security per bit as symmetric keys. The equivalent sizes I found are:
Symmetric (bits) | Public (bits)
---------------- | -------------
80               | 1024
112              | 2048
128              | 3072
256              | 15360
There are other sources that give other numbers, but they're similar.
So this basically means you should stop using 80-bit symmetric keys and 1024-bit public keys: your public key should be at least 2048 bits, and you should maybe even consider going to 3072 bits.
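As a small sketch of that recommendation, assuming the third-party Python cryptography package is available (a recent version, where no backend argument is needed), generating a key of the suggested size looks like this:

```python
from cryptography.hazmat.primitives.asymmetric import rsa

# 3072 bits roughly matches 128-bit symmetric security in the table above;
# 2048 bits is the minimum you'd want today.
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
print(key.key_size)
```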
There seems to be a push towards using Diffie-Hellman (DH) to provide Perfect Forward Secrecy (PFS). There are many variants of this, but since we want to use ephemeral (temporary) keys it basically boils down to 2 variants:
- Standard (finite field) ephemeral DH (DHE/EDH)
- Elliptic curve ephemeral DH (ECDHE)
With DHE you first negotiate a temporary DH key for the session, (hopefully) using a random number on both sides, and then use that DH key to exchange a normal symmetric key for the session. This temporary DH key is what provides the PFS, assuming it is thrown away afterwards. The drawback is that setting up a session takes a lot more CPU time.
Standard DH has the same security per bit as other public key encryption, so we really also want to stop using 1024-bit keys for that.
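Here is a rough sketch of such an ephemeral exchange, again assuming the Python cryptography package; it is not what a TLS stack literally does, but it shows both sides ending up with the same shared secret from throw-away keys:

```python
from cryptography.hazmat.primitives.asymmetric import dh

# Generating DH parameters is slow; in TLS the server normally ships fixed
# parameters, but for this sketch we just generate 2048-bit ones.
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Each side creates a fresh (ephemeral) key pair for this session only.
server_priv = parameters.generate_private_key()
client_priv = parameters.generate_private_key()

# Both sides derive the same shared secret from their own private key and
# the peer's public key; throwing the private keys away afterwards is what
# gives forward secrecy.
server_secret = server_priv.exchange(client_priv.public_key())
client_secret = client_priv.exchange(server_priv.public_key())
assert server_secret == client_secret
```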
With ECDHE the key size only needs to be double the symmetric size. It is also faster than DHE, so people want to move to it. There are various curves that can be used for this; some of them were generated by the NSA and people have no trust in them, and some people don't trust any of the curves at all. Most people seem to think that ECDHE is the way forward, but then limit the curves they support.
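The elliptic curve variant looks much the same in that package. P-256 is used here purely as an example; it is one of the NIST curves that the trust debate is about, and a 256-bit curve roughly matches 128-bit symmetric security:

```python
from cryptography.hazmat.primitives.asymmetric import ec

curve = ec.SECP256R1()  # example curve only; pick whichever curves you trust

server_priv = ec.generate_private_key(curve)
client_priv = ec.generate_private_key(curve)

server_secret = server_priv.exchange(ec.ECDH(), client_priv.public_key())
client_secret = client_priv.exchange(ec.ECDH(), server_priv.public_key())
assert server_secret == client_secret
```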
Hashes are also used for various things, including signatures and MACs. Depending on how a hash is used, different properties matter. The 2 important properties are collision resistance and preimage resistance.
For a collision attack the upper limit is a little more than half the output size (the birthday bound), but there are known attacks that give worse results. I found these numbers:
Hash  | Best collision attack
----- | ---------------------
MD5   | 2^20
SHA-1 | 2^60
SHA-2 | No known attack (so SHA-256 would have about 1.2 * 2^128)
For preimage attacks I found:
Hash    | Best preimage attack
------- | --------------------
MD5     | 2^123
SHA-1   | No known attack (2^160)
SHA-256 | No known attack (2^256)
So I think MD5 is still safe where a collision attack isn't possible, but it should really be avoided for everything else, and SHA-1 should probably be treated the same. So you want to avoid them in things like certificate signatures.
SSL/TLS also uses MD5 / SHA-1, but as part of an HMAC. I understand that HMACs are not vulnerable to the collision attacks, and since preimage resistance is what matters there, they are still considered safe.
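A small sketch with Python's standard hashlib and hmac modules (the key and message are just placeholders) shows the difference between a bare hash and an HMAC tag:

```python
import hashlib
import hmac

key = b"shared secret key"       # placeholder key for the sketch
message = b"some record payload"

# A bare hash: anyone can compute this, and for MD5/SHA-1 collisions can be found.
digest = hashlib.sha1(message).hexdigest()

# An HMAC mixes the secret key into the hash; forging a tag needs more than a
# collision attack, which is why HMAC-MD5/HMAC-SHA-1 are still considered
# usable even though the bare hashes are not.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(digest, tag)
```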
Randomness
A lot of the algorithms depend on good random numbers, meaning that an attacker can't guess which (supposedly) random number you've selected. There have been many cases where a bad RNG resulted in things getting broken. It's hard to tell from the output of most random number generators whether they are secure or not.
One important thing is that the RNG gets seeded with random information (entropy) to begin with. If it gets no random input, only a very limited number of possible inputs, or input that is guessable, it can appear to give random numbers, but they end up being predictable. There have been many cases where this was broken.
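As a sketch of the difference in Python (the secrets module only exists in newer Python 3 versions):

```python
import os
import random
import secrets

# random.Random is not meant for keys: if an attacker can guess the seed,
# they can reproduce every "random" value it ever produces.
predictable = random.Random(1234)
print(predictable.getrandbits(128))

# For keys and nonces, use the OS CSPRNG, which is seeded with entropy
# collected by the kernel.
print(os.urandom(16).hex())
print(secrets.token_hex(16))
```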
Ciphers
There are various encryption algorithms, but since the public key algorithms take a lot of time, we usually want to limit how much we do with them and do the bulk of the work with symmetric key encryption. There are basically 2 types of symmetric ciphers: stream ciphers and block ciphers.
For block ciphers there are various modes for combining the blocks. The most popular was CBC, but in combination with TLS 1.0 it was vulnerable to the BEAST attack, for which there is a workaround. Another mode is GCM, but it's not yet widely deployed. This resulted in the stream cipher RC4 suddenly getting popular again.
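A minimal AES-GCM sketch with the Python cryptography package; the key, nonce and data are placeholders, and the important detail is that GCM both encrypts and authenticates:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)           # GCM nonces must never repeat for the same key
plaintext = b"attack at dawn"
associated = b"record header"    # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated)
assert aesgcm.decrypt(nonce, ciphertext, associated) == plaintext
```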
Some ciphers have known weaknesses. For instance, triple DES has a 3*56 = 168 bit key, but because of the meet-in-the-middle attack it only provides about 2*56 = 112 bits of security, and other attacks reduce this even further.
The algorithms themselves might be secure but still be attacked through other mechanisms known as side-channel attacks, the most important of which are timing attacks. If the implementation has branches, each branch might take a different amount of time, which can let an attacker learn information about the secret key being used. It's important that such differences are reduced as much as possible. This is usually not a problem when it's implemented in hardware.
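A tiny Python sketch of the comparison problem; compare_digest is the standard library's constant-time comparison and is what you'd want when checking a MAC:

```python
import hmac

def insecure_equal(a: bytes, b: bytes) -> bool:
    # == may stop as soon as the first differing byte is found, so the time
    # taken leaks how many leading bytes matched.
    return a == b

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # compare_digest takes (close to) the same time no matter where the
    # inputs differ.
    return hmac.compare_digest(a, b)
```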
SSL/TLS Session
A typical SSL/TLS session uses these things:
- A certificate chain. Each of those certificates contains:
  - A public key, which needs to be big enough.
  - A signature, which is usually a combination of a hash and a public key algorithm. It is signed by the next CA in the chain, or is self-signed in the case of a root CA. The hash algorithm's collision resistance is important here, but not for a self-signed certificate like a root CA, since you either trust that certificate or you don't.
- A key exchange method so that you can share a secret key for the session. This might involve a separate temporary key in the case of (EC)DHE, which then also needs to be big enough.
- A cipher
- A MAC
- You need to check that the certificates are still valid (not revoked). This can be done using a CRL or via OCSP, but nobody seems to be downloading the CRLs and not all CAs offer OCSP.
- You need to check that the hostname in the certificate you get actually matches the hostname you're trying to connect to (see the sketch after this list).
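A sketch of those last checks using Python's ssl module (the hostname is a placeholder); create_default_context() does the chain and hostname checks for you, although it does not do CRL or OCSP checking by default:

```python
import socket
import ssl

host = "www.example.org"  # placeholder host

# create_default_context() loads the system's trusted root CAs and turns on
# both chain verification and hostname checking.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED, context.check_hostname)

with socket.create_connection((host, 443)) as sock:
    # The handshake fails here if the chain doesn't verify or the hostname
    # doesn't match the certificate.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print(cert["subject"], cert["notAfter"])
```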
The client sends its list of supported ciphers to the server and the server then picks one. This is usually the first cipher from the client's list that the server supports, but the server can also be configured to use its own preference order instead.
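As a sketch of restricting what a client offers, Python's ssl module accepts an OpenSSL-style cipher string (the exact string here is only an example, and the syntax comes from OpenSSL, so `openssl ciphers -v '<string>'` shows what it expands to):

```python
import ssl

context = ssl.create_default_context()

# Prefer ECDHE with AES-GCM, then other forward-secret suites, and
# explicitly drop RC4, 3DES, MD5 and unauthenticated suites.
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES:DHE+AESGCM:DHE+AES:!RC4:!3DES:!aNULL:!MD5")

# On a server-side context you could additionally set
# ssl.OP_CIPHER_SERVER_PREFERENCE so the server's order wins instead of the client's.
```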
Current status in software
If you go looking at all of those things, you'll find problems in most if not all software. Some of that is to stay compatible with old clients or servers, but most of it is not.
If you want to test web servers for various of those properties, you can do that at ssllabs. Based on that, there are statistics available at SSL-pulse.
Apache only supports 1024-bit DHE keys, and they don't want to apply a patch to support other bit sizes. Apache 2.2 doesn't offer ECDHE, but 2.4 does.
I don't know of any good site that lists which browser supports which ciphers, but you can test your own browser here. It lacks information about which EC curves your client offers. You can of course also look at this with wireshark.
Mozilla currently has a proposal to change its list of supported ciphers here. They don't have TLS 1.1 or 1.2 enabled yet, but as far as I understand the NSS library should support them. They should now have support for GCM, but they still have an open bug for a constant-time patch. In general Mozilla seems to be moving very slowly in adopting these things.
Last I heard, Apple still hadn't patched Safari for the BEAST attack.
For XMPP see client to server, server to server, clients.
For TOR see here.