very weak cipher for *.userstorage.mega.co.nz subdomains #103
In general you might be right. But if I'm not mistaken, this URL only serves already-encrypted chunks, so from a confidentiality perspective the communication could just as well be conducted over an unencrypted, plain socket or HTTP connection. However, that would break the web site using HTTPS. So I presume this was a strategic move: reduce the security parameters for the HTTPS (TLS) negotiation to the lowest possible, to cut the encryption overhead and the cost of the TLS handshake. I believe that while I was still working for Mega, RC4 may still have been permissible (and was used for those hosts). I wouldn't go there usually, but for the purpose of carrying pre-encrypted chunks that's fully sufficient.
I understand that, and I respect the technical compromises made for speed. As for the overhead: if it's about bandwidth, it's the same for all ciphers; and if it's about CPU usage for user and server, I think that was reviewed about 10 years ago and found not to be an issue (hardware acceleration mostly fixes it), because these ciphers matter for the handshake, while the actual data transfer uses a symmetric cipher that, again, performs mostly the same across all TLS 1.2 suites. Now, if Mega decided to support TLS 1.3 on top of TLS 1.2, many users would at least be able to use it securely. Scanning these subdomains and Mega's main domains with SSL Labs shows they get a low score, because they either support the deprecated TLS 1.0 and TLS 1.1 or (as in this issue) offer very weak ciphers. Even Microsoft and Firefox have removed TLS 1.0, and that was after postponing it for compatibility with some official COVID health websites. https://www.ssllabs.com/ssltest/analyze.html?d=gfs214n109.userstorage.mega.co.nz Maybe the whole website can be upgraded to the recommended TLS practice here: I would have suggested the Modern compatibility profile, but maybe some people in the world don't have access to a Firefox released in the last year (big if), so Intermediate compatibility (recommended) is accessible everywhere.
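For reference, the spirit of the Intermediate profile mentioned above can be sketched with Python's standard `ssl` module. This is only an illustrative sketch, not Mega's configuration: it pins TLS 1.2 as the floor and restricts the TLS 1.2 suites to forward-secret AEAD ciphers (TLS 1.3 suites are managed separately by OpenSSL and stay enabled):

```python
import ssl

# Sketch of a server-side TLS context roughly in the spirit of Mozilla's
# "Intermediate" guidance: TLS 1.2 minimum, and only ECDHE (forward-secret)
# AEAD suites for TLS 1.2. TLS 1.3 suites are enabled independently.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

# Show what the context would actually offer.
for cipher in ctx.get_ciphers():
    print(cipher["protocol"], cipher["name"])
```

With such a context, a legacy suite like TLS_RSA_WITH_AES_128_CBC_SHA is simply never offered, so the negotiation question disappears.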
I don't work there any more, so I'm predominantly interested in this discussion from a cryptography/security point of view.

Looking at the cipher suite used, I believe you're right that it is a relatively cheap "throw away" to use FS negotiation schemes, but only if they use elliptic curve crypto for the Diffie-Hellman key agreement, as EC keys require far less entropy and computational overhead to generate. As this is pretty much a one-off for establishing a connection, it won't have a detrimental impact on ongoing throughput. Even though a public key algorithm is involved (which is 4-5 orders of magnitude slower than symmetric ciphers), it's relatively painless and quick.

AES-NI significantly improves encryption performance in hardware. Still, there is a significant impact on resources (especially if the server's I/O is fully saturated by that kind of activity, as those servers would be), so tuning down the symmetric load can have a significant impact on throughput. Particularly as it's not the block cipher (AES) alone: it also involves continuous computation of the authentication tag. Moving here from CBC-SHA to GCM would probably be beneficial, but would still be impactful on the overall processing effort. In another case (AES payload encryption in Parquet files using Hadoop on Java 9), the overhead of AES-GCM (stream encryption in Galois/counter mode with authentication) vs. AES-CTR alone (stream encryption in counter mode, no authentication) was a factor of 3 to 4.5.

So, this is all purely theoretical reasoning. I can see and appreciate both sides of the argument, and any decisions or advice on what should be done with Mega's servers are well out of my realm of influence.
Having said all that, I believe that due to the nature of how Mega handles data encryption and transfers, there is no gain or loss of security possible here: all data is pre-encrypted (with completely different keys), so FS or not, weak cipher suites or not, the data won't be at risk. The big difference (IMHO) is what the browsers report and what the customers see, mainly due to the browser vendors (gladly) tightening their cryptography requirements (which is highly beneficial for the "average" web site, but not so much an issue when dealing with Mega).
Great answer, even if much of the technical terminology flew over my head. Maybe in the near future Mega will move to that on the web (if their servers allow it), and move the more performant options (less load, faster speed) to the desktop and mobile apps, which can implement whatever library they want: maybe pure HTTP, or even a non-standard, faster protocol that combines transfer and encryption in one and doesn't have to layer HTTPS on top.
It's a really good thing that the browsers are tightening their minimum requirements for cipher suites. And I'm pretty sure that Mega will "move with them". I saw it happen way back then with the removal of the RC4 config from the servers, and I'm pretty sure that (eventually) this will happen here, too. After all, if the browsers choke, a good part of the value proposition goes. Lastly, I believe that the non-browser clients are already using unencrypted connections to the servers to enhance transfer capabilities. At least that's how things were back then, and I'd be surprised if that has changed.
Hi, as @pohutukawa mentioned, there's no security gain in enforcing longer (i.e. more secure) RSA keys for the TLS used in HTTPS communication. Generally, this is an important aspect of keeping the communication between client and server secure: the longer the keys used, the harder it is (or the closer to practically impossible with currently available hardware and CPUs) to break the encryption (i.e. to find or derive the private key). With the MEGA service, HTTPS-encrypted communication is not necessary to maintain the security of the communication between client and server, since the payload (the data) is already encrypted with symmetric encryption (AES), with the client being the only party that holds the key for it. Therefore, we favour the lighter RSA key length for TLS in the web client, to reduce the load on the server and on the client during communication.
@khaleddaifallah-mega Thanks for weighing in here from the Mega side. As for the keys, I'd still suggest ditching RSA keys (for TLS certificates or key negotiation) altogether and adopting ECDHE (elliptic curve Diffie-Hellman ephemeral). This should still speed up the process (key negotiation only happens once at the beginning of a session, and EC keys are usually a bit faster than clunky RSA), as well as allow the browser or SSL Labs to report forward secrecy (which is technically a non-issue here, but practically shows the user that things are more "straight"). As for the stream encryption: go for the lightest symmetric cipher suite possible that still keeps the modern browsers happy.
@pohutukawa On the first part: absolutely. The hardware that users and Mega's servers run on (if it's not older than 15 years) doesn't lose that much performance from moving to a modern/intermediate profile for the handshake, because, as @khaleddaifallah-mega said, the main part is the data stream, and that (as all TLS in browsers does) uses symmetric encryption. The connection start is asymmetric, so it's slow, but short in terms of time and resources spent; the data transfer is symmetric, so it's fast and almost always hardware-accelerated (the only one I know of that isn't is the relatively new ChaCha20-Poly1305 in TLS 1.3). On the second part: no. My suggestion is to drop everything pre-TLS 1.2 plus the unsafe TLS 1.2 suites, and basically use the https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 profile. So basically go from TLS_RSA_WITH_AES_128_CBC_SHA (which uses a static RSA key exchange with no forward secrecy and the older CBC-SHA construction) to at least ECDHE-ECDSA-AES128-GCM-SHA256.
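The distinction drawn above (static-RSA key exchange vs. ephemeral ECDHE) can be read straight off the suite name. A minimal sketch, with a hypothetical helper name `is_forward_secret`, covering both IANA-style and OpenSSL-style names:

```python
def is_forward_secret(suite: str) -> bool:
    """Rough forward-secrecy check based on the cipher-suite name alone.

    TLS 1.3 suites (TLS_AES_*, TLS_CHACHA20_*) always use an ephemeral
    key exchange; for TLS 1.2 and earlier, forward secrecy requires an
    ephemeral Diffie-Hellman exchange (DHE or ECDHE) in the name.
    """
    if suite.startswith(("TLS_AES_", "TLS_CHACHA20_")):
        return True
    return "ECDHE" in suite or "DHE" in suite


# The two suites discussed in this thread:
print(is_forward_secret("TLS_RSA_WITH_AES_128_CBC_SHA"))   # False
print(is_forward_secret("ECDHE-ECDSA-AES128-GCM-SHA256"))  # True
```

A name-based check like this is a heuristic, but it matches how SSL Labs and the browsers classify the suites at issue here.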
Since Tor Browser 12 has already disabled some cipher suites, we cannot use Mega unless those TLS cipher suites are upgraded. And Python has also disabled some cipher suites since version 3.10.
NIST Retires SHA-1 Cryptographic Algorithm, SHA-1 is dead. |
It looks like SHA-256 suites have just been enabled, so we can use MEGA on Tor Browser 12. But it still doesn't satisfy Python's requirements, because ciphers without forward secrecy are disabled there (see python/cpython#88164 for detail).
I still don't get why Mega uses non-FS ciphers. Please consider adopting a secure, modern TLS configuration instead of the current one.
Somehow I missed that. You mean that with Python libraries that rely on TLS, you currently can't download from Mega? Wouldn't that be a good reason to move from non-FS to FS ciphers? Python is not exactly a small use case. Or maybe Mega has its own TLS Python libraries that users have to use? I understand Mega's argument that the stream is already encrypted before it is put on HTTP, but I don't think using at least an FS cipher would change the server load that much. A comparison between the two in terms of server load/cost would be great.
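For what it's worth, the Python side of this can be checked locally without touching Mega's servers: a stock client context simply no longer offers the non-FS suites, so a server that only speaks TLS_RSA_WITH_AES_128_CBC_SHA shares no suite with it and the handshake fails. A quick way to see what a default client would offer:

```python
import ssl

# List the TLS 1.2 suites a default Python client context offers,
# and flag any that lack an ephemeral (EC)DHE key exchange.
ctx = ssl.create_default_context()
tls12 = [c["name"] for c in ctx.get_ciphers() if c["protocol"] == "TLSv1.2"]
non_fs = [name for name in tls12 if "DHE" not in name]

print(f"TLS 1.2 suites offered: {len(tls12)}")
print(f"Suites without forward secrecy: {non_fs}")  # expected [] on Python >= 3.10
```

On Python 3.10 and later the `non_fs` list should come out empty, which is exactly why such clients can't negotiate with a static-RSA-only endpoint.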
I see that the web client on the mega.nz website uses only TLS_RSA_WITH_AES_128_CBC_SHA.
This is one of the very weak ciphers, which is very bad for a security-oriented website.
Also, when I use a Firefox add-on to download files in Firefox, I get:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at "https://gfs262n351.userstorage.mega.co.nz/dl/xxxxxxxxxxxxxxxx. (Reason: CORS request did not succeed). Status code: (null)."