
Some DNS providers suck and only let you set 1024-bit keys. For example, wordpress.com.


1024 is still many orders of magnitude harder to crack than 512. For the record, the largest RSA number broken so far is RSA-250 (829 bits), and it took 2700 core-years to crack back in 2020[1]. In comparison, RSA-155 (512 bits) was factorized as early as 1999!

You aren't in danger.

[1]: https://sympa.inria.fr/sympa/arc/cado-nfs/2020-02/msg00001.h...


There are several open source GNFS tools that can do 1024 very efficiently on GPUs, and even cheap consumer GPUs have tens of thousands of cores now; even by your measure, "2700 core-years" is only around a month or so on a single consumer-grade GPU.

Not "free", but any malicious actor has access to a lot more than a single GPU.

The UK government also has several huge arm based solutions dedicated to cracking internet encryption, zero chance that isn't breaking mostly everything, for sure the Chinese and Russians have similar.


> The UK government also has several huge arm based solutions dedicated to cracking internet encryption, zero chance that isn't breaking mostly everything, for sure the Chinese and Russians have similar.

So you seriously think that almost all current RSA is being decrypted in real time by at least UK, China and Russia (and I would assume US)? Do you have any source or reference for this at all?


Yet nobody is collecting the RSA-numbers bounties?

RSA-270 (much, much easier than 1024 compute-wise) has a bounty of $75k, so why would it be unclaimed when you can spend three years' worth of cloud-rented H100 (I'm being conservative here and counting $3/h, which is far from the best deal you can get) and still make a profit?

Also, GPU cores and CPU cores really aren't comparable individually, so your "cheap consumer GPUs have tens of thousands of cores" point is comparing apples to oranges.


Had a bounty. The RSA challenge unexpectedly ended in 2007; the task of speculating about what happened in 2007 is left to the reader.


Oh, I wasn't aware of the end of the challenge. But 1024 was definitely not broken by then, at least not by brute force.


None of it is "brute force". GNFS is a process that rapidly excludes numbers from the search space that cannot be the answer, in principle similar to the way they broke Enigma.

numberphile has a great video on that one https://www.youtube.com/watch?v=V4V2bpZlqx8

Also, taking the OP as a "worst case", afaik:

512bit = $8

so

1024 = 8^2 = $64

2048 = 8^2^2 = $4,096

4096 = 8^2^2^2 = $16,777,216

noting $8 for 512 seems very expensive to me.


That's not correct. Consider, for example, a processor that can handle 2^31 computations per second. 2^32 operations can be computed in 2 time units, whereas 2^64 operations will take 2^33 time units.

search_space(n: number_of_bits) = 2^n * k

so search_space(1024)/search_space(512)=2^512, not 2^2.

Asymptotics in GNFS are better[0], but only on the order of e^(cbrt(512 * 64/9)) times more work, not 2^2.

This would give an approximation of math.exp(math.cbrt(512 * 64/9))*$8 = $40 million for 1024 bits.

[0] https://en.wikipedia.org/wiki/General_number_field_sieve
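Here's that estimate as a quick Python sketch (same rough formula as above; the $8 anchor and the growth factor are this thread's back-of-envelope figures, not a rigorous GNFS cost model):

```python
import math

def extra_work_factor(bits):
    # e^(cbrt(bits * 64/9)): the rough "times more work" factor used
    # above for going from `bits`-bit keys to 2*`bits`-bit keys
    return math.exp((bits * 64 / 9) ** (1 / 3))

cost_512 = 8  # dollars; the thread's assumed price for 512 bits
estimate_1024 = cost_512 * extra_work_factor(512)
print(f"~${estimate_1024:,.0f}")  # roughly $40 million, as stated above
```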


Pretty sure the search cost of GNFS is (bits)^2; the search cost of brute force is 2^(bits). If GNFS were 2^(bits), it would be no better than brute force.

->but only on the order of e^(cbrt(512 * 64/9))

e^(log(n)) = n


Come on! Not only is your math ridiculous (you can't just square amounts of money; consider how it would work if you changed currency: $8 is ¥1260, which squared makes ¥1,587,600, i.e. about $10k != $64), but believing that RSA-2048 is factorizable for $4k is hilarious.

> none of it is "brute force"

It's not exhaustive search like it would be for symmetric encryption, but it's still somewhat brute-force (especially since RSA keys are inflated in size compared to symmetric keys to accommodate their structural weaknesses); put more clearly, what I meant was "not without a theoretical breakthrough unknown to the public".

BTW, it's not a very good idea to lecture people with actual crypto knowledge (even though mine is quite rusty now, for I have not done any serious stuff for 15 years) when your own comes from ill-understood YouTube popularization videos.


> (you can't just square amounts of money; consider how it would work if you changed currency: $8 is ¥1260, which squared makes ¥1,587,600, i.e. about $10k != $64)

What can you square then? For example, can you square lengths? E.g. 1km is 1000m, what is its square?


That is because "square length" is its own unit, which we call area. Square money is not meaningful as a unit, that is the problem. You can square anything you want but it turns it into a different unit, which the original commenter did not do (they presumed squaring dollars still gives you dollars back).


It's a cost function. In that case:

cost = 2.828^(2*(bits/512))

I didn't "square the cost"; I doubled the number of bits to find the cost, I just skipped a lot of the math.


1km² aka 1,000,000m², note how the resulting units aren't km and m but square kilometers and square meters.

> What can you square then?

In that case it's the number of operations (which is unitless) that must be squared and then multiplied by the cost of each operation. For instance (figures are completely made up for illustration purpose) if one individual operation costs 0.1 cent, and you have 8000 ops for the factorization, it costs $8, and the operation number squared means you have 64,000,000 operations, and the total cost is $64k. In practice we're talking about trillions of very cheap operations but when you square that number you get an insanely big number which even multiplied by a small individual cost ends up costing trillions of dollars, putting it out of reach of factorization from anyone.


the search space is p1 x p2 = cost, therefore

(p1^2) x (p2^2) = (p1 x p2)^2 = cost^2

and GNFS search cost increases by (roughly) the square of the number of bits.


I'll keep trying to explain it to you: you cannot take the square of a price.

If my Yuan exchange rate example didn't convince you, let's have a few thoughts experiments:

- let's say you can do some amount of work for less than $1 (maybe even factoring a 512-bit number); call that amount of work X, and say you do it for $0.90. Do you think you can do X² work for price², which is $0.81? Yes, much more work for less than the price of doing X; isn't that magical?

- a hard drive with 1TB of storage costs $40. Do you think you can get 1 yottabyte (10^12 bytes squared is 10^24) of storage for just $1600?

There's a reason to all these paradoxes, it simply makes no sense to take the square of a sum of money, because you can't pay it in square dollars.


every time you double the number of bits, you increase the search space by the square of what came before

2^16 = 65536

2^32 = 4294967296

4294967296/65536 = 65536

so if a search space of 65536 costs you $8, then a search space of 4294967296 = 65536 x 65536 = 8 x 8 = 8^2 = $64

2^64 = 1.844674e+19

1.844674e+19/4294967296 = 4294967296

so a search space of 1.844674e+19 = 4294967296 x 4294967296 = 65536 x 65536 x 65536 x 65536 = 8 x 8 x 8 x 8 = 8^2^2 = 8^4 = $4096

where here $8 is the cost of finding (or not) 1 number in a haystack of 2^512 numbers, and the rest is identical.


> so if a search space of 65536 costs you $8, then a search space of 4294967296 = 65536 x 65536 = 8 x 8 = 8^2 = 64

So close, yet so far: the correct answer here is "65536 x 8 = $524k", not "8 x 8 = $64". If $8 worth of hard drive can store 65536, then to store 4294967296 you need 65536 of such drives, not 8…

Man this is really embarrassing.


That's linear search.

In GNFS the search space is number fields, so you increase the number of number fields by the square of what came before.


Right, so if it costs $8 to search the space for a 16-bit key, a proper equation for the cost of cracking an n-bit key is

Cost(n) = search_space(n) * $8 / search_space(16)

And search_space(x) = 2^x so

Cost(n) = 2^n * $2^3 / 2^16 = $2^(n - 13)

Cost(32) = $2^(32 - 13) = $524288

Cost(64) = $2^(64 - 13) = $2251799813685248

So it quickly becomes astronomically expensive.

If you double the number of bits n you get

Cost(2n) = $2^(2n - 13)

You were assuming that Cost(32) = Cost(16)^2, in other words Cost(2n) = Cost(n)^2. But this equality doesn't hold:

Cost(n)^2 = $2^(n - 13)^2 = $2^(2n - 26)

This is significantly smaller than Cost(2n) = $2^(2n - 13) as stated above.

An equation with different units on each side like "512 bit = $8" doesn't work mathematically and will lead to contradictory conclusions.
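The argument above can be sketched in a few lines of Python (using the thread's $8-for-a-2^16-search-space anchor; this is the brute-force scaling model, which deliberately ignores GNFS's better asymptotics):

```python
def search_space(bits):
    # brute-force search space: 2^bits candidates
    return 2 ** bits

def cost(bits, ref_bits=16, ref_cost=8):
    # scale the known $8 price by the ratio of search spaces
    return ref_cost * search_space(bits) // search_space(ref_bits)

print(cost(32))  # 524288
print(cost(64))  # 2251799813685248
print(cost(32) ** 2 == cost(64))  # False: Cost(2n) != Cost(n)^2
```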


Firstly, the number of prime numbers < n grows by roughly n/log(n)

So moving from 2^16 = 65536

to

2^32 = 4294967296

Increases the size of the total potential search space from 5909 to 193635251, which is ~ 5909 x 32769

secondly, the reason it grows by only n^2, is you only need to search along the curve n = a x b - which is the "sieve" part.

if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64

Thirdly, your stupidly high costs are because you seem to think you need to check whether 4817 x 5693 is a prime factorisation of 27423181, when in fact you already know the prime factorisation by that point.
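For what it's worth, the prime-counting figures above can be checked against the prime number theorem, pi(n) ≈ n/ln(n) (an asymptotic approximation, so the numbers are rough):

```python
import math

def prime_count_estimate(n):
    # prime number theorem: pi(n) ~ n / ln(n)
    return n / math.log(n)

est_16 = prime_count_estimate(2 ** 16)  # ~5909
est_32 = prime_count_estimate(2 ** 32)  # ~1.94e8
print(round(est_16), round(est_32 / est_16))  # 5909 32768
```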


> if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64

Can you answer

    a) If 1 apple costs you 8$ then (1)^2 apple costs you: ???
    b) If 10 apples cost you 8$ then (10)^2 apples costs you: ???
edit: between price and amount there is usually a ~linear relationship. So if you can buy 2^512 of something for $8, then chances are that for 8 times the price you'll only get ~8 times the amount, not 2^512 times more


The ^2's are big-O notation:

https://en.m.wikipedia.org/wiki/Big_O_notation

The tl;dr is that big O gives you how the cost of apples changes with the number of apples in the worst case.

It's a rough simplification; the precise formula is close but not exactly that. The simplification is also actually (2n)^2, but in my defense I was going from memory of work from more than two decades ago (testing whether generated prime factors were good prime factors; overwhelmingly they were not).

using your apples example if the bigO of eating apples is O(n^2), and it takes you 8 minutes to eat 2 apples, it will take you no more than 64 minutes to eat 4 apples.


Okay I hadn't read about how many calculations it takes to cover the search space, so my equations aren't right about that.

However, you will do yourself a big favor if you take the time to understand why this is wrong:

> if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64

The cost per calculation is some constant C:

Cost(n calcs) = n calcs * C

Therefore,

Cost(n^2 calcs) = n^2 calcs * C

In this example, C = $8 / 2^512 = $2^-509

So Cost((2^512)^2) = Cost(2^1024) = 2^1024 * $2^-509 = $2^515


I wouldn't argue with a claim of $2515 tbh.

The $8 will vary, and the actual cost function completely depends on the implementation; it's definitely possible to do worse, and very likely possible to do better. There were rumors a few years ago that some Riemann-surface-based math can do it in O(1), but I know nothing about Riemann surfaces so can't judge their veracity.


> if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64

“The earth is flat blah blah blah”


opposing side of an argument resorting to a strawman is their implicit admission they were wrong.


We're trying to help you understand, and you're stubbornly refusing to swallow your pride and think about what we're saying


Do you even read?


Did you? You linked https://en.wikipedia.org/wiki/General_number_field_sieve already. That gives the worst-case complexity right at the top.

> 2^16

[1] 65536

> n <- 2^32

> exp(((64/9)^(1/3)+1) * (log(n)^(1/3)) * (log(log(n))^(2/3)))

[1] 38178499

> 38178499/65536

[1] 582

2^16 -> 2^32 ~ x 2^9

> n <- 2^64

> exp(((64/9)^(1/3)+1) * (log(n)^(1/3)) * (log(log(n))^(2/3)))

[1] 84794674511

> 84794674511/38178499

[1] 2221

2^32 -> 2^64 ~ x 2^11

> n <- 2^128

> exp(((64/9)^(1/3)+1) * (log(n)^(1/3)) * (log(log(n))^(2/3)))

[1] 2.507616e+15

> 2.507616e+15/84794674511

[1] 29572

2^64 -> 2^128 ~ x 2^15

So in GNFS 2^32 -> 2^64 complexity doesn't increase 4294967296 times; worst case it increases 2221 times.

_In practice_ the cost grows closer to (nbits)^2, mostly because as an "embarrassingly parallel" algorithm you get significant benefits from cached results (quickly discarding entire number fields that were previously calculated).

if O(x) = 8 then O(x)^2 = 8^2

I did miss a x2 earlier, because e.g. 128 bits is 128^2 (16384, rather than 29572) times harder than 64 bits, not 64^2.

So it's (2 x O(x))^2 = (2 x 8)^2 = $256


You still cannot take the square of a dollar amount… What don't you understand?

You're trying to seek refuge in math you barely understand to escape contradiction, but it's not even a problem with GNFS: the fundamental problem is that you're trying to do something you mathematically cannot do, which is squaring sums of money. It's equivalent to a division by zero in a proof; it just nullifies all the reasoning around it.

And I've given plenty of illustrations of why you cannot do that, which you should definitely read instead of obsessing over proving you're right.


function ops(n) { return Math.exp(((64/9) ** (1/3) + 1) * (Math.log(n) ** (1/3)) * (Math.log(Math.log(n)) ** (2/3))) }

> ops(2**16)

121106.42245436447

> ops(2**32)

38178499.24944067

> ops(2**32) / ops(2**16)

315.24751929508244

So if ops(2**16) costs $8, then ops(2**32) costs $8 * ops(2**32) / ops(2**16) = $2521.98. Far more than $8^2.

The cost reaches the millions for 64 bits, and ~$165 billion for 128 bits:

> 8 * ops(2**64) / ops(2**16)

5601332.962726709 (far more than $8^2^2)

> 8 * ops(2**128) / ops(2**16)

165647073370.16437 (far more than $8^2^2^2)

Note that this is increasing faster than the number of bits squared:

> ops(2**32) / 32**2

37283.690673281904

> ops(2**64) / 64**2

20701824.831895076

> ops(2**128) / 128**2

153052707259.34015

As the wiki page says, it's super-polynomial in the number of bits.

If you still disagree with all of this, can you explain what's wrong with this method of calculating the worst-case cost of factoring the number n?

Cost(n) = ops(n) * Cost(2^16) / ops(2^16)

Or what you don't understand about this way of calculating it?


->The cost reaches the millions for 64 bits, and ~$165 billion for 128 bits:

meanwhile 512 bits costs $8

But you just keep believing 128 bits costs $165 billion ROFL.

>> ops(2**16)

>121106.42245436447

>> ops(2**32)

>38178499.24944067

>> ops(2**32) / ops(2**16)

>315.24751929508244

So if ops(2**16) costs $8, then ops(2**32) costs $8 * ops(2**32) / ops(2**16) = $2521.98. Far more than $8^2.

And I said $256, because as an "embarrassingly parallel" algorithm you get significant benefits from cached results (quickly discard entire number fields that were previously calculated).

Which, btw, is how they break 512bit DH in less than a minute.

Also still a lot closer than your >$165 billion

sigh


I was going by the example cost of $8 for 16 bits you stated in an earlier comment:

"2^16 = 65536

...

so if a search space of 65536 costs you $8"

If you think the numbers I'm arriving at are wrong then can you specify exactly where my cost function goes wrong?


Their answer:

> So if ops(2**16) costs $8, then ops(2**32) costs $8 * ops(2**32) / ops(2**16) = $2521.98. Far more than $8^2.

> The cost reaches the millions for 64 bits, and ~$165 billion for 128 bits:

Your answer:

> meanwhile 512 bits costs $8

> But you just keep believing 128 bits costs $165 billion ROFL.

At this point the only conclusion that doesn't involve questioning your sanity is to conclude that you don't know anything about math and that you struggle even to read mathematical notation ("if <> then <>" being the most basic construct one can learn in math, and you still struggle with it!).


> if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64

This and the other stuff they've been writing suggest that we are observing a mind that used to know things but has suffered serious damage/degradation over time. That's always so sad.


The wiki article on GNFS says:

> The size of the input to the algorithm is log2 n or the number of bits in the binary representation of n. Any element of the order nc for a constant c is exponential in log n. The running time of the number field sieve is super-polynomial but sub-exponential in the size of the input.

"Super-polynomial" means that the running time grows faster than any polynomial, so by more than the square.

In any case, even if the algorithm were just polynomial, the argument about squaring costs doesn't work out.


I personally factored 512-bit numbers in 2007 for a lot less than $8, so tbh I'm going to say your overestimation of your knowledge of cryptology is far more hilarious than my paranoia about the potential truth of claims made by people claiming to be experts in cryptology.

Your claim that factoring a 256-bit number would cost fractions of a cent, rather than my claim of roughly $3, is also very easily verifiable.

Further I'll note you sound exactly like the kind of person insisting diffie hillman was a good key exchange mechanism prior to Snowdens disclosures. good luck with that.


I'm charitably sharing this with you: https://news.ycombinator.com/item?id=42645216 because someone actually interested in learning things asked the right question. May you sleep less ignorant tonight.

> Further I'll note you sound exactly like the kind of person insisting diffie hillman was a good key exchange mechanism prior to Snowdens disclosures. good luck with that.

Before or after Snowden, Diffie-Hellman (it's Martin Hellman with an "e") is a good key exchange mechanism! When using it with Z/pZ as the field it's not the most practical one nowadays, because you need big keys to get the desired security level (exactly the same problem as RSA), but if you use an elliptic curve instead you can use shorter keys again (and this is exactly what ECDH does: it literally means Elliptic Curve Diffie-Hellman! Diffie-Hellman went nowhere).


->Before or after Snowden, Diffie-Hellman (it's Martin Hellman with an “e”) is a good key exchange mechanism!

Meanwhile, ~10 years ago https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf

After a week-long precomputation for a specified 512-bit group, we can compute arbitrary discrete logs in that group in about a minute.

We find that 82% of vulnerable servers use a single 512-bit group, allowing us to compromise connections to 7% of Alexa Top Million HTTPS sites. In response, major browsers are being changed to reject short groups. We go on to consider Diffie-Hellman with 768- and 1024-bit groups. We estimate that even in the 1024-bit case, the computations are plausible given nation-state resources.


512 was known to be too weak for a looong time; why do you think it was the export-grade key size?

1024 being at risk against state-level adversaries isn't shocking to anyone, but there's a significant gap between this and costing $64, and this gap is much bigger than 10 years of Moore's Law (NSA had much more than 32*$64 of available compute ;).

You're making grandiose claims and there's nothing that holds in your reasoning, it really feels like I'm discussing physics with a flat-earther…


>There are several open source GNFS tools that can do 1024 very efficiently on GPUs, ...

Reference? Why has no one demonstrated this?


You and I aren't the ones in immediate danger. The service providers we rely on are. In discussions like these we have a "tragicomic" tendency to forget mankind's unstoppable progress. RSA-1024 offers 80 symmetric equivalent bits of security and we've been heading down this path for decades at an exponentially increasing pace.


Those service providers have had plenty of time to migrate to 2048 and most of them have already.

> a "tragicomic" tendency to forget mankind's unstoppable progress

When it comes to compute, it's no faster than Moore's Law, which means roughly one bit of symmetric encryption every two years.

> and we've been heading down this path for decades at an exponentially increasing pace.

Given that the encryption security is itself exponential in bit length, we are in fact heading down this path linearly! (A doubling in compute power means the ability to crack twice as hard cryptosystems, which means ones that have 1 bit more of security).

Keys must be extended over time, and they are, and have been for decades. A PoC of an attack on a security system broken since 1999 should be treated exactly like our amazement at how little computing power was available to the Apollo program: it's a cool bit of trivia that shows the growth of available computing power, but not a display of any kind of security issue.


Yikes. NIST wants to forbid even 2048-bit RSA by 2035, because it doesn't offer a good enough security level.


2048 achieves the same security level NIST requires from AEADs, doesn't it? What plausibly attacks it? Pushing people past 2048 seems counterproductive.


It should achieve the same level, yes. It's not exactly described what could attack it. Right now it seems that 2048-bits would be the last allowed step and they're not going to push people past 2048, they want to phase out RSA in general.


Counterproductive how and to what/whom? For the sake of keeping DNS TXT entries and e-mail headers compact? Would you stand by this statement also in the context of a certificate authority's root signing key, or an RSA host key for an ssh daemon?


There is no plausible threat to 2048, but you'd still rather people switch off RSA, either to curves (because they're more resilient) or to lattices (for post-quantum). Pushing people to higher RSA key sizes is a waste of effort.


RSA-1024 seems to be about 8 million times better than RSA-512, so cracking that would be $64 million in compute.

Not NSA-proof, but should be more than enough to keep spammers out, especially considering that DKIM is just one layer of protection.
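The "8 million times" ratio is consistent with the standard GNFS heuristic work estimate, L(n) = exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)); a rough Python check (asymptotic formula only, ignoring the o(1) term and constant factors):

```python
import math

def gnfs_work(bits):
    # heuristic GNFS complexity for a modulus of the given bit length
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

ratio = gnfs_work(1024) / gnfs_work(512)
print(f"{ratio:.1e}")  # on the order of 10^7, i.e. millions of times harder
```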


That's just the extra compute required. There is a large increase in required memory that needs to be quickly accessible in a particular way. One estimate I saw claimed that breaking 1024 bit RSA would take more than 2 billion dollars over a period of years.


512 extra bits of key only get you 23 bits of entropy?


Yep, the formula is a bit complicated; it comes from the general number field sieve (GNFS) algorithm. You can find equivalences online between symmetric-key algorithms and RSA, and 23 bits seems about right. I have also seen lists that give RSA-512 64 bits and RSA-1024 80 bits, just a 16-bit difference, but that looks a bit arbitrary to me. I think NIST doesn't even look at RSA-512 anymore, as it is definitely broken; it only starts at 1024.

A RSA key is the product of two primes, not any number, so you need a lot more bits to get equivalent security to, say, AES. That's also a reason for elliptic-curve cryptography, which needs far fewer bits than RSA for the same level of security.


> A RSA key is the product of two primes, not any number, so you need a lot more bits to get equivalent security

This explanation doesn't seem right to me. For 1024 bit numbers, about 0.14% are prime. So that difference only loses a handful of bits. There are more than 2^2000 usable RSA-2048 keys, and simply guessing and dividing would require more than 2^1000 guesses. Those few bits lost to the prime number restriction aren't why the level of security is so low.
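That 0.14% figure is just the prime number theorem: the density of primes near n is about 1/ln(n). A quick sketch:

```python
import math

# density of primes near 2^1024, by the prime number theorem
density = 1 / math.log(2 ** 1024)
print(f"{density:.2%}")  # ~0.14%

# so restricting to primes costs only ~log2(ln(2^1024)) bits
print(round(math.log2(math.log(2 ** 1024)), 1))  # ~9.5 bits
```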


DKIM records are just DNS TXT records. Do they have a limit on the size of TXT records? Or are they going out of their way to try to parse the TXT records you're adding that look like DKIM records, failing on them, and then refusing to add them?


RFC 1035 imposes a 255-character limit per string on TXT records.


Yes, so you use multiple strings (in a single record) if you need longer values:

"first 255 bytes" "second 255 bytes" "etc"

DNS clients combine the 255-byte strings back into a single string.
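A minimal sketch of that chunking in Python (the 255-byte limit is from RFC 1035; the helper name is made up for illustration):

```python
def to_txt_strings(value: str, limit: int = 255) -> list[str]:
    # split a long TXT value into RFC 1035-sized character-strings
    return [value[i:i + limit] for i in range(0, len(value), limit)]

# e.g. a long DKIM-style public key value (dummy key material)
record = to_txt_strings("v=DKIM1; k=rsa; p=" + "A" * 400)
print([len(s) for s in record])  # [255, 163]
```

A DKIM verifier then concatenates the strings back together before parsing the tag=value pairs.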


> DNS clients combine the 255-byte strings back into a single string.

No, DKIM clients and SPF clients do that. Generic DNS clients, however, are in theory free to ascribe any semantic meaning they like to the string separations.



