On 20/08/10 05:55, Aryeh Gregor wrote:
> On Thu, Aug 19, 2010 at 2:37 AM, Tim Starling <tstarling@wikimedia.org> wrote:
>> The number of WHIRLPOOL iterations is specified in the output string as a base-2 logarithm (whimsically padded out to 3 decimal digits to allow for future universe-sized computers). This number can be upgraded by taking the hash part of the output and applying more rounds to it. A count of 2^7 = 128 gives a time of 55ms on my laptop, and 12ms on one of our servers, so a reasonable default is probably 2^6 or 2^7.
> That seems reasonable. It could probably be done a lot faster on GPUs, I guess.
Well, a GPU is fast because it is massively parallel, with hundreds of cores. Each core is typically slower than a CPU core. I chose a function which is non-parallelisable, so you'd expect computation of a single hash to be slower on a GPU than on a CPU. But a GPU can calculate hundreds of them at a time.
My idle fantasy of choosing a parallelisable function and then using GPUs to accelerate password hashing ended when I found out how much it would cost to fit out the Wikimedia cluster with half a dozen Tesla cards. I don't think the powers that be would be particularly interested in spending that kind of money for a tiny improvement in security.
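To make the scheme in the quoted paragraph a bit more concrete, here is a rough PHP sketch of the iterate-and-upgrade logic. The function names and the exact output format are made up for illustration; this is not the actual patch.

// Sketch only: details such as salt handling and the output format
// differ in the real code.
function whirlpoolIterate( $password, $salt, $logRounds ) {
    // Initial application binds the salt and password together
    $hash = hash( 'whirlpool', $salt . $password, true );
    // 2^$logRounds applications of WHIRLPOOL in total
    for ( $i = 1; $i < ( 1 << $logRounds ); $i++ ) {
        $hash = hash( 'whirlpool', $hash, true );
    }
    // The iteration count is stored as a base-2 logarithm,
    // padded to 3 decimal digits
    return sprintf( ':whirlpool:%03d:%s:%s',
        $logRounds, $salt, base64_encode( $hash ) );
}

// Upgrading the count in place: keep applying the hash function to the
// stored binary hash. Going from 2^6 to 2^7 rounds is just another 2^6
// applications, and the cleartext password is not needed at all.
function whirlpoolUpgrade( $binaryHash, $oldLogRounds, $newLogRounds ) {
    for ( $i = 1 << $oldLogRounds; $i < ( 1 << $newLogRounds ); $i++ ) {
        $binaryHash = hash( 'whirlpool', $binaryHash, true );
    }
    return $binaryHash;
}

The point of the second function is that it never sees the password, which is what makes upgrading stored hashes on the fly possible.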
[...]
> Another thing to consider is whether we could pick a function that's particularly inconvenient to execute on GPUs. Those are a great way for crackers to easily outdo any CPU implementation.
I think that would be a more useful direction than pursuing provably secure hash functions. The relevant Wikipedia article suggests using a "memory-bound function", which is sensitive to memory access time and gets faster when more memory is available. Personally, I think it would be interesting to attempt to construct a function which is limited by branch prediction errors. They are said to be particularly expensive for GPUs. They also get progressively more expensive on more recent CPUs, which means that people with old hardware would have access to more secure hashing than they would otherwise.
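To illustrate the branch-prediction idea (and only to illustrate it: this is a toy written in PHP, where interpreter overhead swamps everything, so a real construction would need native code and proper analysis), the structure would be something like:

// Toy only: the branch taken on each round depends on a bit of the
// evolving state, so the hardware cannot predict it reliably.
function branchyStretch( $password, $salt, $rounds ) {
    $state = hash( 'whirlpool', $salt . $password, true );
    for ( $i = 0; $i < $rounds; $i++ ) {
        if ( ord( $state[0] ) & 1 ) {
            // One mixing step...
            $state = hash( 'whirlpool', $state . $salt, true );
        } else {
            // ...or a different one, chosen by the data itself
            $state = hash( 'whirlpool', strrev( $state ), true );
        }
    }
    return $state;
}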
--
Tim Starling