Abstract: |
The output of cryptographic functions, be it encryption routines or hash functions, should be statistically indistinguishable from truly random data for an external observer. This property can be partially tested automatically using batteries of statistical tests. In practice, however, this is not easy: multiple incompatible test suites exist, with possibly overlapping and correlated tests, making a statistically robust interpretation of the results difficult. Additionally, a significant amount of data processing is required to test each cryptographic function separately. Due to these obstacles, no large-scale systematic analysis of round-reduced cryptographic functions with respect to their input-mixing capability, which would provide insight into the behaviour of whole classes of functions rather than a few selected ones, has yet been published. We created a framework to consistently run 414 statistical tests and their variants from the commonly used statistical testing batteries (NIST STS, Dieharder, TestU01, and BoolTest). Using a distributed computational cluster providing the required significant processing power, we analyzed the output of 109 round-reduced cryptographic functions (hash, lightweight, and block-based encryption functions) in multiple configurations, scrutinizing the mixing property of each one. As a result, we established the fraction of a function’s rounds with a still-detectable bias (a.k.a. the security margin) when analyzed by statistical randomness tests. |