Do you have any idea what Linux uses now, and how it differs from Fortuna?

I must say (and I say this very much as a rabid Linux fanboy) that it bothered me a bit that, for a long time, Linux's random number generator did not appear to be developed to any particular published and reviewed specification; rather, it seemed to be a somewhat ad hoc affair. That may be an unfair mischaracterization, but it's how it comes across.

Does Linux's latest RNG design (which people say is finally good) conform to any mature, well-analyzed, and well-characterized published standard? I'm not asking whether it uses standardized ciphers in particular stages, but whether the RNG as a whole follows a standard.



There are good reasons not to use Fortuna-as-written nowadays, so I don't fault Linux for that. But it would be nice if someone familiar with the design published something like the Fortuna whitepaper, or Microsoft's Windows 10 RNG whitepaper[1].

(Why not use Fortuna as written? The main reason is that it is not designed to scale: it's a reasonable design for the single-core machines of the era in which it was written, but it does not worry about contention at all. You could also swap out AES for ChaCha as the PRF, because ChaCha is faster to seed, faster to generate on machines without AES-NI, and doesn't have some of the theoretical problems -- discussed in the Fortuna chapter -- that AES has with its relatively small 128-bit blocks.)
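
To make that concrete, here is a minimal sketch (mine, not FreeBSD's actual code) of Fortuna's generator core with ChaCha20 swapped in for AES as the PRF, using the pyca/cryptography package; the rekey-after-output step is the generator's backtracking protection:

    # Sketch of a Fortuna-style generator with ChaCha20 as the PRF.
    # Illustrative only: no entropy pools, no reseed scheduling, no
    # locking -- just the generate/rekey core.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    class ChaChaGenerator:
        def __init__(self, seed: bytes):
            assert len(seed) == 32
            self.key = seed                          # 256-bit ChaCha20 key

        def _keystream(self, nbytes: int) -> bytes:
            nonce = bytes(16)                        # key is fresh per call
            cipher = Cipher(algorithms.ChaCha20(self.key, nonce), mode=None)
            return cipher.encryptor().update(bytes(nbytes))

        def generate(self, nbytes: int) -> bytes:
            ks = self._keystream(nbytes + 32)
            self.key = ks[nbytes:]                   # rekey after every read:
            return ks[:nbytes]                       # backtracking protection

Because the key is replaced after every read, capturing the generator state later reveals nothing about earlier outputs.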

[1]: https://aka.ms/win10rng


    random(4): Fortuna: allow increased concurrency
    Add experimental feature to increase concurrency in Fortuna.  As this
    diverges slightly from canonical Fortuna, and due to the security
    sensitivity of random(4), it is off by default.  To enable it, set the
    tunable kern.random.fortuna.concurrent_read="1".  The rest of this commit
    message describes the behavior when enabled.

    […]
* https://github.com/freebsd/freebsd-src/commit/179f62805cf05c...

    random(4): Flip default Fortuna generator over to Chacha20
    The implementation was landed in r344913 and has had some bake time (at
    least on my personal systems).  There is some discussion of the motivation
    for defaulting to this cipher as a PRF in the commit log for r344913.
    
    As documented in that commit, administrators can retain the prior (AES-ICM)
    mode of operation by setting the 'kern.random.use_chacha20_cipher' tunable
    to 0 in loader.conf(5).
    
    Approved by: csprng(delphij, markm)
    Differential Revision: https://reviews.freebsd.org/D22878
* https://github.com/freebsd/freebsd-src/commit/68b97d40fbe826...


I note that the author of the comment you're responding to and the author of these commits are one and the same, fwiw.


The haphazard design of the LRNG has bugged cryptography engineers for a while, but at this point it's also pretty well studied†. I would be careful with the assumption that something like Fortuna is a "published and reviewed specification"; Fortuna is really just a case study Ferguson and Schneier wrote in _Practical Cryptography_. It's not a standard, or the winner of some kind of RNG design competition.

There are standard RNG designs that NIST publishes, but they're low level, on the order of constructions, not whole system designs. A lot of the important details are OS-specific. It shouldn't make you more comfortable to hear that a system uses something like HMAC_DRBG; that detail doesn't tell you a whole lot about how the system as a whole works.
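
For a sense of how low-level those constructions are, here is a simplified sketch of HMAC_DRBG's generate path from NIST SP 800-90A with SHA-256 (reseed counters, personalization, and additional input all omitted). Nothing in it says anything about entropy sources, seeding policy, or the OS:

    # Simplified HMAC_DRBG generate step (NIST SP 800-90A), SHA-256.
    # Everything that makes a kernel RNG hard -- gathering entropy,
    # deciding when the state is seeded -- lives outside this code.
    import hmac, hashlib

    def _update(K: bytes, V: bytes):
        # State transition with no additional input
        K = hmac.new(K, V + b"\x00", hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
        return K, V

    def generate(K: bytes, V: bytes, nbytes: int):
        out = b""
        while len(out) < nbytes:
            V = hmac.new(K, V, hashlib.sha256).digest()
            out += V
        K, V = _update(K, V)    # advance the state after producing output
        return out[:nbytes], K, V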

† See, for instance: https://eprint.iacr.org/2013/338.pdf -- skip to Section 5 for a pretty detailed description of the LRNG.


At which point? A quick skim of git log --after="2013" drivers/char/random.c picks out a few commits that just seem to change some part of the design. They might all have been done for good reasons, but as I said, it just seems pretty ad hoc:

    random: try to actively add entropy rather than passively wait for it
    random: only read from /dev/random after its pool has received 128 bits
    random: Return nbytes filled from hw RNG
    random: mix rdrand with entropy sent in from userspace
    random: use a different mixing algorithm for add_device_randomness()
    random: add backtracking protection to the CRNG
    random: replace non-blocking pool with a Chacha20-based CRNG
    random: use an improved fast_mix() function
    random: cap the rate which the /dev/urandom pool gets reseeded
    random: account for entropy loss due to overwrites
    random: allow fractional bits to be tracked
And often very little justification or reasoning is recorded:

    random: replace non-blocking pool with a Chacha20-based CRNG
    
    The CRNG is faster, and we don't pretend to track entropy usage in the
    CRNG any more.
> I would be careful with the assumption that something like Fortuna is a "published and reviewed specification"; Fortuna is really just a case study Ferguson and Schneier wrote in _Practical Cryptography_. It's not a standard, or the winner of some kind of RNG design competition.

Well, that's basically what a standard is, in my mind. I place little weight on the initials of whichever organization publishes a standard; I just think a written specification should exist so the design can be analyzed and examined as a whole, by people who are not experts in C programming and don't want to wade through the Linux source.

The section of the paper you linked would be a fine standard for Linux's RNG, except that the RNG is a continually moving target.


I think the ad hoc stuff you're talking about is mostly the kind of systems-design detail you're going to get when plugging any CSPRNG into a kernel. In particular, there's no way to get to a point where you can just reconcile a commit against a standard to know whether it's good or not; you're going to have to do the work of following and grokking the changelog no matter what, because no "standard" for a kernel CSPRNG is going to capture all the details of safely providing randomness to a Linux (or FreeBSD) kernel.


It's not mostly that; many of the commits change internal details of the algorithms. (My post originally butchered the list of commits; I edited it and now they show up.)

And a complete design specification can certainly address the practical details of integration into a kernel; the random number subsystem in Linux has only a small interface with the rest of the kernel, one that can easily be captured abstractly.
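
For illustration, here is a rough sketch of the shape of that interface, with method names loosely adapted from drivers/char/random.c; this is my abstraction for the sake of argument, not the kernel's actual API:

    # Hypothetical abstraction of the kernel RNG boundary. A spec that
    # pinned down behavior at this boundary could describe the subsystem
    # without reference to the rest of the kernel.
    from abc import ABC, abstractmethod

    class KernelRng(ABC):
        @abstractmethod
        def add_device_randomness(self, data: bytes) -> None:
            """Mix in device-specific, possibly low-quality input."""

        @abstractmethod
        def add_interrupt_randomness(self, irq: int) -> None:
            """Mix in an interrupt timing as an entropy event."""

        @abstractmethod
        def wait_for_random_bytes(self) -> None:
            """Block until the generator is seeded well enough."""

        @abstractmethod
        def get_random_bytes(self, nbytes: int) -> bytes:
            """Return nbytes of cryptographically secure output."""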


I don't know what I'm getting myself into rhetorically here. I'm not an apologist for the LRNG. My takes are just these:

1. It's not an unalloyed good thing to use "Fortuna", which is not so much a peer-reviewed standard as "a thing in a book with Bruce Schneier's name on it". If anything, I might be a little more nervous about a design that advertises being based on "Fortuna" than about an arbitrary design, because the hard parts of doing a kernel CSPRNG are, I think, mostly not what they wrote about in that book.

2. Whether or not you have a reference design to base off of, you're still going to end up with a stream of fiddly commits determining, e.g., when the generator is seeded well enough to release random bits to callers, or the order in which raw events are fed into the generator. Those annoying fiddly bits are the challenge of maintaining a secure RNG, and there's no standard anywhere that will release you from having to pay attention to that stuff (thankfully, Jason Donenfeld is doing that thankless work for us now).

The rest of it, sure. The LRNG sucks, and is ill-specified (if relatively well studied), and has a janky history. It's in good hands now! It should not be rewritten to comply with some "standard"! But sure, the rest of your arguments, well taken, &c &c.


Fortuna was analyzed (and generalized) in https://eprint.iacr.org/2014/167. It's a solid design.



