This looks like an excellent idea. I will implement support for it in my libc as soon as it's available in a release kernel.
Currently userspace has incentive to roll their own RNG stuff. This removes that, which is good for everyone. The less incentive you give people to write code that has already been written by other, more experienced people, the better.
I would go even further and export the kernel ciphers via vDSO. Then user space could rely on those ciphers being optimized for the host CPU and side channel free instead of everybody bringing their own crypto primitives. I don't think there is a good reason why gnupg and openssl would bring different crypto primitives.
For those who need it: the vDSO is something of a hack that allows certain syscalls to be implemented without a context switch.
Making high-quality, no-compromises randomness a first-class OS feature seems like a great idea. Especially because it reduces the chances of using a bad, incorrectly seeded, inadequately seeded, or cloned-from-snapshot userspace cryptographic PRNG, and of using a non-cryptographic PRNG for cryptographic purposes.
I'm not a kernel expert, so I don't know if the vDSO is the right implementation, but the idea seems sound. Make it harder for people to make mistakes that break security!
On many operating systems, including macOS and Windows, the only ABI-stable interface is a userland-to-userland interface. Application code loads a shared library vended by the system and calls functions from that library, like open() or CreateFileW(), in userland. These functions, in turn, are usually thin wrappers around system calls with equivalent argument lists – but not always, and even when they are, that's only an implementation detail. Trying to make system calls directly without going through the wrappers risks incompatibility with future OS versions, e.g. [1].
On Linux, traditionally, the userland-kernel interface itself is ABI-stable. The userland code can be fully custom and doesn't even need to support dynamic linking. Syscall numbers and arguments are fixed, and application code can perform its own syscall instructions. You can then layer something like glibc on top of that, which provides its own syscall wrapper functions with a corresponding stable (userland-to-userland) ABI, but that's separate.
The vDSO has always been a step away from that. It's userland code, automatically mapped by the kernel, that provides its own system call wrappers. Applications are still allowed to make system calls manually, but they're encouraged to use the vDSO instead. Its original purpose was to allow certain functions such as gettimeofday() to be completed in userland rather than actually performing a syscall [2], but it's been used for a few other things. It's worked pretty well, but it does have the drawback that statically linked binaries no longer control all of the code in their address space. This, for instance, caused a problem with the Go runtime [3], which expected userland code to follow a certain stack discipline.
Anyway, this patch seems to me like a significant further step. Not just putting an RNG into the vDSO, which is more complicated than anything the vDSO currently does, but also essentially saying that you must use the vDSO's RNG to be secure (to quote the RFC, "userspace rolling its own RNG from a getrandom() seed is fraught"), and explicitly choosing not to provide stable APIs for custom userland RNGs to access the same entropy information.
I don't think that's necessarily a bad thing. It's not that complicated, and to me, macOS' and Windows' approach always seemed more sensible in the first place. But it's a step worth noting.
[1] https://github.com/jart/cosmopolitan/issues/426
[2] https://man7.org/linux/man-pages/man7/vdso.7.html
[3] https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/
> hyperspeed card shuffling
i think that's mostly scientific computing where you want the ability to control the RNG and even intentionally use deterministic seeds for reproducibility.
i think if the kernel is going to provide secure random numbers (which seems like a good idea), it should be through a (new) specific system call that fails unless a hardware entropy facility is available. performance seems like a secondary goal, where the primary is ensuring that people are using the right thing to generate keys and such.
There is always the option to reseed a userspace PRNG with getrandom() regularly. This makes the userspace PRNG safe and more versatile than getrandom() alone.
Nowhere near an expert, but this seems like a bad idea?
Why not just have the kernel map a page containing random bytes, that it rewrites with newly seeded random bytes when needed? Then userspace CSPRNGs could use that as a basis for their own reseeding.
This change makes me sad, not because it isn't brilliant work - it is - but because this kind of brilliant work is unlikely to move the needle in the real-world. I can't use this RNG because it isn't FIPS validated. I can't sponsor getting it FIPS validated because the cryptography it uses isn't even FIPS compatible. It wouldn't make it past the cursory skim of a reviewer. That says more about FIPS than it does this work, but it still means that it's a non-starter for many libraries and applications that end up having US Federal Government users ... which is to say that basically everything important gets pushed away from benefiting from work like this.
Separately, I'm also a little sad that there are no distinct RNGs for secret and non-secret output. It is an indispensable layer of defense to have separately seeded RNGs for these use-cases. That way, if there is an implementation flaw or bug in the RNG that leads to re-use or predictability, it is vastly harder for that flaw to be abused. s2n, BouncyCastle, OpenSSL, and other user-space libraries use separately seeded RNGs for this reason, and I don't think I could justify removing that protection.