<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style type="text/css">
html{overflow-y:scroll}
body{font-family:"Noto Sans","Droid Sans","DejaVu Sans","Arial",sans-serif;line-height:1.5}
tt,code{background-color:#f0f0f0;font-family:"Noto Sans Mono","Droid Sans Mono","DejaVu Sans Mono","Courier New",monospace,sans-serif;font-size:1em;}
pre{margin-left:3em}
p,ul,ol,blockquote,pre{font-size:1.0em;line-height:1.6}
li p{font-size:1.0em}
blockquote p{font-size:1.0em}
h1{font-size:1.5em}
h2{font-size:1.3em}
h3{font-size:1.0em}
h1 a{text-decoration:none}
table{border-collapse:collapse}
th,td{border:1px solid black}
table a{text-decoration:none}
table tr{font-size:1.0em;line-height:1.6}
.links a:hover{text-decoration:underline}
.links a:active{text-decoration:underline}
.links img{width:200px;padding-left:1em}
.links td{border:0px;padding-top:0.5em;padding-bottom:0.5em}
.headline{padding:0;font-weight:bold;font-size:1.5em;vertical-align:top;padding-bottom:0.5em;color:#2f8a59}
.navt{display:inline-block;box-sizing:border-box;-moz-box-sizing:border-box;-webkit-box-sizing:border-box;
min-width:14%;margin:0;padding:0;padding-left:0.5em;padding-right:0.5em;vertical-align:middle;
font-weight:bold;font-size:1.1em;text-align:center;border:1px solid black}
.here{border-bottom:0px;background-color:#ffffff}
.away{background-color:#2f8a59;}
.away a{text-decoration:none;display:block;color:#ffffff}
.away a:hover,.away a:active{text-decoration:underline}
.main{margin:0;padding-top:0em;padding-bottom:1%;clear:both}
</style>
<title>
librandombytes: Security</title>
</head>
<body>
<div class=headline>
librandombytes</div>
<div class=nav>
<div class="navt away"><a href=index.html>Intro</a>
</div><div class="navt away"><a href=download.html>Download</a>
</div><div class="navt away"><a href=install.html>Install</a>
</div><div class="navt away"><a href=api.html>API</a>
</div><div class="navt here">Security
</div><div class="navt away"><a href=license.html>License</a>
</div></div>
<div class=main>
<p>The standard security goal for a random-number generator (RNG) is that
no feasible computation can distinguish the RNG output from true
randomness, i.e., from a uniform random string of the same length. This
allows applications to treat the RNG output as true randomness.</p>
<p>Beyond merely asking for indistinguishability, some applications ask for
"forward secrecy". This means that the RNG output is indistinguishable
from true randomness for an attacker who sees the subsequent state of
the entire device, including the internal state of the RNG.</p>
<p>A typical explanation of the importance of forward secrecy is as
follows: "A rogue country's spy agency has intercepted ciphertexts that
a whistleblower has sent to a journalist. Agents then grab the
whistleblower's computer while the computer is unlocked, take it away
for analysis, and see that the whistleblower deleted the messages after
sending them. Can the agency use the computer's current RNG state to
reconstruct old keys and decrypt the ciphertexts?"</p>
<p>Sometimes there are also requests for "backward secrecy". A strict
interpretation of backward secrecy says that an attacker who sees the
state of the entire device cannot distinguish the <em>next</em> RNG output from
uniform. A weaker variant is for secrecy to <em>eventually</em> be restored
after a compromise. Either way, backward secrecy is advertised as
providing "self-healing", "post-compromise security", etc. Beware that
this assumes a questionable concept of compromise as a one-time event
rather than a naturally persistent state; meanwhile the complications of
trying to achieve "post-compromise security" have a long history of
distracting attention from the more fundamental task of preventing
compromises in the first place.</p>
<p>Whether or not backward secrecy is desired, there are many ways for RNGs
to fail to provide forward secrecy, or to fail to even reach the
standard security goal. For example:</p>
<ul>
<li>
<p>After source code for the RC4 stream cipher was posted anonymously in
1994, RC4 was rapidly shown to flunk the standard definition of cipher
security. Before and after this, RC4 was pushed by NSA (which had set
up <a href="https://cr.yp.to/export/dtn/V3N4_10_92.pdf">special procedures</a>
to allow RC4 export licenses) and by the RSA company. RC4 remained in
wide use for two decades for TLS encryption and for RNGs.</p>
</li>
<li>
<p>Around 2005, NSA paid the RSA company
<a href="https://www.reuters.com/article/us-usa-security-rsa-idUSBRE9BJ1C220131220">$10 million</a>
to use Dual EC as the default RNG in RSA's BSAFE library. Dual EC
was an RNG designed to be
<a href="https://projectbullrun.org/dual-ec/index.html">predictable by NSA</a>.</p>
</li>
<li>
<p>Newer RNGs have often turned out to be feasible to distinguish from
uniform. Common contributing factors include inadequate attention to
security (which is hard to measure) and much more attention to
performance (which is much easier to measure). For example, a
<a href="https://tosc.iacr.org/index.php/ToSC/article/view/8700">2020 attack</a>
uses just 2<sup>57</sup> CPU cycles to recover the secret "seed" and secret
"increment" of the "PCG64" RNG.</p>
</li>
<li>
<p>Many RNG constructions advertised as "provably secure" on the basis
of well-studied cryptographic primitives turn out to provide
much lower security levels than desired. For example, AES-256-CTR, the
usual "provably secure" way to use AES-256 as a stream cipher and RNG,
provably flunks a simple collision test after 2<sup>68</sup> bytes of output.
This is not a sharp cutoff: one needs a much smaller limit on the
output length to avoid having this attack succeed with probability
measurably above 0.</p>
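<p>The collision bound can be checked with a short birthday-paradox
calculation. The following sketch (purely illustrative, not from
librandombytes) approximates the probability that a truly uniform stream
of q 16-byte blocks contains a repeated block, something AES-256-CTR
under a single key can never produce:</p>
<pre><code>import math

def collision_probability(q, block_bits=128):
    # Birthday bound: probability that q uniform blocks of block_bits
    # bits contain at least one repeat, about 1 - exp(-q(q-1)/2^(b+1)).
    return 1.0 - math.exp(-q * (q - 1) / 2.0 ** (block_bits + 1))

# After 2^64 blocks (2^68 bytes), a uniform stream repeats a block
# with probability about 0.39; AES-CTR output never does, so the
# absence of repeats distinguishes it from true randomness.
print(collision_probability(2 ** 64))
</code></pre>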
</li>
<li>
<p>Even after the removal of Dual EC, NIST's "SP 800-90" RNG standard
remains unnecessarily complicated and unnecessarily difficult to
analyze. A partial analysis in 2018 showed that SP 800-90A
<a href="https://eprint.iacr.org/2018/349">failed to provide</a>
some of its claimed forward-secrecy properties. Also, because of
performance problems in how NIST handles short outputs, applications
are encouraged to add their own larger output buffers, typically
compromising forward secrecy.</p>
</li>
</ul>
<p>Furthermore, RNG security can be compromised by implementation failures.
For example:</p>
<ul>
<li>
<p>The AES design encourages implementors to use "T-table"
implementations on many platforms, exposing secret keys to
<a href="https://cr.yp.to/papers.html#cachetiming">cache-timing attacks</a>.</p>
</li>
<li>
<p>If a virtual machine generates RNG output (perhaps used for a nonce
sent to an attacker) and is then restored from a snapshot, many types
of RNGs will generate the same RNG output again (perhaps then used for
a secret key). This recycling compromises the security of various
applications that would otherwise be safe.</p>
</li>
<li>
<p>Similarly, many types of RNGs will produce identical results in parent
and child processes after <code>fork()</code>, again compromising security of
various applications that would otherwise be safe.</p>
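<p>The <code>fork()</code> hazard is easy to reproduce. In this illustrative,
POSIX-only sketch, Python's <code>random</code> module stands in for any
user-space RNG whose state is duplicated by <code>fork()</code>, while the
kernel RNG keeps the two processes distinct:</p>
<pre><code>import os, random

rng = random.Random(12345)        # stand-in user-space RNG state
r, w = os.pipe()
pid = os.fork()
if pid == 0:                      # child inherits a copy of rng's state
    os.write(w, rng.randbytes(8) + os.urandom(8))
    os._exit(0)
os.waitpid(pid, 0)
child = os.read(r, 16)
print(child[:8] == rng.randbytes(8))   # True: user-space output duplicated
print(child[8:] == os.urandom(8))      # False: kernel RNG stays distinct
</code></pre>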
</li>
<li>
<p>In 2006, the Debian OS distribution incorrectly removed essential
lines of code from the OpenSSL RNG, producing easily breakable keys in
various cryptographic applications. Other Debian-based distributions,
such as Ubuntu, copied the change.</p>
<p>The Debian error was publicly discovered and corrected in 2008. A
<a href="https://fahrplan.events.ccc.de/congress/2008/Fahrplan/events/2995.en.html">review</a>
showed that the error was triggered by warnings from a code-analysis
tool that did not understand what was going on in the RNG code.</p>
</li>
<li>
<p>In 2012, two
<a href="https://eprint.iacr.org/2012/064">independent</a>
<a href="https://factorable.net/">papers</a>
showed that a noticeable percentage of RSA public keys available on
the Internet were factorable because distinct keys had prime factors in common.</p>
<p>The second paper observed vulnerable keys from Linux, FreeBSD, and
Windows, and gave a convincing explanation of how randomness would
repeat on embedded Linux systems:</p>
<ul>
<li>
<p>The Linux kernel accumulated entropy very slowly on small devices.</p>
</li>
<li>
<p>Keys were generated by the device on first boot, starting very
early in the boot process.</p>
</li>
<li>
<p><code>/dev/urandom</code>, the source of randomness, returned data without
waiting for 256 bits of entropy to be collected.</p>
</li>
</ul>
<p>Another available mechanism, <code>/dev/random</code>, waited for entropy but
was justifiably avoided by many applications. This mechanism had the
serious misfeature of also waiting for further entropy for every
subsequent call; slow entropy collection thus turned <code>/dev/random</code>
into a <a href="https://github.com/openssl/openssl/issues/9078">problematic</a>
bottleneck.</p>
</li>
<li>
<p>In 2018, a Linux kernel bug was
<a href="https://lkml.org/lkml/2018/4/12/711">discovered</a>
that would result in <code>/dev/random</code> not waiting long enough for
entropy to be collected.</p>
</li>
</ul>
<p>RNG failures are often used as motivation for the development of further
RNGs, but if this process is not carefully managed then it increases the
load on reviewers and encourages further security problems.</p>
<p>librandombytes does not provide a new RNG; it is a wrapper around
existing RNGs. It does not wrap every available RNG; it limits the
number of options to simplify review. It takes the maximally centralized
option, the OS kernel's RNG, by default; it provides one backup option,
the OpenSSL RNG, just in case this is critical for system performance.</p>
<p>Usage of the OS kernel's RNG has an imperfect track record, as
illustrated by the papers from 2012 cited above. This is concerning.
However, there are reasons to believe that risks have been reduced:</p>
<ul>
<li>
<p>There is increasing adoption of OpenBSD's approach of provisioning
all devices with initial randomness at installation time, for
example by having a computer generate initial randomness for a
device that it is flashing or for a new virtual-machine image.</p>
</li>
<li>
<p>There is increasing usage of mechanisms to quickly collect entropy,
even on small devices. This serves as a backup if initial randomness
fails, and can rescue forward secrecy when flash does not adequately
erase randomness that was stored for the next boot. Also, people
trying to achieve backward secrecy require these mechanisms.</p>
</li>
<li>
<p>There is increasing usage of mechanisms for virtual machines to
collect entropy from the host machine, such as <code>virtio-rng</code>.</p>
</li>
<li>
<p>It is increasingly common for CPUs to include RNG hardware, although
this hardware is inherently difficult to audit and
<a href="https://www.iacr.org/archive/ches2013/80860203/80860203.pdf">easy to sabotage</a>.
A <a href="https://lore.kernel.org/all/776cb5c2d33e7fd0d2893904724c0e52b394f24a.1565817448.git.thomas.lendacky@amd.com/T/#u">BIOS bug</a>
removed all entropy from the RNG on AMD CPUs after suspend/resume;
the bug wasn't addressed for five years.</p>
</li>
<li>
<p>There is increasing availability of <code>getrandom</code> and/or <code>getentropy</code>.
These functions wait for initial entropy to be collected (without the
<code>/dev/random</code> misfeature of also waiting for further entropy for every
subsequent call). This avoids security problems in the disaster
scenario of initial randomness failing and entropy collection being
slow. Some history:</p>
<ul>
<li>
<p>2014: <code>getentropy</code> was introduced in OpenBSD 5.6, and <code>getrandom</code>
was introduced (as a variant of <code>getentropy</code>) in Linux kernel 3.17.</p>
</li>
<li>
<p>2015: <code>getentropy</code> and <code>getrandom</code> were added to
<a href="https://blogs.oracle.com/solaris/post/solaris-new-system-calls-getentropy2-and-getrandom2">Solaris 11.3</a>.</p>
</li>
<li>
<p>2017: <code>getentropy</code> and <code>getrandom</code> were added to
<a href="https://sourceware.org/legacy-ml/libc-alpha/2017-02/msg00079.html">glibc 2.25</a>,
with <code>getentropy</code> provided as a wrapper around <code>getrandom</code> on
Linux systems.</p>
</li>
<li>
<p>2018: <code>getentropy</code> and <code>getrandom</code> were added to
<a href="https://www.freebsd.org/releases/12.0R/relnotes/">FreeBSD 12.0</a>.</p>
</li>
</ul>
<p>FreeBSD already changed <code>/dev/urandom</code>
<a href="https://papers.freebsd.org/2017/vbsdcon/gurney-A_Deep_Dive_into_FreeBSDs_Kernel_RNG.files/gurney-A_Deep_Dive_into_FreeBSDs_Kernel_RNG.pdf">in 2000</a>
to wait for initial entropy to be collected, already bringing the
same benefit without requiring applications to adopt a new
interface. For OpenBSD, randomness was always available from
installation time, so <code>/dev/urandom</code> never had a reason to wait.
(But <code>getrandom</code> and <code>getentropy</code> have separate advantages of not
relying on opening a file; this simplifies <code>chroot</code> setups and
avoids various failure cases.)</p>
</li>
<li>
<p>There is increasing convergence upon the ChaCha20 stream cipher as
the foundation of random-number generation.</p>
<p>ChaCha20 is a 2008 variant of the 2005 Salsa20 stream cipher, and is
one of just two ciphers in TLS 1.3. ChaCha20 is stronger against
known attacks than the other TLS 1.3 cipher, AES-256-CTR; recall
that AES-256-CTR flunks a feasible collision test. ChaCha20 also
does not have AES's problem of encouraging variable-time software.</p>
<p>Regarding possible attack improvements, the challenge of breaking
scaled-down versions of Salsa20 and ChaCha20 has attracted 20 attacks
from <a href="https://cr.yp.to/snuffle.html#security">43 cryptanalysts</a>.
With the published techniques, ChaCha6 is feasible to break by a
large-scale attack, and ChaCha7 is subject to an attack faster than
brute force. (For comparison, even if the aforementioned collision
failure is ignored, 6 out of the 14 rounds of AES-256 are breakable
by a feasible attack, and 8 out of the 14 rounds of AES-256 are
subject to an attack faster than brute force.) The long-term
security of ChaCha8 is unclear, but the recommended ChaCha20 has
very low risk.</p>
</li>
<li>
<p>There is increasing convergence upon a simple, efficient, easily
analyzed mechanism to provide forward secrecy given a strong stream
cipher: <a href="https://blog.cr.yp.to/20170723-random.html">fast-key-erasure RNGs</a>.
Examples of these RNGs include</p>
<ul>
<li>
<p>2013: Nick Mathewson's <a href="https://github.com/nmathewson/libottery">libottery</a>,
which says it is "based on a construction described in XXX, as
summarized by DJB";</p>
</li>
<li>
<p>2013: OpenBSD's
<a href="https://github.com/openbsd/src/commit/90c1fad70a3483c2c72c3c90acf438a5f235c776"><code>arc4random</code> update</a>,
with credit to libottery, replacing previous use of RC4;</p>
</li>
<li>
<p>2018: FreeBSD's
<a href="https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=182610"><code>arc4random</code> update</a>,
with credit to libottery and OpenBSD; and</p>
</li>
<li>
<p>2022: Linux's
<a href="https://www.zx2c4.com/projects/linux-rng-5.17-5.18/inside-linux-kernel-rng-presentation-sept-13-2022.pdf">RNG updates</a>.</p>
</li>
</ul>
<p>A 2005 paper by
<a href="https://eprint.iacr.org/2005/029">Barak and Halevi</a>
analyzes the following special case of fast-key-erasure RNGs: an
m-bit key is used to generate 2m bits of stream-cipher output, split
into m bits of RNG output and m bits to overwrite the key. A general
fast-key-erasure RNG obtains the same security properties at much
higher speed by generating more output from each key: e.g., 768
bytes of output from a 32-byte key.</p>
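<p>The construction is short enough to sketch directly. This illustrative
stdlib-only sketch uses SHAKE256 as a stand-in for the stream cipher (the
deployments listed above use ChaCha20, and also wipe buffers in ways that
Python cannot guarantee):</p>
<pre><code>import hashlib

class FastKeyErasureRNG:
    BATCH = 768                       # output bytes generated per key

    def __init__(self, seed32):
        self.key = seed32             # 32-byte secret key
        self.buf = b""

    def _refill(self):
        # Expand the key into 32 + BATCH bytes of stream output; the
        # first 32 bytes immediately replace the key, erasing the
        # ability to recompute anything generated earlier.
        stream = hashlib.shake_256(self.key).digest(32 + self.BATCH)
        self.key, self.buf = stream[:32], stream[32:]

    def randombytes(self, n):
        out = bytearray()
        while n > len(self.buf):
            out += self.buf
            n -= len(self.buf)
            self._refill()
        out += self.buf[:n]
        self.buf = self.buf[n:]       # consumed bytes are discarded
        return bytes(out)
</code></pre>
<p>Generating a large batch per key (768 bytes here, rather than the m
bits of the Barak&ndash;Halevi special case) is what makes the key
erasure cheap relative to the output rate.</p>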
</li>
</ul>
<p><code>librandombytes-kernel</code> uses <code>getrandom</code> if it finds it, otherwise
<code>getentropy</code> if it finds it, otherwise <code>/dev/urandom</code>. In the case of
<code>/dev/urandom</code>, <code>librandombytes-kernel</code> waits at program startup for
<code>/dev/random</code> to become readable; however, it skips this if the file
<code>/var/run/urandom-ready</code> exists (or if <code>/dev/random</code> does not exist).</p>
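<p>That selection order is straightforward to mirror in application code.
Here is a simplified, illustrative sketch (not the librandombytes source;
it omits the <code>getentropy</code> step, since Python does not expose that
call, and Python's <code>os.getrandom</code> wraps the system call where
available):</p>
<pre><code>import os

def randombytes(n):
    # Prefer getrandom(), which blocks until the kernel RNG has been
    # initially seeded and then never blocks again.
    if hasattr(os, "getrandom"):
        buf = b""
        while len(buf) &lt; n:          # getrandom() may return short reads
            buf += os.getrandom(n - len(buf))
        return buf
    # Fallback for older systems: /dev/urandom, which is why
    # librandombytes-kernel adds its own startup wait, described above.
    with open("/dev/urandom", "rb") as f:
        return f.read(n)
</code></pre>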
<p>The idea is that system administrators of Linux systems too old to have
<code>getrandom</code> can run</p>
<pre><code>dd count=1 bs=64 if=/dev/random of=/dev/urandom status=none \
&& findmnt -t tmpfs -T /var/run >/dev/null \
&& touch /var/run/urandom-ready &
</code></pre>
<p>from boot scripts so that <code>librandombytes-kernel</code> doesn't wait after
initial per-boot entropy collection. Note that systems where <code>/var/run</code>
is a persistent directory (rather than <code>tmpfs</code>) cleared by boot
scripts should not create <code>/var/run/urandom-ready</code>:
<code>librandombytes-kernel</code> might run before the boot scripts clear
the directory, and would then see a stale marker left over from the
previous boot.</p>
<p>There are various other ways that one can imagine <code>/dev/urandom</code> being
read too early on old Linux systems (and not so old, as in the 2018
Linux bug mentioned above); <code>librandombytes-kernel</code> tries to reduce
risks but does not make guarantees. Provisioning systems with initial
randomness is recommended in all cases. There are also many other
security reasons to recommend retiring old Linux systems if they cannot
be upgraded.</p>
<p><code>librandombytes-openssl</code>, which simply calls OpenSSL's <code>RAND_bytes</code>,
raises more security concerns:</p>
<ul>
<li>
<p>OpenSSL's <code>RAND_bytes</code> maintains an RNG state in user space,
evidently because of a belief that this produces an important
speedup. With its current defaults, OpenSSL can take an hour before
it reseeds the user RNG state from the OS. This means that whatever
entropy-injection mechanisms are in the OS RNG, for example to
protect virtual machines, can leave OpenSSL unprotected for an hour.</p>
</li>
<li>
<p>OpenSSL's default RNG,
starting in <a href="https://www.openssl.org/blog/blog/2017/08/12/random/">2017</a>,
is a NIST RNG, specifically CTR-DRBG built on top of AES. A
<a href="https://eprint.iacr.org/2019/996">2019 attack</a>
showed that OpenSSL in FIPS mode was using T-table implementations
of AES and was leaking RNG secrets to a timing attack. Even with a
constant-time AES implementation, the security level of this RNG is
not clear from the existing literature.</p>
</li>
<li>
<p>When OpenSSL first seeds its RNG from <code>/dev/urandom</code> on
pre-<code>getrandom</code> Linux systems, it waits (starting with OpenSSL
1.1.1d in 2019) for <code>/dev/random</code> to become readable, and then
creates shared memory segment 114. It skips the <code>/dev/random</code> wait
if shared memory segment 114 exists already. This does not have the
same security properties as checking for existence of a file that
can be created only by root: if <em>any</em> account on the same system is
running code from an attacker then the attacker can create shared
memory segment 114, disabling the <code>/dev/random</code> test for all OpenSSL
instances on the same system.</p>
</li>
</ul>
<p>These OpenSSL issues are not obviously showstoppers. For systems where
using the OS RNG is a performance problem, OpenSSL's RNG seems to be a
reasonable backup plan. Of course, it would be possible to patch OpenSSL
to use a fast-key-erasure RNG on top of ChaCha20, to use a safer
<code>/dev/random</code> test (which could also be handled as a wrapper around
OpenSSL), and to reseed more frequently.</p><hr><font size=1><b>Version:</b>
This is version 2023.09.04 of the "Security" web page.
</font>
</div>
</body>
</html>