document minor restored kernel hardening features

This commit is contained in:
Daniel Micay 2022-10-29 03:53:57 -04:00
parent 53c6cdf17a
commit 45ff49c34d


@ -310,31 +310,40 @@
<li>
Hardened kernel
<ul>
<li>4-level page tables are enabled on arm64 to provide a much
larger address space (48-bit instead of 39-bit) with
significantly higher-entropy Address Space Layout
Randomization (33-bit instead of 24-bit, i.e. 2^33 rather than
2^24 possible base addresses).</li>
<li>Random canaries with a leading zero are added to the
kernel heap (slub) to block C string overflows, absorb small
overflows and detect linear overflows or other heap corruption
when the canary value is checked (on free, copies to/from
userspace, etc.). A userspace sketch of the idea follows this
list.</li>
<li>Memory is wiped (zeroed) as soon as it's released in both
the low-level kernel page allocator and the higher-level kernel
heap allocator (slub). This substantially reduces the lifetime
of sensitive data in memory, mitigates use-after-free
vulnerabilities and makes most uninitialized data usage
vulnerabilities harmless. Without our changes, memory that's
released retains data indefinitely until the memory is handed
out for other uses and gets partially or fully overwritten by
new data. A sketch of the semantics appears after this
list.</li>
<li>Kernel stack allocations are zeroed to make most
uninitialized data usage vulnerabilities harmless (see the
example after this list).</li>
<li>Assorted attack surface reduction through disabling
features outright or setting up infrastructure to dynamically
enable/disable them only while needed (perf, ptrace); a sketch
of one such toggle follows this list.</li>
<li>Assorted upstream hardening features are enabled,
including many which we played a part in developing and
landing upstream as part of our linux-hardened project (which
we intend to revive as a more active project).</li>
<li>Forced kernel module signing with per-build keys and
lockdown set to forced confidentiality mode help to enforce a
low-level boundary between the kernel and userspace even if
mistakes are made in SELinux policy or there's a deep
userspace compromise.</li>
<li>Additional consistency/integrity checks are enabled for
frequently targeted kernel data structures (a simplified
example of one such check follows this list).</li>
</ul>
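<p>As a rough userspace illustration of the leading-zero canary item
above (this is not the actual slub patch, and all names here are
hypothetical), an allocator can place a random value whose low byte is
zero just past each object and verify it on free. The zero byte
terminates C string overflows, and any other overwrite trips the
check:</p>
<pre>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint64_t canary_base; /* stands in for a per-boot random secret */

static uint64_t canary_for(void *obj) {
    /* Mix in the object address so each slot gets a different canary,
     * then clear the low byte so it terminates C string overflows. */
    return (canary_base ^ (uint64_t)(uintptr_t)obj) & ~(uint64_t)0xff;
}

static void *alloc_with_canary(size_t size) {
    uint8_t *obj = malloc(size + sizeof(uint64_t));
    if (!obj)
        return NULL;
    uint64_t c = canary_for(obj);
    memcpy(obj + size, &c, sizeof(c)); /* canary sits just past the object */
    return obj;
}

static void free_with_canary(void *obj, size_t size) {
    uint64_t stored, expected = canary_for(obj);
    memcpy(&stored, (uint8_t *)obj + size, sizeof(stored));
    if (stored != expected) {
        fprintf(stderr, "heap corruption detected\n");
        abort(); /* the kernel can log and panic instead */
    }
    free(obj);
}

int main(void) {
    canary_base = 0x1122334455667700; /* fixed here; random in reality */
    char *p = alloc_with_canary(16);
    if (!p)
        return 1;
    strcpy(p, "in bounds");  /* fits: the canary stays intact */
    free_with_canary(p, 16); /* check passes and the memory is freed */
    return 0;
}
</pre>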
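<p>The zero-on-free item corresponds to what upstream Linux now exposes
as init_on_free (plus its page allocator counterpart). A minimal sketch
of the semantics, with a hypothetical wrapper name:</p>
<pre>
#include <stdlib.h>
#include <string.h>

/* Wipe memory at the moment of release rather than leaving stale data
 * in place until the allocation is eventually reused. Note that a real
 * userspace version must stop the compiler from eliding the dead store
 * (e.g. via explicit_bzero); the kernel's wiping doesn't have that
 * problem. */
static void free_zeroed(void *ptr, size_t size) {
    if (!ptr)
        return;
    memset(ptr, 0, size);
    free(ptr);
}

int main(void) {
    char *key = malloc(32);
    if (!key)
        return 1;
    /* ... key holds sensitive material while in use ... */
    free_zeroed(key, 32); /* secrets wiped immediately, not eventually */
    return 0;
}
</pre>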
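<p>Kernel stack zeroing is typically done by building with the
compiler's automatic variable initialization (for example
-ftrivial-auto-var-init=zero, which CONFIG_INIT_STACK_ALL_ZERO enables
for the kernel). This sketch shows the class of bug it defuses:</p>
<pre>
#include <stdio.h>

struct report {
    char tag;      /* followed by padding bytes on typical 64-bit ABIs */
    long value;
    long reserved; /* never written below */
};

static void fill(struct report *r) {
    r->tag = 'A';
    r->value = 42;
    /* forgot r->reserved: without auto-init it holds stale stack data */
}

int main(void) {
    struct report r; /* zero-filled when auto-var-init=zero is in force */
    fill(&r);
    /* If this struct were copied out wholesale (as kernels copy structs
     * to userspace), the padding and 'reserved' field would otherwise
     * disclose whatever was previously on the stack. */
    printf("value=%ld reserved=%ld\n", r.value, r.reserved);
    return 0;
}
</pre>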
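<p>One concrete knob behind the perf example is the
kernel.perf_event_paranoid sysctl: Android kernels accept a value of 3
that blocks perf use by unprivileged processes entirely, so it can be
held at 3 and lowered only while profiling is actually wanted. A
minimal, privileged sketch of such a toggle (an assumption about the
mechanism, not GrapheneOS's exact implementation):</p>
<pre>
#include <stdio.h>

/* Write a new level to the sysctl; returns 0 on success. Requires
 * privileges, and level 3 is an Android kernel extension (mainline
 * tops out at 2). */
static int set_perf_paranoid(int level) {
    FILE *f = fopen("/proc/sys/kernel/perf_event_paranoid", "w");
    if (!f)
        return -1;
    int ok = fprintf(f, "%d\n", level) > 0;
    return fclose(f) == 0 && ok ? 0 : -1;
}

int main(void) {
    set_perf_paranoid(1); /* temporarily allow profiling */
    /* ... run the profiler ... */
    set_perf_paranoid(3); /* lock perf back down */
    return 0;
}
</pre>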
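<p>The consistency/integrity checks item covers options like
CONFIG_DEBUG_LIST, which validates the kernel's doubly linked lists
before modifying them. A simplified rendition of the check the kernel
performs in __list_add_valid():</p>
<pre>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct list_head {
    struct list_head *next, *prev;
};

/* Reject the insertion unless the neighbors still point at each other
 * and the new node isn't already one of them; corrupted list links are
 * a staple of heap exploitation. */
static bool list_add_valid(struct list_head *new, struct list_head *prev,
                           struct list_head *next) {
    if (next->prev != prev || prev->next != next ||
        new == prev || new == next) {
        fprintf(stderr, "list corruption detected\n");
        return false;
    }
    return true;
}

static void list_add_checked(struct list_head *new, struct list_head *prev,
                             struct list_head *next) {
    if (!list_add_valid(new, prev, next))
        abort(); /* the kernel warns and refuses the operation */
    next->prev = new;
    new->next = next;
    new->prev = prev;
    prev->next = new;
}

int main(void) {
    struct list_head head = { &head, &head }; /* empty circular list */
    struct list_head node;
    list_add_checked(&node, &head, head.next); /* sane: checks pass */
    printf("node inserted\n");
    return 0;
}
</pre>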
</li>
<li>Android Runtime Just-In-Time (JIT) compilation/profiling is fully