When Secure Boot is not available (unsupported or disabled), Lanzaboote
will attempt to boot kernels and initrds even when they fail hash
verification. Previously, this happened by falling back to LoadImage on
the kernel, which fails when Secure Boot is available, as the kernel is
not signed.
The SecureBoot variable offers a more explicit way of checking whether
Secure Boot is available. If the firmware supports Secure Boot, it
initializes this variable to 1 if it is enabled, and to 0 if it is
disabled. Applications are not supposed to modify this variable, and in
particular, since only trusted applications are loaded when Secure Boot
is active, we can assume it is never changed to 0 or deleted if Secure
Boot is active.
Hence, we can be sure that Secure Boot is inactive if this variable is
absent or set to 0, and can thus treat all hash verification errors as
non-fatal and proceed to boot arbitrary kernels and initrds (a warning
is still logged in this case). In all other cases, we treat hash
verification failures as fatal security violations, as must be done
when Secure Boot is active (this is not expected to produce false
positives in practice, unless there are bigger problems anyway).
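As a rough sketch, the resulting decision logic can be summarized like
this (the helper signature is illustrative, not the actual
implementation):

```rust
/// Minimal sketch, assuming a hypothetical helper that returns the raw
/// bytes of the global `SecureBoot` EFI variable (or `None` if absent).
/// Hash verification failures are treated as fatal exactly when this
/// returns true.
fn secure_boot_active(secure_boot_var: Option<&[u8]>) -> bool {
    match secure_boot_var {
        // Variable absent: the firmware does not support Secure Boot.
        None => false,
        // The firmware initializes a single byte: 1 = enabled, 0 = disabled.
        Some(value) => value.first() == Some(&1),
    }
}
```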
goblin 0.7.1 introduces certificate parsing support for PE files. This
seems to be broken, because we get:

Parsing PE failed Malformed entity: Unable to extract certificate. Probably cert_size:1599360838 is malformed!

from goblin when trying to parse our PE file in memory.
See #237 for context.
Atomic write works by first writing a temporary file, then syncing that
temporary file to ensure it is fully on disk before the program
continues, and finally renaming the temporary file to the target. The
middle step was missing, which is likely to leave a truncated target
file behind after power loss. Add this step.
Furthermore, even with this fix, atomicity is not fully guaranteed,
because FAT32 can become corrupted after power loss due to its design
shortcomings. Even though we cannot really do anything about this case,
adjust the comment to at least acknowledge the situation.
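For reference, a minimal sketch of the intended three-step sequence
(names are illustrative, not the actual code):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

/// Illustrative sketch of an atomic write: write a temporary file,
/// sync it to disk, then rename it over the target.
fn atomic_write(target: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = target.with_extension("tmp");
    let mut file = File::create(&tmp)?;
    file.write_all(contents)?;
    // The previously missing middle step: make sure the data is on disk
    // before the rename, so a power loss cannot leave a truncated target.
    file.sync_all()?;
    fs::rename(&tmp, target)
}
```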
Since most files (stubs, kernels and initrds) on the ESP are now
properly input-addressed or content-addressed, there is no point in
overwriting them any more. Hence we detect which generations are
already properly installed and do not reinstall them.
This approach leads to two distinct improvements:
* Rollbacks are more reliable, because initrd secrets and stubs no
longer change for existing generations (with the necessary exception
of stubs in case of signature key rotation). In particular, this
avoids the risk of a newer stub breaking old and previously working
generations, for example because of bad interactions with certain
firmware.
* Kernels and initrds that are not going to be (re)installed anyway are
not read and hashed any more. This significantly reduces the I/O and
CPU time required for the installation process, particularly when
there is a large number of generations.
The following drawbacks are noted:
* The first time installation is performed after these changes, most of
the ESP is rewritten at a different path; as a result, disk usage
roughly doubles until garbage collection is performed.
* If multiple generations share a bare initrd, but have different
secrets scripts, the final initrds will now be separated, leading to
increased disk usage. However, this situation should be rare, and the
previous behavior was arguably incorrect anyway.
* If the files on the ESP are corrupted, running the installation again
will not overwrite them with the correct versions. Since the files are
written atomically, this situation should not happen except in case of
file system corruption, and it is questionable whether overwriting
really fixes the problem in this case.
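The detection itself can be as simple as a path-existence check; a
sketch with illustrative names:

```rust
use std::path::Path;

/// Illustrative sketch: because a generation's artifacts are input- or
/// content-addressed, their presence at the expected paths already
/// implies they are correct, so the generation can be skipped.
fn generation_already_installed(artifact_paths: &[&Path]) -> bool {
    artifact_paths.iter().all(|path| path.exists())
}
```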
The stubs on the ESP are now input-addressed, where the inputs are the
system toplevel and the public key used for signing. This way, it is
guaranteed that any stub at a given path will boot the desired system,
even in the two edge cases where this was not previously guaranteed:
* The latest generation was deleted at one point, and its generation
number was reused by a different system configuration. This is
detected because the toplevel will change.
* The Secure Boot signing key was rotated, so old stubs would no longer
boot at all. This is detected because the public key will change.
Avoiding these two cases allows skipping the reinstallation of stubs
that are already in place at the correct path.
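A sketch of how such an input-addressed path could be derived (the hash
construction and file naming are illustrative and assume the `sha2`
crate):

```rust
use sha2::{Digest, Sha256};

/// Illustrative sketch: derive the stub file name from its inputs, the
/// system toplevel and the signing public key. If either changes, the
/// path changes, so a stub found at this path is known to be correct.
fn stub_file_name(toplevel: &str, public_key_pem: &[u8]) -> String {
    let mut hasher = Sha256::new();
    hasher.update(toplevel.as_bytes());
    hasher.update(public_key_pem);
    let hex: String = hasher
        .finalize()
        .iter()
        .map(|b| format!("{b:02x}"))
        .collect();
    format!("lanzaboote-stub-{hex}.efi")
}
```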
Kernels and initrds on the ESP are now content-addressed. By definition,
it is impossible for two different kernels or initrds to ever end up at
the same place, even in the presence of changing initrd secrets or other
unreproducibility.
The basic advantage of this is that installing the kernel or initrd for
a generation can never break another generation. In turn, this enables
the following two improvements:
* All generations can be installed independently. In particular, the
installation can be performed in one pass, one generation at a time.
As a result, the code is significantly simplified, and memory usage
(due to the temporary files) does not grow with the number of
generations any more.
* Generations that already have their files in place on the ESP do not
need to be reinstalled. This will be taken advantage of in a
subsequent commit.
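An illustrative sketch of deriving a content-addressed file name
(naming and hash choice are assumptions; the `sha2` crate is used
here):

```rust
use sha2::{Digest, Sha256};

/// Illustrative sketch: the on-ESP file name is derived from the file
/// contents, so two different kernels or initrds can never collide.
fn content_addressed_name(contents: &[u8], extension: &str) -> String {
    let hex: String = Sha256::digest(contents)
        .iter()
        .map(|b| format!("{b:02x}"))
        .collect();
    format!("{hex}.{extension}")
}
```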
Architecture is now a generic structure that can be specialized via an
"external" trait, which generates the paths you care about for your
target bootloader.
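A rough sketch of the shape of this specialization (the actual types
differ; the trait and method names here are made up):

```rust
use std::path::PathBuf;

/// Bootloader-agnostic architecture data.
pub enum Architecture {
    X86_64,
    AArch64,
}

/// "External" trait implemented per target bootloader to derive the
/// paths it cares about.
pub trait BootloaderPaths {
    fn stub_path(&self, arch: &Architecture, generation: u64) -> PathBuf;
}

struct SystemdLayout;

impl BootloaderPaths for SystemdLayout {
    fn stub_path(&self, arch: &Architecture, generation: u64) -> PathBuf {
        let suffix = match arch {
            Architecture::X86_64 => "x64",
            Architecture::AArch64 => "aa64",
        };
        PathBuf::from(format!(
            "EFI/Linux/nixos-generation-{generation}-{suffix}.efi"
        ))
    }
}
```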
systemd-boot is now installed once for many generations rather than
multiple times.
This means it is not really possible to manage different systems on the
same "machine", which is a very obscure use case: theoretically
possible, but not yet encountered.
We will fail hard if different architectures are encountered in the
bootspec. This should still be compatible with cross-compiling systems
in the future.
This generates an `lzbt-systemd` binary instead of `lzbt`, which uses a
special systemd-specific entrypoint.
This is part of the effort to enable multiple backends.
We introduce `linux-bootloader`, a crate for building Rust-based,
Linux-oriented bootloaders. It follows the systemd/UAPI group
specifications and semantics as much as possible, e.g. BLS, loader
capabilities, and stub capabilities.
A compile-time feature is introduced that allows building "fat" stubs
that can be used to build "fat" UKIs. "Fat" here means that the actual
kernel and initrd are embedded in the PE binary, not only the file path
and hash. This brings us one step closer to feature parity with
systemd-stub and thus one step closer to replacing it fully. Such a
"fat" or "real" UKI is also interesting for image-based deployments of
NixOS.
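As a sketch of the mechanism (the feature name, environment variables
and module layout are assumptions, not the actual code):

```rust
/// Fat stub: the kernel and initrd bytes are baked into the PE binary at
/// build time (the paths come from illustrative compile-time variables).
#[cfg(feature = "fat")]
mod payload {
    pub static KERNEL: &[u8] = include_bytes!(env!("LANZABOOTE_KERNEL_PATH"));
    pub static INITRD: &[u8] = include_bytes!(env!("LANZABOOTE_INITRD_PATH"));
}

/// Thin stub: only the file paths and hashes are embedded; the kernel
/// and initrd are read from the ESP at boot and verified against those
/// hashes (loading and verification elided in this sketch).
#[cfg(not(feature = "fat"))]
mod payload {}
```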
Bootspec has a mechanism called synthesis that can synthesize a
bootspec from the generation link alone if none is present.
This is useful for "vanilla" bootspec, which does not contain any
extensions, as this is what we use right now.
If we need extensions, we can also implement our own synthesis
mechanism on top of it.
Enabling synthesis gives us the superpower to support non-bootspec
users. :-)
The message about malformed generations should semantically be a
warning. However, since users might have hundreds of old and thus
malformed generations and can do little about it, this should remain a
debug message. This way the user is not spammed with no-op warnings
while still enabling debugging.
lzbt currently happily nukes all boot entries if it can't parse any
bootspecs. With the upcoming incompatible bootspec change, this might
be a problem that's worth avoiding. :)
I changed lzbt to fail hard in case it can't generate any boot
items.
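A minimal sketch of the new behaviour (type and error handling are
illustrative, using `anyhow`):

```rust
use anyhow::{bail, Result};

struct BootItem; // placeholder for the real type

/// Refuse to proceed (and thus to remove existing boot entries) when no
/// boot items could be generated at all.
fn ensure_boot_items(items: &[BootItem]) -> Result<()> {
    if items.is_empty() {
        bail!("no boot items could be generated, refusing to touch existing entries");
    }
    Ok(())
}
```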
People reportedly want to compile the stub on i686 and AArch64
platforms for testing. Make compilation possible by providing proper
`make_instruction_cache_coherent` implementations on these platforms.
For x86 (just as for x86_64), this is a no-op, because Intel made the
instruction cache coherent for compatibility with code that was written
before caches existed.
For AArch64, adapt the procedure from their manual to multiple
instructions.
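A sketch of what these implementations can look like (the AArch64
version assumes a 64-byte cache line for brevity; a real implementation
would read CTR_EL0):

```rust
/// x86/x86_64: the instruction cache is coherent with data writes, so
/// nothing needs to be done.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn make_instruction_cache_coherent(_memory: &[u8]) {}

/// AArch64: clean the data cache to the point of unification, invalidate
/// the instruction cache for the same range, then synchronize.
#[cfg(target_arch = "aarch64")]
pub fn make_instruction_cache_coherent(memory: &[u8]) {
    const LINE: usize = 64; // assumed; should really come from CTR_EL0
    let start = memory.as_ptr() as usize & !(LINE - 1);
    let end = memory.as_ptr() as usize + memory.len();

    unsafe {
        let mut addr = start;
        while addr < end {
            core::arch::asm!("dc cvau, {0}", in(reg) addr);
            addr += LINE;
        }
        core::arch::asm!("dsb ish");

        let mut addr = start;
        while addr < end {
            core::arch::asm!("ic ivau, {0}", in(reg) addr);
            addr += LINE;
        }
        core::arch::asm!("dsb ish", "isb");
    }
}
```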
... because this might not work if we were not loaded from a file
system. It also removes the issue where the image we end up booting
might not be the signed image that was actually loaded.
Fixes #123
Due to the use of hash maps, the order of file installation was not
deterministic. I've changed the code to use BTreeMaps instead, which
makes this deterministic. While I was here, I tried to simplify the
code a bit.
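For illustration, the difference boils down to iteration order:

```rust
use std::collections::BTreeMap;

/// BTreeMap iterates in sorted key order, so any installation plan
/// derived from it is deterministic (HashMap gives no such guarantee).
fn installation_order(files: &BTreeMap<String, Vec<u8>>) -> Vec<&String> {
    files.keys().collect()
}
```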
To minimize writes to the ESP while still picking up necessary changes,
compare the hashes of the files on the ESP with the "expected" hashes.
Only copy and overwrite already existing files if the hashes don't
match. This ensures a working-as-expected state on the ESP, unlike
before, when already existing files were simply ignored.
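A sketch of the check, with illustrative names and assuming the `sha2`
crate:

```rust
use sha2::{Digest, Sha256};
use std::fs;
use std::path::Path;

/// Illustrative sketch: a file only needs to be (re)written if it is
/// missing or its current hash differs from the expected one.
fn needs_install(target: &Path, expected_sha256: &[u8]) -> std::io::Result<bool> {
    if !target.exists() {
        return Ok(true);
    }
    let current = Sha256::digest(fs::read(target)?);
    Ok(current.as_slice() != expected_sha256)
}
```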
Previously, generations were installed one after another. Now all
artifacts (kernels, initrds, etc.) are first collected and then
installed. This reduces the writes to the ESP, as duplicate paths are
already removed in the collection phase.
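An illustrative sketch of the two-phase approach:

```rust
use std::collections::BTreeSet;
use std::path::PathBuf;

/// Phase one: collect the artifact paths of all generations into a set,
/// which drops duplicates, so phase two writes each path at most once.
fn collect_artifacts(generations: &[Vec<PathBuf>]) -> BTreeSet<PathBuf> {
    generations.iter().flatten().cloned().collect()
}
```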