forked from rosa/hakurei

155 Commits

Author SHA1 Message Date
Kat
954b60dbab TODO: consider writing tests for the test runner. 2026-04-19 02:35:47 +10:00
Kat
a06e622a84 TODO: actually write tests lol. 2026-04-19 02:35:47 +10:00
Kat
a2af3a0db0 cmd/pkgserver/ui_test: implement skipping from DSL 2026-04-19 02:35:47 +10:00
Kat
4c9d1c9d87 cmd/pkgserver/ui_test: add JSON reporter for go test integration 2026-04-19 02:35:47 +10:00
Kat
178cb4d2d2 cmd/pkgserver/ui_test: implement DSL and runner 2026-04-19 02:35:47 +10:00
Kat
cd3224e4af cmd/pkgserver/ui_test: add DOM reporter 2026-04-19 02:35:47 +10:00
Kat
bae2735a0d cmd/pkgserver/ui_test: add basic CLI reporter 2026-04-19 02:35:47 +10:00
mae
8a38b614c6 cmd/pkgserver: update 2026-04-18 11:32:08 -05:00
mae
3286fff076 cmd/pkgserver: fix gitignore 2026-04-18 11:30:57 -05:00
mae
fd1884a84b cmd/pkgserver: better no results handling 2026-04-18 11:30:57 -05:00
mae
fe6424bd6d cmd/pkgserver: better no results handling 2026-04-18 11:30:57 -05:00
mae
004ac511a9 cmd/pkgserver: finish search implementation 2026-04-18 11:30:57 -05:00
mae
ba17f9d4f3 cmd/pkgserver: remove get endpoint count field 2026-04-18 11:30:56 -05:00
mae
ea62f64b8f cmd/pkgserver: search endpoint 2026-04-18 11:30:56 -05:00
mae
86669363ac cmd/pkgserver: pagination bugfix 2026-04-18 11:30:56 -05:00
6f5b7964f4 cmd/pkgserver: guard sass/ts behind build tag
Packaging nodejs and ruby is an immense burden for the Rosa OS base system, and these files diff poorly.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
mae
a195c3760c cmd/pkgserver: add size 2026-04-18 11:30:56 -05:00
cfe52dce82 cmd/pkgserver: expose size and store pre-encoded ident
This change also handles SIGSEGV correctly in newStatusHandler, and makes serving status fully zero copy.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
8483d8a005 cmd/pkgserver: look up status by name once
This has far less overhead.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
5bc5aed024 cmd/pkgserver: refer to preset in index
This enables referencing back to internal/rosa through an entry obtained via the index.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
9465649d13 cmd/pkgserver: handle unversioned value
This omits the field for an unversioned artifact, and only does so once on startup.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
33c461aa67 cmd/pkgserver: determine disposition route in mux
This removes duplicate checks and uses the more sound check in mux.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
dee0204fc0 cmd/pkgserver: format get error messages
This improves source code readability on smaller displays.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
2f916ed0c0 cmd/pkgserver: constant string in pattern
This resolves patterns at compile time.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
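The idea named in this commit can be sketched as follows. The route names here are hypothetical, not taken from pkgserver; the point is that when the shared prefix is an untyped constant, the full pattern is a constant expression the compiler concatenates, rather than a string built at registration time.

```go
package main

import (
	"fmt"
	"net/http"
)

// apiPrefix is a hypothetical route prefix. Because it is a constant,
// "GET "+apiPrefix+"status" below is a constant expression: the full
// pattern string is resolved at compile time.
const apiPrefix = "/api/v0/"

func main() {
	mux := http.NewServeMux()
	// Go 1.22+ ServeMux patterns may include the HTTP method.
	mux.HandleFunc("GET "+apiPrefix+"status", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	fmt.Println("registered " + "GET " + apiPrefix + "status") // prints: registered GET /api/v0/status
}
```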
55ce3a2f90 cmd/pkgserver: satisfy handler signature in method
This is somewhat cleaner.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
3f6a07ef59 cmd/pkgserver: log instead of write encoding error
This message is unlikely to be useful to the user, and output may be partially written at this point, causing the error to be even less intelligible.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
02941e7c23 cmd/pkgserver: appropriately mark test helpers
This improves usefulness of test log messages.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
b9601881b7 cmd/pkgserver: do not omit report field
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
58596f0af5 cmd/pkgserver: gracefully shut down on signal
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
02cde40289 cmd/pkgserver: specify full addr string in flag
This allows greater flexibility.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
5014534884 cmd/pkgserver: make report argument optional
This allows serving metadata only without a populated report. This also removes the out-of-bounds read on args when no arguments are passed.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
13cf99ced4 cmd/pkgserver: embed internal/rosa metadata
This change also cleans up and reduces some unnecessary copies.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
6bfb258fd0 cmd/pkgserver: do not assume default mux
This helps with testing.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
b649645189 cmd/pkgserver: create index without report
This is useful for testing, where report testdata is not available.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 11:30:56 -05:00
mae
3ddba4e21f cmd/pkgserver: add sort orders, change pagination rules 2026-04-18 11:30:56 -05:00
mae
40a906c6c2 cmd/pkgserver: add /status endpoint 2026-04-18 11:30:56 -05:00
mae
06894e2104 cmd/pkgserver: minimum viable frontend 2026-04-18 11:30:56 -05:00
mae
56f0392b86 cmd/pkgserver: api versioning 2026-04-18 11:30:56 -05:00
mae
e2315f6c1a cmd/pkgserver: add get endpoint 2026-04-18 11:30:56 -05:00
mae
e4aee49eb0 cmd/pkgserver: add count endpoint and restructure 2026-04-18 11:30:56 -05:00
mae
6c03cc8b8a cmd/pkgserver: add status endpoint 2026-04-18 11:30:56 -05:00
mae
59ade6a86b cmd/pkgserver: add createPackageIndex 2026-04-18 11:30:56 -05:00
mae
59ab493035 cmd/pkgserver: add command handler 2026-04-18 11:30:56 -05:00
mae
d80a3346e2 cmd/pkgserver: replace favicon 2026-04-18 11:30:56 -05:00
mae
327a34aacb cmd/pkgserver: pagination 2026-04-18 11:30:56 -05:00
mae
ea7c6b3b48 cmd/pkgserver: basic web ui 2026-04-18 11:30:56 -05:00
5647c3a91f internal/rosa/meson: run meson test suite
Tests requiring internet access or unreasonable dependencies are removed.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 01:07:20 +09:00
992139c75d internal/rosa/python: extra script after install
This is generally for the test suite, due to the lack of a standard or widely agreed-upon convention.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:35:24 +09:00
57c69b533e internal/rosa/meson: migrate to helper
This also migrates to source from the Microsoft Github release.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:16:22 +09:00
6f0c2a80f2 internal/rosa/python: migrate setuptools to helper
This is much cleaner, and should be functionally equivalent.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:04:19 +09:00
08dfefb28d internal/rosa/python: pip helper
Binary pip releases are not considered acceptable, so this more generic helper is required for building from source.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:03:36 +09:00
b081629662 internal/rosa/libxml2: 2.15.2 to 2.15.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:05:49 +09:00
fba541f301 internal/rosa/nss: 3.122.1 to 3.123
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:05:23 +09:00
5f0da3d5c2 internal/rosa/gnu: mpc 1.4.0 to 1.4.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:04:33 +09:00
4d5841dd62 internal/rosa: elfutils 0.194 to 0.195
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:03:49 +09:00
9e752b588a internal/pkg: drop cached error on cancel
This avoids disabling the artifact when using the individual cancel method. Unfortunately this makes the method blocking.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 03:24:48 +09:00
27b1aaae38 internal/pkg: pending error alongside done channel
This significantly simplifies synchronisation of access to identErr.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 03:10:37 +09:00
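The synchronisation scheme this message describes — an error published alongside a done channel — is a standard Go idiom: the error is written exactly once before the channel is closed, so any goroutine that first receives from the channel can read the error without a lock. The sketch below uses invented names (pending, finish, wait), not the actual internal/pkg types.

```go
package main

import (
	"errors"
	"fmt"
)

// pending pairs an error with a done channel. err is written exactly once,
// before done is closed; close(done) happens-before every receive from it,
// so readers that wait on done may read err without further synchronisation.
type pending struct {
	done chan struct{}
	err  error
}

func newPending() *pending { return &pending{done: make(chan struct{})} }

// finish records the outcome and releases all waiters. Call at most once.
func (p *pending) finish(err error) {
	p.err = err   // happens-before close(done)
	close(p.done) // publishes err to every receiver
}

// wait blocks until finish has been called, then returns the recorded error.
func (p *pending) wait() error {
	<-p.done
	return p.err
}

func main() {
	p := newPending()
	go p.finish(errors.New("cure failed"))
	fmt.Println(p.wait()) // prints: cure failed
}
```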
9e18de1dc2 internal/pkg: flush cached errors on abort
This avoids disabling the artifact until cache is reopened. The same has to be implemented for Cancel in a future change.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 02:59:44 +09:00
b80ea91a42 cmd/mbf: abort remote cures
This command arranges for all pending cures to be aborted. It does not wait for cures to complete.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 22:47:02 +09:00
30a9dfa4b8 internal/pkg: abort all pending cures
This cancels all current pending cures without closing the cache.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 22:40:35 +09:00
8d657b6fdf cmd/mbf: cancel remote cure
This exposes the new fine-grained cancel API in cmd/mbf.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 22:00:04 +09:00
ae9b9adfd2 internal/rosa: retry in SIGSEGV test
Munmap is not always immediate.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 20:45:19 +09:00
dd6a480a21 cmd/mbf: handle flags in serve
This enables easier expansion of the protocol.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 20:14:09 +09:00
3942272c30 internal/pkg: fine-grained cancellation
This enables a specific artifact to be targeted for cancellation.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 19:33:21 +09:00
9036986156 cmd/mbf: optionally ignore reply
An acknowledgement is not always required in this use case. This change also adds 64 bits of connection configuration for future expansion.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 16:46:49 +09:00
a394971dd7 cmd/mbf: do not abort cache acquisition during testing
This can sometimes fire during testing due to how short the test is.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 02:06:51 +09:00
9daba60809 cmd/mbf: daemon command
This services internal/pkg artifact IR with Rosa OS extensions originating from another process.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 02:05:59 +09:00
bcd79a22ff cmd/mbf: do not open cache for IR encoding
This can now be allocated independently.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 01:04:39 +09:00
0ff7ab915b internal/pkg: move IR primitives out of cache
These are memory management and caching primitives. Having them as part of Cache is cumbersome and requires a temporary directory that is never used. This change isolates them from Cache to enable independent use.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 01:02:13 +09:00
823575acac cmd/mbf: move info command
This is cleaner with less shared state.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-16 17:43:52 +09:00
136bc0917b cmd/mbf: optionally open cache
Some commands do not require the cache. This change also makes acquisition of locked cache cancelable.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-16 15:59:34 +09:00
d6b082dd0b internal/rosa/ninja: bootstrap with verbose output
This otherwise outputs nothing, and appears to hang until the (fully single-threaded) bootstrap completes.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:19:05 +09:00
89d6d9576b internal/rosa/make: optionally format value as is
This enables correct formatting for awkward configure scripts.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:17:58 +09:00
fafce04a5d internal/rosa/kernel: firmware 20260309 to 20260410
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:16:47 +09:00
5d760a1db9 internal/rosa/kernel: 6.12.80 to 6.12.81
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:16:30 +09:00
d197e40b2a internal/rosa/python: mako 1.3.10 to 1.3.11
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:21:54 +09:00
2008902247 internal/rosa/python: packaging 26.0 to 26.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:15:18 +09:00
30ac985fd2 internal/rosa/meson: 1.10.2 to 1.11.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:14:52 +09:00
e9fec368f8 internal/rosa/nss: 3.122 to 3.122.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:13:45 +09:00
46add42f58 internal/rosa/openssl: disable building docs
These take very long and are never used in the Rosa OS environment.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:13:18 +09:00
377b61e342 internal/rosa/openssl: do not double test job count
The test suite is racy; this reduces flakiness.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:12:36 +09:00
520c36db6d internal/rosa: respect preferred job count
This discontinues use of nproc, and also overrides detection behaviour in ninja.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 18:49:36 +09:00
3352bb975b internal/pkg: job count in container environment
This exposes preferred job count to the container initial process.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 15:49:21 +09:00
f7f48d57e9 internal/pkg: pass impure job count
This is cleaner than checking the CPU count during cure; impurity is impossible to avoid in either situation, but this way it is configurable.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 15:36:44 +09:00
5c2345128e internal/rosa/llvm: autodetect stage0 target
This is fine, now that stages beyond stage0 have explicit target.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 03:10:26 +09:00
78f9676b1f internal/rosa/llvm: centralise llvm source
This avoids having to sidestep the NewPackage name formatting machinery to take the cache fast path.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 03:03:06 +09:00
5b5b676132 internal/rosa/cmake: remove variant
This has no effect outside formatting of name and is a remnant of the old llvm helpers.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 02:57:47 +09:00
78383fb6e8 internal/rosa/llvm: migrate libclc
This eliminates newLLVMVariant.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 02:40:13 +09:00
e97f6a393f internal/rosa/llvm: migrate runtimes and clang
This eliminates most newLLVM family of functions.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 02:07:13 +09:00
eeffefd22b internal/rosa/llvm: migrate compiler-rt helper
This also removes unused dependencies.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 01:12:56 +09:00
ac825640ab internal/rosa/llvm: migrate musl
This removes the pointless special treatment given to musl.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 00:35:42 +09:00
a7f7ce1795 internal/rosa/llvm: migrate compiler-rt
The newLLVM family of functions predate the package system. This change migrates compiler-rt without changing any resulting artifacts.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 00:19:33 +09:00
38c639e35c internal/rosa/llvm: remove project/runtime helper
More remnants from the early days; these are not reusable at all, but that was not known at the time.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 00:03:23 +09:00
b2cb13e94c internal/rosa/llvm: centralise patches
This enables easier reuse of the patchset.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 23:52:44 +09:00
46f98d12d6 internal/rosa/llvm: remove conditional flags in helper
The llvm helper is a remnant from very early days, and ended up not being very useful, but was never removed. This change begins its removal, without changing the resulting artifacts for now.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 23:38:11 +09:00
503c7f953c internal/rosa/x: libpciaccess artifact
Required by userspace gpu drivers.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 19:04:38 +09:00
15c9f6545d internal/rosa/perl: populate anitya identifiers
These are also tracked by Anitya.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 18:44:43 +09:00
83b0e32c55 internal/rosa: helpers for common url formats
This cleans up the call site of NewPackage.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 18:02:57 +09:00
eeaf26e7a2 internal/rosa: wrapper around git helper
This results in a much cleaner call site for the majority of use cases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 15:20:51 +09:00
b587caf2e8 internal/rosa: assume file source is xz-compressed
XZ happens to be the only widely-used format that is awful to deal with; everything else is natively supported.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 15:07:30 +09:00
f1c2ca4928 internal/rosa/mesa: libdrm artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:27:09 +09:00
0ca301219f internal/rosa/python: pyyaml artifact
Mesa unfortunately requires this horrible format.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:18:47 +09:00
e2199e1276 internal/rosa/python: mako artifact
This unfortunately pulls in a platform-specific package.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:11:38 +09:00
86eacb3208 cmd/mbf: checksum command
This computes and encodes sha384 checksum of data streamed from standard input.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:09:21 +09:00
8541bdd858 internal/rosa: wrap per-arch values
This is cleaner syntax in some specific cases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:59:55 +09:00
46be0b0dc8 internal/rosa/nss: buildcatrust 0.4.0 to 0.5.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:18:21 +09:00
cbe37e87e7 internal/rosa/python: pytest 9.0.2 to 9.0.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:18:02 +09:00
66d741fb07 internal/rosa/python: pygments 2.19.2 to 2.20.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:13:04 +09:00
0d449011f6 internal/rosa/python: use predictable URLs
This is much cleaner and more maintainable than specifying URL prefix manually. This change also populates Anitya project identifiers.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:08:22 +09:00
46428ed85d internal/rosa/python: url pip wheel helper
This enables a cleaner higher-level helper.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 01:59:28 +09:00
081d6b463c internal/rosa/llvm: libclc artifact
This is built independently of llvm build system to avoid having to build llvm again.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:57:04 +09:00
11b3171180 internal/rosa/glslang: glslang artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:34:17 +09:00
adbb84c3dd internal/rosa/glslang: spirv-tools artifact
Required by glslang.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:27:49 +09:00
1084e31d95 internal/rosa/glslang: spirv-headers artifact
Required by spirv-tools.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:16:29 +09:00
27a1b8fe0a internal/rosa/mesa: libglvnd artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 21:27:30 +09:00
b2141a41d7 internal/rosa/dbus: xdg-dbus-proxy artifact
This is currently a hakurei runtime dependency, but will eventually be removed.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 19:41:49 +09:00
c0dff5bc87 internal/rosa/gnu: gcc set with-multilib-list as needed
This breaks riscv64.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 18:03:45 +09:00
04513c0510 internal/rosa/gnu: gmp explicit CC
The configure script is hard coded to use gcc without fallback on riscv64.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 17:25:15 +09:00
28ebf973d6 nix: add sharefs supplementary group
This works around vfs inode file attribute race.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-11 23:28:18 +09:00
41aeb404ec internal/rosa/hakurei: 0.3.7 to 0.4.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-11 10:53:29 +09:00
0b1009786f release: 0.4.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-11 10:49:43 +09:00
b390640376 internal/landlock: relocate from package container
This is not possible to use directly, so remove it from the public API.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 23:56:45 +09:00
ad2c9f36cd container: unexport PR_SET_NO_NEW_PRIVS wrapper
This is subtle to use correctly. It also does not make sense as part of the container API.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 23:45:51 +09:00
67db3fbb8d check: use encoding interfaces
This turned out not to require specific treatment, so the shared interfaces are cleaner.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 22:11:53 +09:00
560cb626a1 hst: remove enablement json adapter
The go116 behaviour of built-in new function makes this cleaner.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 20:47:30 +09:00
c33a6a5b7e hst: optionally reject insecure options
This prevents inadvertent use of insecure compatibility features.

Closes #30.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 19:34:02 +09:00
952082bd9b internal/rosa/python: 3.14.3 to 3.14.4
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 02:38:22 +09:00
24a9b24823 internal/rosa/openssl: 3.6.1 to 3.6.2
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 02:38:02 +09:00
c2e61e7987 internal/rosa/libcap: 2.77 to 2.78
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 02:37:04 +09:00
86787b3bc5 internal/rosa/tamago: 1.26.1 to 1.26.2
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 02:31:57 +09:00
cdfcfe6ce0 internal/rosa/go: 1.26.1 to 1.26.2
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 02:18:27 +09:00
68a2f0c240 internal/rosa/llvm: remove unused field
This change also renames confusingly named flags field and corrects its doc comment.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 02:13:26 +09:00
7319c7adf9 internal/rosa/llvm: use latest version on arm64
This also removes arch-specific patches because they were not useful.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 01:07:25 +09:00
e9c890cbb2 internal/rosa/llvm: enable cross compilation
This now passes the test suite.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 00:59:14 +09:00
6f924336fc internal/rosa/llvm: increase stack size
Some aarch64 regression tests fail intermittently on the default size.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-10 00:56:51 +09:00
bd88f10524 internal/rosa/llvm: 22.1.2 to 22.1.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-09 17:36:23 +09:00
e34e3b917e internal/kobject: process uevent message
This deals with environment variables generally present in every message.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-08 18:00:04 +09:00
b0ba165107 cmd/sharefs: group-accessible permission bits
This works around the race in vfs via supplementary group.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-08 16:14:47 +09:00
351d6c5a35 cmd/sharefs: reproduce vfs inode file attribute race
This happens in the vfs permissions check only and stale data appears to never reach userspace.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-08 15:51:36 +09:00
f23f73701c cmd/mbf: optional host abstract
This works around kernels with Landlock LSM disabled. Does not affect cure outcome.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 18:15:49 +09:00
876917229a internal/rosa/go: enable riscv64 bootstrap path
This is quite expensive, but unfortunately there is no other option.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 18:11:42 +09:00
0558032c2d container: do not set static deadline
This usually ends up in the buffer, or completes well before the deadline; however, it can still time out on a very slow system.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 17:00:20 +09:00
c61cdc505f internal/params: relocate from package container
This does not make sense as part of the public API, so make it internal.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 16:37:44 +09:00
062edb3487 container: remove setup pipe helper
The API forces use of finalizer to close the read end of the setup pipe, which is no longer considered acceptable. Exporting this as part of package container also imposes unnecessary maintenance burden.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 16:05:33 +09:00
e4355279a1 all: optionally forbid degrading in tests
This enables transparently degradable tests to be forced on in environments known to support them.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 15:22:52 +09:00
289fdebead container: transparently degrade landlock in tests
Explicitly requiring landlock in tests will be supported in a future change.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 15:03:48 +09:00
9c9e190db9 ldd: remove timeout
The program generally never blocks, and it is more flexible to leave it up to the caller to set a timeout.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 14:49:20 +09:00
d7d42c69a1 internal/pkg: transparently degrade landlock in tests
This does not test package container, so should transparently cope with Landlock LSM being unavailable.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 14:44:34 +09:00
c758e762bd container: skip landlock on hostnet
This overlaps with net namespace, so can be skipped without degrading security.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 14:36:44 +09:00
10f8b1c221 internal/pkg: optional landlock LSM
The alpine linux riscv64 kernel does not enable Landlock LSM, and kernel compilation is not yet feasible.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-07 12:44:07 +09:00
6907700d67 cmd/dist: set hsu tar header mode bits
This has no effect, but is nice to have.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-06 23:37:38 +09:00
0243f3ffbd internal/rosa/stage0: add riscv64 tarball
This had not yet passed all test suites because the emulator is prohibitively slow.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-06 13:57:43 +09:00
cd0beeaf8e internal/uevent: optionally pass UUID during coldboot
This enables rejection of non-coldboot synthetic events.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-06 12:42:47 +09:00
a69273ab2a cmd/dist: replace dist/release.sh
This is much more robust than a shell script.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-05 23:58:08 +09:00
4cd0f57e48 dist: remove redundant cleanup
This breaks on shells that do not evaluate pathnames.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-05 16:16:37 +09:00
200 changed files with 17927 additions and 2087 deletions

.gitignore vendored

@@ -1,37 +1,16 @@
-# Binaries for programs and plugins
-*.exe
-*.exe~
-*.dll
-*.so
-*.dylib
-/hakurei
-# Test binary, built with `go test -c`
+# produced by tools and text editors
+*.qcow2
+*.pkg
 *.test
-# Output of the go coverage tool, specifically when used with LiteIDE
 *.out
-# Dependency directories (remove the comment below to include it)
-# vendor/
-# Go workspace file
-go.work
-go.work.sum
-# env file
-.env
 .idea
 .vscode
 # go generate
 /cmd/hakurei/LICENSE
+/cmd/pkgserver/ui/static/*.js
+/cmd/pkgserver/ui_test/static
 /internal/pkg/testdata/testtool
 /internal/rosa/hakurei_current.tar.gz
-# release
-/dist/hakurei-*
-# interactive nixos vm
-nixos.qcow2
+# cmd/dist default destination
+/dist

all.sh Executable file

@@ -0,0 +1,6 @@
#!/bin/sh -e
TOOLCHAIN_VERSION="$(go version)"
cd "$(dirname -- "$0")/"
echo "# Building cmd/dist using ${TOOLCHAIN_VERSION}."
go run -v --tags=dist ./cmd/dist


@@ -2,7 +2,7 @@
 package check

 import (
-	"encoding/json"
+	"encoding"
 	"errors"
 	"fmt"
 	"path/filepath"
@@ -30,6 +30,16 @@ func (e AbsoluteError) Is(target error) bool {
 // Absolute holds a pathname checked to be absolute.
 type Absolute struct{ pathname unique.Handle[string] }

+var (
+	_ encoding.TextAppender      = new(Absolute)
+	_ encoding.TextMarshaler     = new(Absolute)
+	_ encoding.TextUnmarshaler   = new(Absolute)
+	_ encoding.BinaryAppender    = new(Absolute)
+	_ encoding.BinaryMarshaler   = new(Absolute)
+	_ encoding.BinaryUnmarshaler = new(Absolute)
+)
+
 // ok returns whether [Absolute] is not the zero value.
 func (a *Absolute) ok() bool { return a != nil && *a != (Absolute{}) }
@@ -84,13 +94,16 @@ func (a *Absolute) Append(elem ...string) *Absolute {
 // Dir calls [filepath.Dir] with [Absolute] as its argument.
 func (a *Absolute) Dir() *Absolute { return unsafeAbs(filepath.Dir(a.String())) }

-// GobEncode returns the checked pathname.
-func (a *Absolute) GobEncode() ([]byte, error) {
-	return []byte(a.String()), nil
+// AppendText appends the checked pathname.
+func (a *Absolute) AppendText(data []byte) ([]byte, error) {
+	return append(data, a.String()...), nil
 }

-// GobDecode stores data if it represents an absolute pathname.
-func (a *Absolute) GobDecode(data []byte) error {
+// MarshalText returns the checked pathname.
+func (a *Absolute) MarshalText() ([]byte, error) { return a.AppendText(nil) }
+
+// UnmarshalText stores data if it represents an absolute pathname.
+func (a *Absolute) UnmarshalText(data []byte) error {
 	pathname := string(data)
 	if !filepath.IsAbs(pathname) {
 		return AbsoluteError(pathname)
@@ -99,23 +112,9 @@ func (a *Absolute) GobDecode(data []byte) error {
 	return nil
 }

-// MarshalJSON returns a JSON representation of the checked pathname.
-func (a *Absolute) MarshalJSON() ([]byte, error) {
-	return json.Marshal(a.String())
-}
-
-// UnmarshalJSON stores data if it represents an absolute pathname.
-func (a *Absolute) UnmarshalJSON(data []byte) error {
-	var pathname string
-	if err := json.Unmarshal(data, &pathname); err != nil {
-		return err
-	}
-	if !filepath.IsAbs(pathname) {
-		return AbsoluteError(pathname)
-	}
-	a.pathname = unique.Make(pathname)
-	return nil
-}
+func (a *Absolute) AppendBinary(data []byte) ([]byte, error) { return a.AppendText(data) }
+func (a *Absolute) MarshalBinary() ([]byte, error)           { return a.MarshalText() }
+func (a *Absolute) UnmarshalBinary(data []byte) error        { return a.UnmarshalText(data) }

 // SortAbs calls [slices.SortFunc] for a slice of [Absolute].
 func SortAbs(x []*Absolute) {


@@ -170,20 +170,20 @@ func TestCodecAbsolute(t *testing.T) {
{"good", MustAbs("/etc"), {"good", MustAbs("/etc"),
nil, nil,
"\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\b\xff\x80\x00\x04/etc",
"\t\x7f\x06\x01\x02\xff\x82\x00\x00\x00\b\xff\x80\x00\x04/etc",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x06\x00\x00\x00\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\x0f\xff\x84\x01\x04/etc\x01\xfc\xc0\xed\x00\x00\x00",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x06\x00\x00\x00\t\x7f\x06\x01\x02\xff\x82\x00\x00\x00\x0f\xff\x84\x01\x04/etc\x01\xfc\xc0\xed\x00\x00\x00",
`"/etc"`, `{"val":"/etc","magic":3236757504}`},
{"not absolute", nil,
AbsoluteError("etc"),
"\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\a\xff\x80\x00\x03etc",
"\t\x7f\x06\x01\x02\xff\x82\x00\x00\x00\a\xff\x80\x00\x03etc",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x06\x00\x00\x00\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\x0f\xff\x84\x01\x03etc\x01\xfb\x01\x81\xda\x00\x00\x00",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x06\x00\x00\x00\t\x7f\x06\x01\x02\xff\x82\x00\x00\x00\x0f\xff\x84\x01\x03etc\x01\xfb\x01\x81\xda\x00\x00\x00",
`"etc"`, `{"val":"etc","magic":3236757504}`},
{"zero", nil,
new(AbsoluteError),
"\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\x04\xff\x80\x00\x00",
"\t\x7f\x06\x01\x02\xff\x82\x00\x00\x00\x04\xff\x80\x00\x00",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x06\x00\x00\x00\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\f\xff\x84\x01\x00\x01\xfb\x01\x81\xda\x00\x00\x00",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x06\x00\x00\x00\t\x7f\x06\x01\x02\xff\x82\x00\x00\x00\f\xff\x84\x01\x00\x01\xfb\x01\x81\xda\x00\x00\x00",
`""`, `{"val":"","magic":3236757504}`},
}
@@ -347,15 +347,6 @@ func TestCodecAbsolute(t *testing.T) {
})
})
}
t.Run("json passthrough", func(t *testing.T) {
t.Parallel()
wantErr := "invalid character ':' looking for beginning of value"
if err := new(Absolute).UnmarshalJSON([]byte(":3")); err == nil || err.Error() != wantErr {
t.Errorf("UnmarshalJSON: error = %v, want %s", err, wantErr)
}
})
}
func TestAbsoluteWrap(t *testing.T) {

cmd/dist/main.go vendored Normal file

@@ -0,0 +1,237 @@
//go:build dist
package main
import (
"archive/tar"
"compress/gzip"
"context"
"crypto/sha512"
_ "embed"
"encoding/hex"
"fmt"
"io"
"io/fs"
"log"
"os"
"os/exec"
"os/signal"
"path/filepath"
"runtime"
)
// getenv looks up an environment variable, and returns fallback if it is unset.
func getenv(key, fallback string) string {
if v, ok := os.LookupEnv(key); ok {
return v
}
return fallback
}
// mustRun runs a command with the current process's environment and panics
// on error or non-zero exit code.
func mustRun(ctx context.Context, name string, arg ...string) {
cmd := exec.CommandContext(ctx, name, arg...)
cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
if err := cmd.Run(); err != nil {
panic(err)
}
}
//go:embed comp/_hakurei
var comp []byte
func main() {
fmt.Println()
log.SetFlags(0)
log.SetPrefix("# ")
version := getenv("HAKUREI_VERSION", "untagged")
prefix := getenv("PREFIX", "/usr")
destdir := getenv("DESTDIR", "dist")
if err := os.MkdirAll(destdir, 0755); err != nil {
log.Fatal(err)
}
s, err := os.MkdirTemp(destdir, ".dist.*")
if err != nil {
log.Fatal(err)
}
defer func() {
var code int
if err = os.RemoveAll(s); err != nil {
code = 1
log.Println(err)
}
if r := recover(); r != nil {
code = 1
log.Println(r)
}
os.Exit(code)
}()
ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
defer cancel()
log.Println("Building hakurei.")
mustRun(ctx, "go", "generate", "./...")
mustRun(
ctx, "go", "build",
"-trimpath",
"-v", "-o", s,
"-ldflags=-s -w "+
"-buildid= -linkmode external -extldflags=-static "+
"-X hakurei.app/internal/info.buildVersion="+version+" "+
"-X hakurei.app/internal/info.hakureiPath="+prefix+"/bin/hakurei "+
"-X hakurei.app/internal/info.hsuPath="+prefix+"/bin/hsu "+
"-X main.hakureiPath="+prefix+"/bin/hakurei",
"./...",
)
fmt.Println()
log.Println("Testing Hakurei.")
mustRun(
ctx, "go", "test",
"-ldflags=-buildid= -linkmode external -extldflags=-static",
"./...",
)
fmt.Println()
log.Println("Creating distribution.")
const suffix = ".tar.gz"
distName := "hakurei-" + version + "-" + runtime.GOARCH
var f *os.File
if f, err = os.OpenFile(
filepath.Join(s, distName+suffix),
os.O_CREATE|os.O_EXCL|os.O_WRONLY,
0644,
); err != nil {
panic(err)
}
defer func() {
if f == nil {
return
}
if err = f.Close(); err != nil {
log.Println(err)
}
}()
h := sha512.New()
gw := gzip.NewWriter(io.MultiWriter(f, h))
tw := tar.NewWriter(gw)
mustWriteHeader := func(name string, size int64, mode os.FileMode) {
header := tar.Header{
Name: filepath.Join(distName, name),
Size: size,
Mode: int64(mode),
Uname: "root",
Gname: "root",
}
if mode&os.ModeDir != 0 {
header.Typeflag = tar.TypeDir
fmt.Printf("%s %s\n", mode, name)
} else {
header.Typeflag = tar.TypeReg
fmt.Printf("%s %s (%d bytes)\n", mode, name, size)
}
if err = tw.WriteHeader(&header); err != nil {
panic(err)
}
}
mustWriteFile := func(name string, data []byte, mode os.FileMode) {
mustWriteHeader(name, int64(len(data)), mode)
if mode&os.ModeDir != 0 {
return
}
if _, err = tw.Write(data); err != nil {
panic(err)
}
}
mustWriteFromPath := func(dst, src string, mode os.FileMode) {
var r *os.File
if r, err = os.Open(src); err != nil {
panic(err)
}
var fi os.FileInfo
if fi, err = r.Stat(); err != nil {
_ = r.Close()
panic(err)
}
if mode == 0 {
mode = fi.Mode()
}
mustWriteHeader(dst, fi.Size(), mode)
if _, err = io.Copy(tw, r); err != nil {
_ = r.Close()
panic(err)
} else if err = r.Close(); err != nil {
panic(err)
}
}
mustWriteFile(".", nil, fs.ModeDir|0755)
mustWriteFile("comp/", nil, os.ModeDir|0755)
mustWriteFile("comp/_hakurei", comp, 0644)
mustWriteFile("install.sh", []byte(`#!/bin/sh -e
cd "$(dirname -- "$0")" || exit 1
install -vDm0755 "bin/hakurei" "${DESTDIR}`+prefix+`/bin/hakurei"
install -vDm0755 "bin/sharefs" "${DESTDIR}`+prefix+`/bin/sharefs"
install -vDm4511 "bin/hsu" "${DESTDIR}`+prefix+`/bin/hsu"
if [ ! -f "${DESTDIR}/etc/hsurc" ]; then
install -vDm0400 "hsurc.default" "${DESTDIR}/etc/hsurc"
fi
install -vDm0644 "comp/_hakurei" "${DESTDIR}`+prefix+`/share/zsh/site-functions/_hakurei"
`), 0755)
mustWriteFromPath("README.md", "README.md", 0)
mustWriteFile("hsurc.default", []byte("1000 0"), 0400)
mustWriteFromPath("bin/hsu", filepath.Join(s, "hsu"), 04511)
for _, name := range []string{
"hakurei",
"sharefs",
} {
mustWriteFromPath(
filepath.Join("bin", name),
filepath.Join(s, name),
0,
)
}
if err = tw.Close(); err != nil {
panic(err)
} else if err = gw.Close(); err != nil {
panic(err)
} else if err = f.Close(); err != nil {
panic(err)
}
f = nil
if err = os.WriteFile(
filepath.Join(destdir, distName+suffix+".sha512"),
append(hex.AppendEncode(nil, h.Sum(nil)), " "+distName+suffix+"\n"...),
0644,
); err != nil {
panic(err)
}
if err = os.Rename(
filepath.Join(s, distName+suffix),
filepath.Join(destdir, distName+suffix),
); err != nil {
panic(err)
}
}


@@ -39,6 +39,7 @@ var errSuccess = errors.New("success")
func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErrs, out io.Writer) command.Command {
var (
flagVerbose bool
flagInsecure bool
flagJSON bool
)
c := command.New(out, log.Printf, "hakurei", func([]string) error {
@@ -57,6 +58,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
return nil
}).
Flag(&flagVerbose, "v", command.BoolFlag(false), "Increase log verbosity").
Flag(&flagInsecure, "insecure", command.BoolFlag(false), "Allow use of insecure compatibility options").
Flag(&flagJSON, "json", command.BoolFlag(false), "Serialise output in JSON when applicable")
c.Command("shim", command.UsageInternal, func([]string) error { outcome.Shim(msg); return errSuccess })
@@ -75,7 +77,12 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
config.Container.Args = append(config.Container.Args, args[1:]...)
}
outcome.Main(ctx, msg, config, flagIdentifierFile)
var flags int
if flagInsecure {
flags |= hst.VAllowInsecure
}
outcome.Main(ctx, msg, config, flags, flagIdentifierFile)
panic("unreachable")
}).
Flag(&flagIdentifierFile, "identifier-fd", command.IntFlag(-1),
@@ -145,7 +152,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
}
}
var et hst.Enablement
var et hst.Enablements
if flagWayland {
et |= hst.EWayland
}
@@ -163,7 +170,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
ID: flagID,
Identity: flagIdentity,
Groups: flagGroups,
Enablements: hst.NewEnablements(et),
Enablements: &et,
Container: &hst.ContainerConfig{
Filesystem: []hst.FilesystemConfigJSON{
@@ -282,7 +289,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
}
}
outcome.Main(ctx, msg, &config, -1)
outcome.Main(ctx, msg, &config, 0, -1)
panic("unreachable")
}).
Flag(&flagDBusConfigSession, "dbus-config", command.StringFlag("builtin"),


@@ -20,7 +20,7 @@ func TestHelp(t *testing.T) {
}{
{
"main", []string{}, `
Usage: hakurei [-h | --help] [-v] [--json] COMMAND [OPTIONS]
Usage: hakurei [-h | --help] [-v] [--insecure] [--json] COMMAND [OPTIONS]
Commands:
run Load and start container from configuration file


@@ -56,7 +56,7 @@ func printShowInstance(
t := newPrinter(output)
defer t.MustFlush()
if err := config.Validate(); err != nil {
if err := config.Validate(hst.VAllowInsecure); err != nil {
valid = false
if m, ok := message.GetMessage(err); ok {
mustPrint(output, "Error: "+m+"!\n\n")


@@ -32,7 +32,7 @@ var (
PID: 0xbeef,
ShimPID: 0xcafe,
Config: &hst.Config{
Enablements: hst.NewEnablements(hst.EWayland | hst.EPipeWire),
Enablements: new(hst.EWayland | hst.EPipeWire),
Identity: 1,
Container: &hst.ContainerConfig{
Shell: check.MustAbs("/bin/sh"),

cmd/mbf/cache.go Normal file

@@ -0,0 +1,94 @@
package main
import (
"context"
"os"
"path/filepath"
"testing"
"hakurei.app/check"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
// cache refers to an instance of [pkg.Cache] that might be open.
type cache struct {
ctx context.Context
msg message.Msg
// Should generally not be used directly.
c *pkg.Cache
cures, jobs int
hostAbstract, idle bool
base string
}
// open opens the underlying [pkg.Cache].
func (cache *cache) open() (err error) {
if cache.c != nil {
return os.ErrInvalid
}
if cache.base == "" {
cache.base = "cache"
}
var base *check.Absolute
if cache.base, err = filepath.Abs(cache.base); err != nil {
return
} else if base, err = check.NewAbs(cache.base); err != nil {
return
}
var flags int
if cache.idle {
flags |= pkg.CSchedIdle
}
if cache.hostAbstract {
flags |= pkg.CHostAbstract
}
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-cache.ctx.Done():
if testing.Testing() {
return
}
os.Exit(2)
case <-done:
return
}
}()
cache.msg.Verbosef("opening cache at %s", base)
cache.c, err = pkg.Open(
cache.ctx,
cache.msg,
flags,
cache.cures,
cache.jobs,
base,
)
return
}
// Close closes the underlying [pkg.Cache] if it is open.
func (cache *cache) Close() {
if cache.c != nil {
cache.c.Close()
}
}
// Do calls f on the underlying cache and returns its error value.
func (cache *cache) Do(f func(cache *pkg.Cache) error) error {
if cache.c == nil {
if err := cache.open(); err != nil {
return err
}
}
return f(cache.c)
}

cmd/mbf/cache_test.go Normal file

@@ -0,0 +1,37 @@
package main
import (
"log"
"os"
"testing"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
func TestCache(t *testing.T) {
t.Parallel()
cm := cache{
ctx: t.Context(),
msg: message.New(log.New(os.Stderr, "check: ", 0)),
base: t.TempDir(),
hostAbstract: true, idle: true,
}
defer cm.Close()
cm.Close()
if err := cm.open(); err != nil {
t.Fatalf("open: error = %v", err)
}
if err := cm.open(); err != os.ErrInvalid {
t.Errorf("(duplicate) open: error = %v", err)
}
if err := cm.Do(func(cache *pkg.Cache) error {
return cache.Scrub(0)
}); err != nil {
t.Errorf("Scrub: error = %v", err)
}
}

cmd/mbf/daemon.go Normal file

@@ -0,0 +1,343 @@
package main
import (
"context"
"encoding/binary"
"errors"
"io"
"log"
"math"
"net"
"os"
"sync"
"syscall"
"testing"
"time"
"unique"
"hakurei.app/check"
"hakurei.app/internal/pkg"
)
// daemonTimeout is the maximum amount of time cureFromIR will wait on I/O.
const daemonTimeout = 30 * time.Second
// daemonDeadline returns the deadline corresponding to daemonTimeout, or the
// zero value when running in a test.
func daemonDeadline() time.Time {
if testing.Testing() {
return time.Time{}
}
return time.Now().Add(daemonTimeout)
}
const (
// remoteNoReply notifies that the client will not receive a cure reply.
remoteNoReply = 1 << iota
)
// cureFromIR services an IR curing request.
func cureFromIR(
cache *pkg.Cache,
conn net.Conn,
flags uint64,
) (pkg.Artifact, error) {
a, decodeErr := cache.NewDecoder(conn).Decode()
if decodeErr != nil {
_, err := conn.Write([]byte("\x00" + decodeErr.Error()))
return nil, errors.Join(decodeErr, err, conn.Close())
}
pathname, _, cureErr := cache.Cure(a)
if flags&remoteNoReply != 0 {
return a, errors.Join(cureErr, conn.Close())
}
if err := conn.SetWriteDeadline(daemonDeadline()); err != nil {
return a, errors.Join(cureErr, err, conn.Close())
}
if cureErr != nil {
_, err := conn.Write([]byte("\x00" + cureErr.Error()))
return a, errors.Join(cureErr, err, conn.Close())
}
_, err := conn.Write([]byte(pathname.String()))
if testing.Testing() && errors.Is(err, io.ErrClosedPipe) {
return a, nil
}
return a, errors.Join(err, conn.Close())
}
const (
// specialCancel is a message consisting of a single identifier referring
// to a curing artifact to be cancelled.
specialCancel = iota
// specialAbort requests for all pending cures to be aborted. It has no
// message body.
specialAbort
// remoteSpecial denotes a special message with custom layout.
remoteSpecial = math.MaxUint64
)
// writeSpecialHeader writes the header of a remoteSpecial message.
func writeSpecialHeader(conn net.Conn, kind uint64) error {
var sh [16]byte
binary.LittleEndian.PutUint64(sh[:], remoteSpecial)
binary.LittleEndian.PutUint64(sh[8:], kind)
if n, err := conn.Write(sh[:]); err != nil {
return err
} else if n != len(sh) {
return io.ErrShortWrite
}
return nil
}
// cancelIdent reads an identifier from conn and cancels the corresponding cure.
func cancelIdent(
cache *pkg.Cache,
conn net.Conn,
) (*pkg.ID, bool, error) {
var ident pkg.ID
if _, err := io.ReadFull(conn, ident[:]); err != nil {
return nil, false, errors.Join(err, conn.Close())
} else if err = conn.Close(); err != nil {
return nil, false, err
}
return &ident, cache.Cancel(unique.Make(ident)), nil
}
// serve services connections from a [net.UnixListener].
func serve(
ctx context.Context,
log *log.Logger,
cm *cache,
ul *net.UnixListener,
) error {
ul.SetUnlinkOnClose(true)
if cm.c == nil {
if err := cm.open(); err != nil {
return errors.Join(err, ul.Close())
}
}
var wg sync.WaitGroup
defer wg.Wait()
wg.Go(func() {
for {
if ctx.Err() != nil {
break
}
conn, err := ul.AcceptUnix()
if err != nil {
if !errors.Is(err, os.ErrDeadlineExceeded) {
log.Println(err)
}
continue
}
wg.Go(func() {
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-ctx.Done():
_ = conn.SetDeadline(time.Now())
case <-done:
return
}
}()
if _err := conn.SetReadDeadline(daemonDeadline()); _err != nil {
log.Println(_err)
if _err = conn.Close(); _err != nil {
log.Println(_err)
}
return
}
var word [8]byte
if _, _err := io.ReadFull(conn, word[:]); _err != nil {
log.Println(_err)
if _err = conn.Close(); _err != nil {
log.Println(_err)
}
return
}
flags := binary.LittleEndian.Uint64(word[:])
if flags == remoteSpecial {
if _, _err := io.ReadFull(conn, word[:]); _err != nil {
log.Println(_err)
if _err = conn.Close(); _err != nil {
log.Println(_err)
}
return
}
switch special := binary.LittleEndian.Uint64(word[:]); special {
default:
log.Printf("invalid special %d", special)
case specialCancel:
if id, ok, _err := cancelIdent(cm.c, conn); _err != nil {
log.Println(_err)
} else if !ok {
log.Println(
"attempting to cancel invalid artifact",
pkg.Encode(*id),
)
} else {
log.Println(
"cancelled artifact",
pkg.Encode(*id),
)
}
case specialAbort:
if _err := conn.Close(); _err != nil {
log.Println(_err)
}
log.Println("aborting all pending cures")
cm.c.Abort()
}
return
}
if a, _err := cureFromIR(cm.c, conn, flags); _err != nil {
log.Println(_err)
} else {
log.Printf(
"fulfilled artifact %s",
pkg.Encode(cm.c.Ident(a).Value()),
)
}
})
}
})
<-ctx.Done()
if err := ul.SetDeadline(time.Now()); err != nil {
return errors.Join(err, ul.Close())
}
wg.Wait()
return ul.Close()
}
// dial wraps [net.DialUnix] with a context.
func dial(ctx context.Context, addr *net.UnixAddr) (
done chan<- struct{},
conn *net.UnixConn,
err error,
) {
conn, err = net.DialUnix("unix", nil, addr)
if err != nil {
return
}
d := make(chan struct{})
done = d
go func() {
select {
case <-ctx.Done():
_ = conn.SetDeadline(time.Now())
case <-d:
return
}
}()
return
}
// cureRemote cures a [pkg.Artifact] on a daemon.
func cureRemote(
ctx context.Context,
addr *net.UnixAddr,
a pkg.Artifact,
flags uint64,
) (*check.Absolute, error) {
if flags == remoteSpecial {
return nil, syscall.EINVAL
}
done, conn, err := dial(ctx, addr)
if err != nil {
return nil, err
}
defer close(done)
if n, flagErr := conn.Write(binary.LittleEndian.AppendUint64(nil, flags)); flagErr != nil {
return nil, errors.Join(flagErr, conn.Close())
} else if n != 8 {
return nil, errors.Join(io.ErrShortWrite, conn.Close())
}
if err = pkg.NewIR().EncodeAll(conn, a); err != nil {
return nil, errors.Join(err, conn.Close())
} else if err = conn.CloseWrite(); err != nil {
return nil, errors.Join(err, conn.Close())
}
if flags&remoteNoReply != 0 {
return nil, conn.Close()
}
payload, recvErr := io.ReadAll(conn)
if err = errors.Join(recvErr, conn.Close()); err != nil {
if errors.Is(err, os.ErrDeadlineExceeded) {
if cancelErr := ctx.Err(); cancelErr != nil {
err = cancelErr
}
}
return nil, err
}
if len(payload) > 0 && payload[0] == 0 {
return nil, errors.New(string(payload[1:]))
}
var p *check.Absolute
p, err = check.NewAbs(string(payload))
return p, err
}
// cancelRemote cancels a [pkg.Artifact] curing on a daemon.
func cancelRemote(
ctx context.Context,
addr *net.UnixAddr,
a pkg.Artifact,
) error {
done, conn, err := dial(ctx, addr)
if err != nil {
return err
}
defer close(done)
if err = writeSpecialHeader(conn, specialCancel); err != nil {
return errors.Join(err, conn.Close())
}
var n int
id := pkg.NewIR().Ident(a).Value()
if n, err = conn.Write(id[:]); err != nil {
return errors.Join(err, conn.Close())
} else if n != len(id) {
return errors.Join(io.ErrShortWrite, conn.Close())
}
return conn.Close()
}
// abortRemote aborts all [pkg.Artifact] curing on a daemon.
func abortRemote(
ctx context.Context,
addr *net.UnixAddr,
) error {
done, conn, err := dial(ctx, addr)
if err != nil {
return err
}
defer close(done)
err = writeSpecialHeader(conn, specialAbort)
return errors.Join(err, conn.Close())
}

cmd/mbf/daemon_test.go Normal file

@@ -0,0 +1,146 @@
package main
import (
"bytes"
"context"
"errors"
"io"
"log"
"net"
"os"
"path/filepath"
"slices"
"strings"
"testing"
"time"
"hakurei.app/check"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
func TestNoReply(t *testing.T) {
t.Parallel()
if !daemonDeadline().IsZero() {
t.Fatal("daemonDeadline did not return the zero value")
}
c, err := pkg.Open(
t.Context(),
message.New(log.New(os.Stderr, "cir: ", 0)),
0, 0, 0,
check.MustAbs(t.TempDir()),
)
if err != nil {
t.Fatalf("Open: error = %v", err)
}
defer c.Close()
client, server := net.Pipe()
done := make(chan struct{})
go func() {
defer close(done)
go func() {
<-t.Context().Done()
if _err := client.SetDeadline(time.Now()); _err != nil && !errors.Is(_err, io.ErrClosedPipe) {
panic(_err)
}
}()
if _err := c.EncodeAll(
client,
pkg.NewFile("check", []byte{0}),
); _err != nil {
panic(_err)
} else if _err = client.Close(); _err != nil {
panic(_err)
}
}()
a, cureErr := cureFromIR(c, server, remoteNoReply)
if cureErr != nil {
t.Fatalf("cureFromIR: error = %v", cureErr)
}
<-done
wantIdent := pkg.MustDecode("fiZf-ZY_Yq6qxJNrHbMiIPYCsGkUiKCRsZrcSELXTqZWtCnESlHmzV5ThhWWGGYG")
if gotIdent := c.Ident(a).Value(); gotIdent != wantIdent {
t.Errorf(
"cureFromIR: %s, want %s",
pkg.Encode(gotIdent), pkg.Encode(wantIdent),
)
}
}
func TestDaemon(t *testing.T) {
t.Parallel()
var buf bytes.Buffer
logger := log.New(&buf, "daemon: ", 0)
addr := net.UnixAddr{
Name: filepath.Join(t.TempDir(), "daemon"),
Net: "unix",
}
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
cm := cache{
ctx: ctx,
msg: message.New(logger),
base: t.TempDir(),
}
defer cm.Close()
ul, err := net.ListenUnix("unix", &addr)
if err != nil {
t.Fatalf("ListenUnix: error = %v", err)
}
done := make(chan struct{})
go func() {
defer close(done)
if _err := serve(ctx, logger, &cm, ul); _err != nil {
panic(_err)
}
}()
if err = cancelRemote(ctx, &addr, pkg.NewFile("nonexistent", nil)); err != nil {
t.Fatalf("cancelRemote: error = %v", err)
}
if err = abortRemote(ctx, &addr); err != nil {
t.Fatalf("abortRemote: error = %v", err)
}
// keep this last for synchronisation
var p *check.Absolute
p, err = cureRemote(ctx, &addr, pkg.NewFile("check", []byte{0}), 0)
if err != nil {
t.Fatalf("cureRemote: error = %v", err)
}
cancel()
<-done
const want = "fiZf-ZY_Yq6qxJNrHbMiIPYCsGkUiKCRsZrcSELXTqZWtCnESlHmzV5ThhWWGGYG"
if got := filepath.Base(p.String()); got != want {
t.Errorf("cureRemote: %s, want %s", got, want)
}
wantLog := []string{
"",
"daemon: aborting all pending cures",
"daemon: attempting to cancel invalid artifact kQm9fmnCmXST1-MMmxzcau2oKZCXXrlZydo4PkeV5hO_2PKfeC8t98hrbV_ZZx_j",
"daemon: fulfilled artifact fiZf-ZY_Yq6qxJNrHbMiIPYCsGkUiKCRsZrcSELXTqZWtCnESlHmzV5ThhWWGGYG",
}
gotLog := strings.Split(buf.String(), "\n")
slices.Sort(gotLog)
if !slices.Equal(gotLog, wantLog) {
t.Errorf(
"serve: logged\n%s\nwant\n%s",
strings.Join(gotLog, "\n"), strings.Join(wantLog, "\n"),
)
}
}

cmd/mbf/info.go Normal file

@@ -0,0 +1,127 @@
package main
import (
"errors"
"fmt"
"io"
"os"
"strings"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
)
// commandInfo implements the info subcommand.
func commandInfo(
cm *cache,
args []string,
w io.Writer,
writeStatus bool,
reportPath string,
) (err error) {
if len(args) == 0 {
return errors.New("info requires at least 1 argument")
}
var r *rosa.Report
if reportPath != "" {
if r, err = rosa.OpenReport(reportPath); err != nil {
return err
}
defer func() {
if closeErr := r.Close(); err == nil {
err = closeErr
}
}()
defer r.HandleAccess(&err)()
}
// recovered by HandleAccess
mustPrintln := func(a ...any) {
if _, _err := fmt.Fprintln(w, a...); _err != nil {
panic(_err)
}
}
mustPrint := func(a ...any) {
if _, _err := fmt.Fprint(w, a...); _err != nil {
panic(_err)
}
}
for i, name := range args {
if p, ok := rosa.ResolveName(name); !ok {
return fmt.Errorf("unknown artifact %q", name)
} else {
var suffix string
if version := rosa.Std.Version(p); version != rosa.Unversioned {
suffix += "-" + version
}
mustPrintln("name : " + name + suffix)
meta := rosa.GetMetadata(p)
mustPrintln("description : " + meta.Description)
if meta.Website != "" {
mustPrintln("website : " +
strings.TrimSuffix(meta.Website, "/"))
}
if len(meta.Dependencies) > 0 {
mustPrint("depends on :")
for _, d := range meta.Dependencies {
s := rosa.GetMetadata(d).Name
if version := rosa.Std.Version(d); version != rosa.Unversioned {
s += "-" + version
}
mustPrint(" " + s)
}
mustPrintln()
}
const statusPrefix = "status : "
if writeStatus {
if r == nil {
var f io.ReadSeekCloser
err = cm.Do(func(cache *pkg.Cache) (err error) {
f, err = cache.OpenStatus(rosa.Std.Load(p))
return
})
if err != nil {
if errors.Is(err, os.ErrNotExist) {
mustPrintln(
statusPrefix + "not yet cured",
)
} else {
return
}
} else {
mustPrint(statusPrefix)
_, err = io.Copy(w, f)
if err = errors.Join(err, f.Close()); err != nil {
return
}
}
} else if err = cm.Do(func(cache *pkg.Cache) (err error) {
status, n := r.ArtifactOf(cache.Ident(rosa.Std.Load(p)))
if status == nil {
mustPrintln(
statusPrefix + "not in report",
)
} else {
mustPrintln("size :", n)
mustPrint(statusPrefix)
if _, err = w.Write(status); err != nil {
return
}
}
return
}); err != nil {
return
}
}
if i != len(args)-1 {
mustPrintln()
}
}
}
return nil
}

cmd/mbf/info_test.go Normal file

@@ -0,0 +1,170 @@
package main
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"reflect"
"strings"
"syscall"
"testing"
"unsafe"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
"hakurei.app/message"
)
func TestInfo(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
args []string
status map[string]string
report string
want string
wantErr any
}{
{"qemu", []string{"qemu"}, nil, "", `
name : qemu-` + rosa.Std.Version(rosa.QEMU) + `
description : a generic and open source machine emulator and virtualizer
website : https://www.qemu.org
depends on : glib-` + rosa.Std.Version(rosa.GLib) + ` zstd-` + rosa.Std.Version(rosa.Zstd) + `
`, nil},
{"multi", []string{"hakurei", "hakurei-dist"}, nil, "", `
name : hakurei-` + rosa.Std.Version(rosa.Hakurei) + `
description : low-level userspace tooling for Rosa OS
website : https://hakurei.app
name : hakurei-dist-` + rosa.Std.Version(rosa.HakureiDist) + `
description : low-level userspace tooling for Rosa OS (distribution tarball)
website : https://hakurei.app
`, nil},
{"nonexistent", []string{"zlib", "\x00"}, nil, "", `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
`, fmt.Errorf("unknown artifact %q", "\x00")},
{"status cache", []string{"zlib", "zstd"}, map[string]string{
"zstd": "internal/pkg (amd64) on satori\n",
"hakurei": "internal/pkg (amd64) on satori\n\n",
}, "", `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
status : not yet cured
name : zstd-` + rosa.Std.Version(rosa.Zstd) + `
description : a fast compression algorithm
website : https://facebook.github.io/zstd
status : internal/pkg (amd64) on satori
`, nil},
{"status cache perm", []string{"zlib"}, map[string]string{
"zlib": "\x00",
}, "", `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
`, func(cm *cache) error {
return &os.PathError{
Op: "open",
Path: filepath.Join(cm.base, "status", pkg.Encode(cm.c.Ident(rosa.Std.Load(rosa.Zlib)).Value())),
Err: syscall.EACCES,
}
}},
{"status report", []string{"zlib"}, nil, strings.Repeat("\x00", len(pkg.Checksum{})+8), `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
status : not in report
`, nil},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var (
cm *cache
buf strings.Builder
rp string
)
if tc.status != nil || tc.report != "" {
cm = &cache{
ctx: context.Background(),
msg: message.New(log.New(os.Stderr, "info: ", 0)),
base: t.TempDir(),
}
defer cm.Close()
}
if tc.report != "" {
rp = filepath.Join(t.TempDir(), "report")
if err := os.WriteFile(
rp,
unsafe.Slice(unsafe.StringData(tc.report), len(tc.report)),
0400,
); err != nil {
t.Fatal(err)
}
}
if tc.status != nil {
for name, status := range tc.status {
p, ok := rosa.ResolveName(name)
if !ok {
t.Fatalf("invalid name %q", name)
}
perm := os.FileMode(0400)
if status == "\x00" {
perm = 0
}
if err := cm.Do(func(cache *pkg.Cache) error {
return os.WriteFile(filepath.Join(
cm.base,
"status",
pkg.Encode(cache.Ident(rosa.Std.Load(p)).Value()),
), unsafe.Slice(unsafe.StringData(status), len(status)), perm)
}); err != nil {
t.Fatalf("Do: error = %v", err)
}
}
}
var wantErr error
switch c := tc.wantErr.(type) {
case error:
wantErr = c
case func(cm *cache) error:
wantErr = c(cm)
default:
if tc.wantErr != nil {
t.Fatalf("invalid wantErr %#v", tc.wantErr)
}
}
if err := commandInfo(
cm,
tc.args,
&buf,
cm != nil,
rp,
); !reflect.DeepEqual(err, wantErr) {
t.Fatalf("commandInfo: error = %v, want %v", err, wantErr)
}
if got := buf.String(); got != strings.TrimPrefix(tc.want, "\n") {
t.Errorf("commandInfo:\n%s\nwant\n%s", got, tc.want)
}
})
}
}


@@ -14,16 +14,17 @@ package main
import (
"context"
"crypto/sha512"
"errors"
"fmt"
"io"
"log"
"net"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
"syscall"
@@ -53,14 +54,13 @@ func main() {
log.Fatal("this program must not run as root")
}
var cache *pkg.Cache
ctx, stop := signal.NotifyContext(context.Background(),
syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP)
defer stop()
var cm cache
defer func() {
if cache != nil {
cm.Close()
cache.Close()
}
if r := recover(); r != nil {
fmt.Println(r)
@@ -70,48 +70,65 @@ func main() {
var (
flagQuiet bool
flagCures int
flagBase string
addr net.UnixAddr
flagIdle bool
)
c := command.New(os.Stderr, log.Printf, "mbf", func([]string) (err error) {
c := command.New(os.Stderr, log.Printf, "mbf", func([]string) error {
msg.SwapVerbose(!flagQuiet)
cm.ctx, cm.msg = ctx, msg
cm.base = os.ExpandEnv(cm.base)
flagBase = os.ExpandEnv(flagBase)
addr.Net = "unix"
if flagBase == "" {
addr.Name = os.ExpandEnv(addr.Name)
flagBase = "cache"
if addr.Name == "" {
addr.Name = "daemon"
}
var base *check.Absolute
return nil
if flagBase, err = filepath.Abs(flagBase); err != nil {
return
} else if base, err = check.NewAbs(flagBase); err != nil {
return
}
var flags int
if flagIdle {
flags &= pkg.CSchedIdle
}
cache, err = pkg.Open(ctx, msg, flags, flagCures, base)
return
}).Flag(
&flagQuiet,
"q", command.BoolFlag(false),
"Do not print cure messages",
).Flag(
&flagCures,
&cm.cures,
"cures", command.IntFlag(0),
"Maximum number of dependencies to cure at any given time",
).Flag(
&flagBase,
&cm.jobs,
"jobs", command.IntFlag(0),
"Preferred number of jobs to run, when applicable",
).Flag(
&cm.base,
"d", command.StringFlag("$MBF_CACHE_DIR"),
"Directory to store cured artifacts",
).Flag(
&flagIdle,
&cm.idle,
"sched-idle", command.BoolFlag(false),
"Set SCHED_IDLE scheduling policy",
).Flag(
&cm.hostAbstract,
"host-abstract", command.BoolFlag(
os.Getenv("MBF_HOST_ABSTRACT") != "",
),
"Do not restrict networked cure containers from connecting to host "+
"abstract UNIX sockets",
).Flag(
&addr.Name,
"socket", command.StringFlag("$MBF_DAEMON_SOCKET"),
"Pathname of socket to bind to",
)
c.NewCommand(
"checksum", "Compute checksum of data read from standard input",
func([]string) error {
go func() { <-ctx.Done(); os.Exit(1) }()
h := sha512.New384()
if _, err := io.Copy(h, os.Stdin); err != nil {
return err
}
log.Println(pkg.Encode(pkg.Checksum(h.Sum(nil))))
return nil
},
) )
{
@@ -125,7 +142,9 @@ func main() {
if flagShifts < 0 || flagShifts > 31 {
flagShifts = 12
}
return cm.Do(func(cache *pkg.Cache) error {
return cache.Scrub(runtime.NumCPU() << flagShifts)
})
},
).Flag(
&flagShifts,
@@ -143,101 +162,13 @@ func main() {
"info",
"Display out-of-band metadata of an artifact",
func(args []string) (err error) {
return commandInfo(&cm, args, os.Stdout, flagStatus, flagReport)
if len(args) == 0 {
return errors.New("info requires at least 1 argument")
}
var r *rosa.Report
if flagReport != "" {
if r, err = rosa.OpenReport(flagReport); err != nil {
return err
}
defer func() {
if closeErr := r.Close(); err == nil {
err = closeErr
}
}()
defer r.HandleAccess(&err)()
}
for i, name := range args {
if p, ok := rosa.ResolveName(name); !ok {
return fmt.Errorf("unknown artifact %q", name)
} else {
var suffix string
if version := rosa.Std.Version(p); version != rosa.Unversioned {
suffix += "-" + version
}
fmt.Println("name : " + name + suffix)
meta := rosa.GetMetadata(p)
fmt.Println("description : " + meta.Description)
if meta.Website != "" {
fmt.Println("website : " +
strings.TrimSuffix(meta.Website, "/"))
}
if len(meta.Dependencies) > 0 {
fmt.Print("depends on :")
for _, d := range meta.Dependencies {
s := rosa.GetMetadata(d).Name
if version := rosa.Std.Version(d); version != rosa.Unversioned {
s += "-" + version
}
fmt.Print(" " + s)
}
fmt.Println()
}
const statusPrefix = "status : "
if flagStatus {
if r == nil {
var f io.ReadSeekCloser
f, err = cache.OpenStatus(rosa.Std.Load(p))
if err != nil {
if errors.Is(err, os.ErrNotExist) {
fmt.Println(
statusPrefix + "not yet cured",
)
} else {
return
}
} else {
fmt.Print(statusPrefix)
_, err = io.Copy(os.Stdout, f)
if err = errors.Join(err, f.Close()); err != nil {
return
}
}
} else {
status, n := r.ArtifactOf(cache.Ident(rosa.Std.Load(p)))
if status == nil {
fmt.Println(
statusPrefix + "not in report",
)
} else {
fmt.Println("size :", n)
fmt.Print(statusPrefix)
if _, err = os.Stdout.Write(status); err != nil {
return
}
}
}
}
if i != len(args)-1 {
fmt.Println()
}
}
}
return nil
}, },
). ).Flag(
Flag(
&flagStatus, &flagStatus,
"status", command.BoolFlag(false), "status", command.BoolFlag(false),
"Display cure status if available", "Display cure status if available",
). ).Flag(
Flag(
&flagReport, &flagReport,
"report", command.StringFlag(""), "report", command.StringFlag(""),
"Load cure status from this report file instead of cache", "Load cure status from this report file instead of cache",
@@ -275,7 +206,9 @@ func main() {
 			if ext.Isatty(int(w.Fd())) {
 				return errors.New("output appears to be a terminal")
 			}
+			return cm.Do(func(cache *pkg.Cache) error {
 				return rosa.WriteReport(msg, w, cache)
+			})
 		},
 	)
@@ -338,14 +271,26 @@ func main() {
 					" package(s) are out of date"))
 			}
 			return errors.Join(errs...)
-		}).
-			Flag(
+		}).Flag(
 			&flagJobs,
 			"j", command.IntFlag(32),
 			"Maximum number of simultaneous connections",
 		)
 	}
+	c.NewCommand(
+		"daemon",
+		"Service artifact IR with Rosa OS extensions",
+		func(args []string) error {
+			ul, err := net.ListenUnix("unix", &addr)
+			if err != nil {
+				return err
+			}
+			log.Printf("listening on pathname socket at %s", addr.Name)
+			return serve(ctx, log.Default(), &cm, ul)
+		},
+	)
 	{
 		var (
 			flagGentoo string
@@ -370,25 +315,37 @@ func main() {
 				rosa.SetGentooStage3(flagGentoo, checksum)
 			}
-			_, _, _, stage1 := (t - 2).NewLLVM()
-			_, _, _, stage2 := (t - 1).NewLLVM()
-			_, _, _, stage3 := t.NewLLVM()
 			var (
 				pathname *check.Absolute
 				checksum [2]unique.Handle[pkg.Checksum]
 			)
-			if pathname, _, err = cache.Cure(stage1); err != nil {
-				return err
+			if err = cm.Do(func(cache *pkg.Cache) (err error) {
+				pathname, _, err = cache.Cure(
+					(t - 2).Load(rosa.Clang),
+				)
+				return
+			}); err != nil {
+				return
 			}
 			log.Println("stage1:", pathname)
-			if pathname, checksum[0], err = cache.Cure(stage2); err != nil {
-				return err
+			if err = cm.Do(func(cache *pkg.Cache) (err error) {
+				pathname, checksum[0], err = cache.Cure(
+					(t - 1).Load(rosa.Clang),
+				)
+				return
+			}); err != nil {
+				return
 			}
 			log.Println("stage2:", pathname)
-			if pathname, checksum[1], err = cache.Cure(stage3); err != nil {
-				return err
+			if err = cm.Do(func(cache *pkg.Cache) (err error) {
+				pathname, checksum[1], err = cache.Cure(
+					t.Load(rosa.Clang),
+				)
+				return
+			}); err != nil {
+				return
 			}
 			log.Println("stage3:", pathname)
@@ -405,28 +362,28 @@ func main() {
 			}
 			if flagStage0 {
-				if pathname, _, err = cache.Cure(
+				if err = cm.Do(func(cache *pkg.Cache) (err error) {
+					pathname, _, err = cache.Cure(
 						t.Load(rosa.Stage0),
-				); err != nil {
-					return err
+					)
+					return
+				}); err != nil {
+					return
 				}
 				log.Println(pathname)
 			}
 			return
 		},
-	).
-		Flag(
+	).Flag(
 		&flagGentoo,
 		"gentoo", command.StringFlag(""),
 		"Bootstrap from a Gentoo stage3 tarball",
-	).
-		Flag(
+	).Flag(
 		&flagChecksum,
 		"checksum", command.StringFlag(""),
 		"Checksum of Gentoo stage3 tarball",
-	).
-		Flag(
+	).Flag(
 		&flagStage0,
 		"stage0", command.BoolFlag(false),
 		"Create bootstrap stage0 tarball",
@@ -438,6 +395,8 @@ func main() {
 			flagDump string
 			flagEnter bool
 			flagExport string
+			flagRemote bool
+			flagNoReply bool
 		)
 		c.NewCommand(
 			"cure",
@@ -453,7 +412,11 @@ func main() {
 				switch {
 				default:
-					pathname, _, err := cache.Cure(rosa.Std.Load(p))
+					var pathname *check.Absolute
+					err := cm.Do(func(cache *pkg.Cache) (err error) {
+						pathname, _, err = cache.Cure(rosa.Std.Load(p))
+						return
+					})
 					if err != nil {
 						return err
 					}
@@ -493,7 +456,7 @@ func main() {
 						return err
 					}
-					if err = cache.EncodeAll(f, rosa.Std.Load(p)); err != nil {
+					if err = pkg.NewIR().EncodeAll(f, rosa.Std.Load(p)); err != nil {
 						_ = f.Close()
 						return err
 					}
@@ -501,6 +464,7 @@ func main() {
 					return f.Close()
 				case flagEnter:
+					return cm.Do(func(cache *pkg.Cache) error {
 						return cache.EnterExec(
 							ctx,
 							rosa.Std.Load(p),
@@ -508,26 +472,60 @@ func main() {
 							rosa.AbsSystem.Append("bin", "mksh"),
 							"sh",
 						)
+					})
+				case flagRemote:
+					var flags uint64
+					if flagNoReply {
+						flags |= remoteNoReply
+					}
+					a := rosa.Std.Load(p)
+					pathname, err := cureRemote(ctx, &addr, a, flags)
+					if !flagNoReply && err == nil {
+						log.Println(pathname)
+					}
+					if errors.Is(err, context.Canceled) {
+						cc, cancel := context.WithDeadline(context.Background(), daemonDeadline())
+						defer cancel()
+						if _err := cancelRemote(cc, &addr, a); _err != nil {
+							log.Println(_err)
+						}
+					}
+					return err
 				}
 			},
-		).
-			Flag(
+		).Flag(
 			&flagDump,
 			"dump", command.StringFlag(""),
 			"Write IR to specified pathname and terminate",
-		).
-			Flag(
+		).Flag(
 			&flagExport,
 			"export", command.StringFlag(""),
 			"Export cured artifact to specified pathname",
-		).
-			Flag(
+		).Flag(
 			&flagEnter,
 			"enter", command.BoolFlag(false),
 			"Enter cure container with an interactive shell",
+		).Flag(
+			&flagRemote,
+			"daemon", command.BoolFlag(false),
+			"Cure artifact on the daemon",
+		).Flag(
+			&flagNoReply,
+			"no-reply", command.BoolFlag(false),
+			"Do not receive a reply from the daemon",
 		)
 	}
+	c.NewCommand(
+		"abort",
+		"Abort all pending cures on the daemon",
+		func([]string) error { return abortRemote(ctx, &addr) },
+	)
 	{
 		var (
 			flagNet bool
@@ -539,7 +537,7 @@ func main() {
 			"shell",
 			"Interactive shell in the specified Rosa OS environment",
 			func(args []string) error {
-				presets := make([]rosa.PArtifact, len(args))
+				presets := make([]rosa.PArtifact, len(args), len(args)+3)
 				for i, arg := range args {
 					p, ok := rosa.ResolveName(arg)
 					if !ok {
@@ -547,21 +545,24 @@ func main() {
 					}
 					presets[i] = p
 				}
+				base := rosa.Clang
+				if !flagWithToolchain {
+					base = rosa.Musl
+				}
+				presets = append(presets,
+					base,
+					rosa.Mksh,
+					rosa.Toybox,
+				)
 				root := make(pkg.Collect, 0, 6+len(args))
 				root = rosa.Std.AppendPresets(root, presets...)
-				if flagWithToolchain {
-					musl, compilerRT, runtimes, clang := (rosa.Std - 1).NewLLVM()
-					root = append(root, musl, compilerRT, runtimes, clang)
-				} else {
-					root = append(root, rosa.Std.Load(rosa.Musl))
-				}
-				root = append(root,
-					rosa.Std.Load(rosa.Mksh),
-					rosa.Std.Load(rosa.Toybox),
-				)
-				if _, _, err := cache.Cure(&root); err == nil {
+				if err := cm.Do(func(cache *pkg.Cache) error {
+					_, _, err := cache.Cure(&root)
+					return err
+				}); err == nil {
 					return errors.New("unreachable")
 				} else if !pkg.IsCollected(err) {
 					return err
@@ -573,11 +574,22 @@ func main() {
 				}
 				cured := make(map[pkg.Artifact]cureRes)
 				for _, a := range root {
+					if err := cm.Do(func(cache *pkg.Cache) error {
 						pathname, checksum, err := cache.Cure(a)
-					if err != nil {
+						if err == nil {
+							cured[a] = cureRes{pathname, checksum}
+						}
+						return err
+					}); err != nil {
+						return err
+					}
+				}
+				// explicitly open for direct error-free use from this point
+				if cm.c == nil {
+					if err := cm.open(); err != nil {
 						return err
 					}
-					cured[a] = cureRes{pathname, checksum}
 				}
 				layers := pkg.PromoteLayers(root, func(a pkg.Artifact) (
@@ -587,7 +599,7 @@ func main() {
 					res := cured[a]
 					return res.pathname, res.checksum
 				}, func(i int, d pkg.Artifact) {
-					r := pkg.Encode(cache.Ident(d).Value())
+					r := pkg.Encode(cm.c.Ident(d).Value())
 					if s, ok := d.(fmt.Stringer); ok {
 						if name := s.String(); name != "" {
 							r += "-" + name
@@ -651,18 +663,15 @@ func main() {
 				}
 				return z.Wait()
 			},
-		).
-			Flag(
+		).Flag(
 			&flagNet,
 			"net", command.BoolFlag(false),
 			"Share host net namespace",
-		).
-			Flag(
+		).Flag(
 			&flagSession,
 			"session", command.BoolFlag(true),
 			"Retain session",
-		).
-			Flag(
+		).Flag(
 			&flagWithToolchain,
 			"with-toolchain", command.BoolFlag(false),
 			"Include the stage2 LLVM toolchain",
@@ -677,9 +686,7 @@ func main() {
 	)
 	c.MustParse(os.Args[1:], func(err error) {
-		if cache != nil {
-			cache.Close()
-		}
+		cm.Close()
 		if w, ok := err.(interface{ Unwrap() []error }); !ok {
 			log.Fatal(err)
 		} else {
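The `cm.Do(func(cache *pkg.Cache) error { ... })` wrapper recurring throughout this refactor replaces direct use of a package-level `cache` with a manager that opens the cache lazily on first use. A minimal standalone sketch of that pattern (the `cache`/`manager` types and their fields below are illustrative stand-ins, not names from the hakurei source):

```go
package main

import (
	"errors"
	"fmt"
)

// cache stands in for pkg.Cache.
type cache struct{ opens int }

// manager mirrors the shape of the cm value in the diff: it owns an
// optional cache that is opened on demand and reused afterwards.
type manager struct {
	c *cache
}

// open initialises the underlying cache exactly once.
func (m *manager) open() error {
	if m.c != nil {
		return errors.New("already open")
	}
	m.c = &cache{opens: 1}
	return nil
}

// Do lazily opens the cache, then hands it to f.
func (m *manager) Do(f func(*cache) error) error {
	if m.c == nil {
		if err := m.open(); err != nil {
			return err
		}
	}
	return f(m.c)
}

func main() {
	var m manager
	_ = m.Do(func(c *cache) error { return nil })
	_ = m.Do(func(c *cache) error { return nil })
	fmt.Println(m.c.opens) // the cache was opened exactly once across both calls
}
```

This keeps commands that never touch the cache (such as `checksum`) from paying the open cost, which is presumably why the diff threads every cache access through `cm.Do`.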

cmd/pkgserver/api.go (new file, 176 lines)
@@ -0,0 +1,176 @@
package main
import (
"encoding/json"
"log"
"net/http"
"net/url"
"path"
"strconv"
"sync"
"hakurei.app/internal/info"
"hakurei.app/internal/rosa"
)
// for lazy initialisation of handleInfo
var (
infoPayload struct {
// Current package count.
Count int `json:"count"`
// Hakurei version, set at link time.
HakureiVersion string `json:"hakurei_version"`
}
infoPayloadOnce sync.Once
)
// handleInfo writes constant system information.
func handleInfo(w http.ResponseWriter, _ *http.Request) {
infoPayloadOnce.Do(func() {
infoPayload.Count = int(rosa.PresetUnexportedStart)
infoPayload.HakureiVersion = info.Version()
})
// TODO(mae): cache entire response if no additional fields are planned
writeAPIPayload(w, infoPayload)
}
// newStatusHandler returns a [http.HandlerFunc] that offers status files for
// viewing or download, if available.
func (index *packageIndex) newStatusHandler(disposition bool) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
m, ok := index.names[path.Base(r.URL.Path)]
if !ok || !m.HasReport {
http.NotFound(w, r)
return
}
contentType := "text/plain; charset=utf-8"
if disposition {
contentType = "application/octet-stream"
// quoting like this is unsound, but okay, because metadata is hardcoded
contentDisposition := `attachment; filename="`
contentDisposition += m.Name + "-"
if m.Version != "" {
contentDisposition += m.Version + "-"
}
contentDisposition += m.ids + `.log"`
w.Header().Set("Content-Disposition", contentDisposition)
}
w.Header().Set("Content-Type", contentType)
w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
if err := func() (err error) {
defer index.handleAccess(&err)()
_, err = w.Write(m.status)
return
}(); err != nil {
log.Println(err)
http.Error(
w, "cannot deliver status, contact maintainers",
http.StatusInternalServerError,
)
}
}
}
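The `Content-Disposition` assembly above can be reproduced standalone; `buildDisposition` and the sample name/version/ident below are made up for illustration and are not part of the server:

```go
package main

import "fmt"

// buildDisposition mirrors the header assembly in newStatusHandler:
// name, optional version, and the pre-encoded ident joined by dashes.
func buildDisposition(name, version, ids string) string {
	d := `attachment; filename="`
	d += name + "-"
	if version != "" {
		d += version + "-"
	}
	d += ids + `.log"`
	return d
}

func main() {
	fmt.Println(buildDisposition("mksh", "59c", "abc123"))
	// → attachment; filename="mksh-59c-abc123.log"
}
```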
// handleGet writes a slice of metadata with specified order.
func (index *packageIndex) handleGet(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
limit, err := strconv.Atoi(q.Get("limit"))
if err != nil || limit > 100 || limit < 1 {
http.Error(
w, "limit must be an integer between 1 and 100",
http.StatusBadRequest,
)
return
}
i, err := strconv.Atoi(q.Get("index"))
if err != nil || i >= len(index.sorts[0]) || i < 0 {
http.Error(
w, "index must be an integer between 0 and "+
strconv.Itoa(int(rosa.PresetUnexportedStart-1)),
http.StatusBadRequest,
)
return
}
sort, err := strconv.Atoi(q.Get("sort"))
if err != nil || sort >= len(index.sorts) || sort < 0 {
http.Error(
w, "sort must be an integer between 0 and "+
strconv.Itoa(sortOrderEnd),
http.StatusBadRequest,
)
return
}
values := index.sorts[sort][i:min(i+limit, len(index.sorts[sort]))]
writeAPIPayload(w, &struct {
Values []*metadata `json:"values"`
}{values})
}
func (index *packageIndex) handleSearch(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
limit, err := strconv.Atoi(q.Get("limit"))
if err != nil || limit > 100 || limit < 1 {
http.Error(
w, "limit must be an integer between 1 and 100",
http.StatusBadRequest,
)
return
}
i, err := strconv.Atoi(q.Get("index"))
if err != nil || i >= len(index.sorts[0]) || i < 0 {
http.Error(
w, "index must be an integer between 0 and "+
strconv.Itoa(int(rosa.PresetUnexportedStart-1)),
http.StatusBadRequest,
)
return
}
search, err := url.QueryUnescape(q.Get("search"))
if len(search) > 100 || err != nil {
http.Error(
w, "search must be a string between 0 and 100 characters long",
http.StatusBadRequest,
)
return
}
desc := q.Get("desc") == "true"
n, res, err := index.performSearchQuery(limit, i, search, desc)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
writeAPIPayload(w, &struct {
Count int `json:"count"`
Values []searchResult `json:"values"`
}{n, res})
}
// apiVersion is the name of the current API revision, as part of the pattern.
const apiVersion = "v1"
// registerAPI registers API handler functions.
func (index *packageIndex) registerAPI(mux *http.ServeMux) {
mux.HandleFunc("GET /api/"+apiVersion+"/info", handleInfo)
mux.HandleFunc("GET /api/"+apiVersion+"/get", index.handleGet)
mux.HandleFunc("GET /api/"+apiVersion+"/search", index.handleSearch)
mux.HandleFunc("GET /api/"+apiVersion+"/status/", index.newStatusHandler(false))
mux.HandleFunc("GET /status/", index.newStatusHandler(true))
}
// writeAPIPayload sets headers common to API responses and encodes payload as
// JSON for the response body.
func writeAPIPayload(w http.ResponseWriter, payload any) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
w.Header().Set("Pragma", "no-cache")
w.Header().Set("Expires", "0")
if err := json.NewEncoder(w).Encode(payload); err != nil {
log.Println(err)
http.Error(
w, "cannot encode payload, contact maintainers",
http.StatusInternalServerError,
)
}
}
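From the client side, a `/api/v1/get` response (which, per the commit history, no longer carries a `count` field) is just a `values` array of metadata objects. A hedged sketch of decoding one page, with field names taken from the JSON tags in the `metadata` struct and a made-up sample body:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// getPayload mirrors the response shape written by handleGet: a single
// "values" array; per-entry fields follow the metadata JSON tags.
type getPayload struct {
	Values []struct {
		Name    string `json:"name"`
		Version string `json:"version,omitempty"`
		Size    int64  `json:"size,omitempty"`
		Report  bool   `json:"report"`
	} `json:"values"`
}

// decodeGet decodes one page of results as returned by the get endpoint.
func decodeGet(body []byte) (getPayload, error) {
	var p getPayload
	err := json.Unmarshal(body, &p)
	return p, err
}

func main() {
	body := []byte(`{"values":[{"name":"mksh","version":"59c","report":true}]}`)
	p, err := decodeGet(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Values[0].Name, p.Values[0].Report) // → mksh true
}
```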

cmd/pkgserver/api_test.go (new file, 183 lines)
@@ -0,0 +1,183 @@
package main
import (
"net/http"
"net/http/httptest"
"slices"
"strconv"
"testing"
"hakurei.app/internal/info"
"hakurei.app/internal/rosa"
)
// prefix is prepended to every API path.
const prefix = "/api/" + apiVersion + "/"
func TestAPIInfo(t *testing.T) {
t.Parallel()
w := httptest.NewRecorder()
handleInfo(w, httptest.NewRequestWithContext(
t.Context(),
http.MethodGet,
prefix+"info",
nil,
))
resp := w.Result()
checkStatus(t, resp, http.StatusOK)
checkAPIHeader(t, w.Header())
checkPayload(t, resp, struct {
Count int `json:"count"`
HakureiVersion string `json:"hakurei_version"`
}{int(rosa.PresetUnexportedStart), info.Version()})
}
func TestAPIGet(t *testing.T) {
t.Parallel()
const target = prefix + "get"
index := newIndex(t)
newRequest := func(suffix string) *httptest.ResponseRecorder {
w := httptest.NewRecorder()
index.handleGet(w, httptest.NewRequestWithContext(
t.Context(),
http.MethodGet,
target+suffix,
nil,
))
return w
}
checkValidate := func(t *testing.T, suffix string, vmin, vmax int, wantErr string) {
t.Run("invalid", func(t *testing.T) {
t.Parallel()
w := newRequest("?" + suffix + "=invalid")
resp := w.Result()
checkError(t, resp, wantErr, http.StatusBadRequest)
})
t.Run("min", func(t *testing.T) {
t.Parallel()
w := newRequest("?" + suffix + "=" + strconv.Itoa(vmin-1))
resp := w.Result()
checkError(t, resp, wantErr, http.StatusBadRequest)
w = newRequest("?" + suffix + "=" + strconv.Itoa(vmin))
resp = w.Result()
checkStatus(t, resp, http.StatusOK)
})
t.Run("max", func(t *testing.T) {
t.Parallel()
w := newRequest("?" + suffix + "=" + strconv.Itoa(vmax+1))
resp := w.Result()
checkError(t, resp, wantErr, http.StatusBadRequest)
w = newRequest("?" + suffix + "=" + strconv.Itoa(vmax))
resp = w.Result()
checkStatus(t, resp, http.StatusOK)
})
}
t.Run("limit", func(t *testing.T) {
t.Parallel()
checkValidate(
t, "index=0&sort=0&limit", 1, 100,
"limit must be an integer between 1 and 100",
)
})
t.Run("index", func(t *testing.T) {
t.Parallel()
checkValidate(
t, "limit=1&sort=0&index", 0, int(rosa.PresetUnexportedStart-1),
"index must be an integer between 0 and "+strconv.Itoa(int(rosa.PresetUnexportedStart-1)),
)
})
t.Run("sort", func(t *testing.T) {
t.Parallel()
checkValidate(
t, "index=0&limit=1&sort", 0, int(sortOrderEnd),
"sort must be an integer between 0 and "+strconv.Itoa(int(sortOrderEnd)),
)
})
checkWithSuffix := func(name, suffix string, want []*metadata) {
t.Run(name, func(t *testing.T) {
t.Parallel()
w := newRequest(suffix)
resp := w.Result()
checkStatus(t, resp, http.StatusOK)
checkAPIHeader(t, w.Header())
			checkPayloadFunc(t, resp, func(got *struct {
				Values []*metadata `json:"values"`
			}) bool {
				return len(got.Values) == len(want) &&
					slices.EqualFunc(got.Values, want, func(a, b *metadata) bool {
return (a.Version == b.Version ||
a.Version == rosa.Unversioned ||
b.Version == rosa.Unversioned) &&
a.HasReport == b.HasReport &&
a.Name == b.Name &&
a.Description == b.Description &&
a.Website == b.Website
})
})
})
}
checkWithSuffix("declarationAscending", "?limit=2&index=0&sort=0", []*metadata{
{
Metadata: rosa.GetMetadata(0),
Version: rosa.Std.Version(0),
},
{
Metadata: rosa.GetMetadata(1),
Version: rosa.Std.Version(1),
},
})
checkWithSuffix("declarationAscending offset", "?limit=3&index=5&sort=0", []*metadata{
{
Metadata: rosa.GetMetadata(5),
Version: rosa.Std.Version(5),
},
{
Metadata: rosa.GetMetadata(6),
Version: rosa.Std.Version(6),
},
{
Metadata: rosa.GetMetadata(7),
Version: rosa.Std.Version(7),
},
})
checkWithSuffix("declarationDescending", "?limit=3&index=0&sort=1", []*metadata{
{
Metadata: rosa.GetMetadata(rosa.PresetUnexportedStart - 1),
Version: rosa.Std.Version(rosa.PresetUnexportedStart - 1),
},
{
Metadata: rosa.GetMetadata(rosa.PresetUnexportedStart - 2),
Version: rosa.Std.Version(rosa.PresetUnexportedStart - 2),
},
{
Metadata: rosa.GetMetadata(rosa.PresetUnexportedStart - 3),
Version: rosa.Std.Version(rosa.PresetUnexportedStart - 3),
},
})
checkWithSuffix("declarationDescending offset", "?limit=1&index=37&sort=1", []*metadata{
{
Metadata: rosa.GetMetadata(rosa.PresetUnexportedStart - 38),
Version: rosa.Std.Version(rosa.PresetUnexportedStart - 38),
},
})
}

cmd/pkgserver/index.go (new file, 105 lines)
@@ -0,0 +1,105 @@
package main
import (
"cmp"
"errors"
"slices"
"strings"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
)
const (
declarationAscending = iota
declarationDescending
nameAscending
nameDescending
sizeAscending
sizeDescending
sortOrderEnd = iota - 1
)
// packageIndex refers to metadata by name and various sort orders.
type packageIndex struct {
sorts [sortOrderEnd + 1][rosa.PresetUnexportedStart]*metadata
names map[string]*metadata
search searchCache
// Taken from [rosa.Report] if available.
handleAccess func(*error) func()
}
// metadata holds [rosa.Metadata] extended with additional information.
type metadata struct {
p rosa.PArtifact
*rosa.Metadata
// Populated via [rosa.Toolchain.Version], [rosa.Unversioned] is equivalent
// to the zero value. Otherwise, the zero value is invalid.
Version string `json:"version,omitempty"`
// Output data size, available if present in report.
Size int64 `json:"size,omitempty"`
// Whether the underlying [pkg.Artifact] is present in the report.
HasReport bool `json:"report"`
// Ident string encoded ahead of time.
ids string
// Backed by [rosa.Report], access must be prepared by HandleAccess.
status []byte
}
// populate deterministically populates packageIndex, optionally with a report.
func (index *packageIndex) populate(cache *pkg.Cache, report *rosa.Report) (err error) {
if report != nil {
defer report.HandleAccess(&err)()
index.handleAccess = report.HandleAccess
}
var work [rosa.PresetUnexportedStart]*metadata
index.names = make(map[string]*metadata)
for p := range rosa.PresetUnexportedStart {
m := metadata{
p: p,
Metadata: rosa.GetMetadata(p),
Version: rosa.Std.Version(p),
}
if m.Version == "" {
return errors.New("invalid version from " + m.Name)
}
if m.Version == rosa.Unversioned {
m.Version = ""
}
if cache != nil && report != nil {
id := cache.Ident(rosa.Std.Load(p))
m.ids = pkg.Encode(id.Value())
m.status, m.Size = report.ArtifactOf(id)
m.HasReport = m.Size >= 0
}
work[p] = &m
index.names[m.Name] = &m
}
index.sorts[declarationAscending] = work
index.sorts[declarationDescending] = work
slices.Reverse(index.sorts[declarationDescending][:])
index.sorts[nameAscending] = work
slices.SortFunc(index.sorts[nameAscending][:], func(a, b *metadata) int {
return strings.Compare(a.Name, b.Name)
})
index.sorts[nameDescending] = index.sorts[nameAscending]
slices.Reverse(index.sorts[nameDescending][:])
index.sorts[sizeAscending] = work
slices.SortFunc(index.sorts[sizeAscending][:], func(a, b *metadata) int {
return cmp.Compare(a.Size, b.Size)
})
index.sorts[sizeDescending] = index.sorts[sizeAscending]
slices.Reverse(index.sorts[sizeDescending][:])
return
}

cmd/pkgserver/main.go (new file, 115 lines)
@@ -0,0 +1,115 @@
package main
import (
"context"
"errors"
"log"
"net/http"
"os"
"os/signal"
"syscall"
"time"
"hakurei.app/check"
"hakurei.app/command"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
"hakurei.app/message"
)
const shutdownTimeout = 15 * time.Second
func main() {
log.SetFlags(0)
log.SetPrefix("pkgserver: ")
var (
flagBaseDir string
flagAddr string
)
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP)
defer stop()
msg := message.New(log.Default())
c := command.New(os.Stderr, log.Printf, "pkgserver", func(args []string) error {
var (
cache *pkg.Cache
report *rosa.Report
)
switch len(args) {
case 0:
break
case 1:
baseDir, err := check.NewAbs(flagBaseDir)
if err != nil {
return err
}
cache, err = pkg.Open(ctx, msg, 0, 0, 0, baseDir)
if err != nil {
return err
}
defer cache.Close()
report, err = rosa.OpenReport(args[0])
if err != nil {
return err
}
default:
			return errors.New("pkgserver takes at most 1 argument")
}
var index packageIndex
index.search = make(searchCache)
if err := index.populate(cache, report); err != nil {
return err
}
ticker := time.NewTicker(1 * time.Minute)
go func() {
for {
select {
case <-ctx.Done():
ticker.Stop()
return
case <-ticker.C:
index.search.clean()
}
}
}()
var mux http.ServeMux
uiRoutes(&mux)
testUIRoutes(&mux)
index.registerAPI(&mux)
server := http.Server{
Addr: flagAddr,
Handler: &mux,
}
go func() {
<-ctx.Done()
c, cancel := context.WithTimeout(context.Background(), shutdownTimeout)
defer cancel()
if err := server.Shutdown(c); err != nil {
log.Fatal(err)
}
}()
return server.ListenAndServe()
}).Flag(
&flagBaseDir,
"b", command.StringFlag(""),
"base directory for cache",
).Flag(
&flagAddr,
"addr", command.StringFlag(":8067"),
"TCP network address to listen on",
)
c.MustParse(os.Args[1:], func(err error) {
if errors.Is(err, http.ErrServerClosed) {
os.Exit(0)
}
log.Fatal(err)
})
}

@@ -0,0 +1,96 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"io"
"net/http"
"reflect"
"testing"
)
// newIndex returns the address of a newly populated packageIndex.
func newIndex(t *testing.T) *packageIndex {
t.Helper()
var index packageIndex
if err := index.populate(nil, nil); err != nil {
t.Fatalf("populate: error = %v", err)
}
return &index
}
// checkStatus checks response status code.
func checkStatus(t *testing.T, resp *http.Response, want int) {
t.Helper()
if resp.StatusCode != want {
t.Errorf(
"StatusCode: %s, want %s",
http.StatusText(resp.StatusCode),
http.StatusText(want),
)
}
}
// checkHeader checks the value of a header entry.
func checkHeader(t *testing.T, h http.Header, key, want string) {
t.Helper()
if got := h.Get(key); got != want {
t.Errorf("%s: %q, want %q", key, got, want)
}
}
// checkAPIHeader checks common entries set for API endpoints.
func checkAPIHeader(t *testing.T, h http.Header) {
t.Helper()
checkHeader(t, h, "Content-Type", "application/json; charset=utf-8")
checkHeader(t, h, "Cache-Control", "no-cache, no-store, must-revalidate")
checkHeader(t, h, "Pragma", "no-cache")
checkHeader(t, h, "Expires", "0")
}
// checkPayloadFunc checks the JSON response of an API endpoint by passing it to f.
func checkPayloadFunc[T any](
t *testing.T,
resp *http.Response,
f func(got *T) bool,
) {
t.Helper()
var got T
r := io.Reader(resp.Body)
if testing.Verbose() {
var buf bytes.Buffer
r = io.TeeReader(r, &buf)
defer func() { t.Helper(); t.Log(buf.String()) }()
}
if err := json.NewDecoder(r).Decode(&got); err != nil {
t.Fatalf("Decode: error = %v", err)
}
if !f(&got) {
t.Errorf("Body: %#v", got)
}
}
// checkPayload checks the JSON response of an API endpoint.
func checkPayload[T any](t *testing.T, resp *http.Response, want T) {
t.Helper()
checkPayloadFunc(t, resp, func(got *T) bool {
return reflect.DeepEqual(got, &want)
})
}
func checkError(t *testing.T, resp *http.Response, error string, code int) {
t.Helper()
checkStatus(t, resp, code)
if got, _ := io.ReadAll(resp.Body); string(got) != fmt.Sprintln(error) {
t.Errorf("Body: %q, want %q", string(got), error)
}
}

cmd/pkgserver/search.go (new file, 81 lines)
@@ -0,0 +1,81 @@
package main
import (
"cmp"
"maps"
"regexp"
"slices"
"time"
)
type searchCache map[string]searchCacheEntry
type searchResult struct {
NameIndices [][]int `json:"name_matches"`
DescIndices [][]int `json:"desc_matches,omitempty"`
Score float64 `json:"score"`
*metadata
}
type searchCacheEntry struct {
query string
results []searchResult
expiry time.Time
}
func (index *packageIndex) performSearchQuery(limit int, i int, search string, desc bool) (int, []searchResult, error) {
query := search
if desc {
query += ";withDesc"
}
entry, ok := index.search[query]
if ok && len(entry.results) > 0 {
return len(entry.results), entry.results[min(i, len(entry.results)-1):min(i+limit, len(entry.results))], nil
}
regex, err := regexp.Compile(search)
if err != nil {
return 0, make([]searchResult, 0), err
}
res := make([]searchResult, 0)
for p := range maps.Values(index.names) {
nameIndices := regex.FindAllIndex([]byte(p.Name), -1)
		var descIndices [][]int
if desc {
descIndices = regex.FindAllIndex([]byte(p.Description), -1)
}
if nameIndices == nil && descIndices == nil {
continue
}
score := float64(indexsum(nameIndices)) / (float64(len(nameIndices)) + 1)
if desc {
score += float64(indexsum(descIndices)) / (float64(len(descIndices)) + 1) / 10.0
}
res = append(res, searchResult{
NameIndices: nameIndices,
DescIndices: descIndices,
Score: score,
metadata: p,
})
}
	slices.SortFunc(res, func(a, b searchResult) int { return -cmp.Compare(a.Score, b.Score) })
	expiry := time.Now().Add(1 * time.Minute)
	entry = searchCacheEntry{
		query:   search,
		results: res,
		expiry:  expiry,
	}
	index.search[query] = entry
	return len(res), res[min(i, len(res)):min(i+limit, len(res))], nil
}
func (s *searchCache) clean() {
maps.DeleteFunc(*s, func(_ string, v searchCacheEntry) bool {
return v.expiry.Before(time.Now())
})
}
func indexsum(in [][]int) int {
sum := 0
for i := 0; i < len(in); i++ {
sum += in[i][1] - in[i][0]
}
return sum
}

cmd/pkgserver/test_ui.go (new file, 35 lines)
@@ -0,0 +1,35 @@
//go:build frontend && frontend_test
package main
import (
"embed"
"io/fs"
"net/http"
)
// Always remove ui_test/ui up front: if the previous tsc run failed, the
// trailing cleanup rm never executed.
//go:generate sh -c "rm -r ui_test/ui/ 2>/dev/null || true"
//go:generate mkdir ui_test/ui
//go:generate sh -c "cp ui/static/*.ts ui_test/ui/"
//go:generate tsc -p ui_test
//go:generate rm -r ui_test/ui/
//go:generate cp ui_test/lib/ui.css ui_test/static/style.css
//go:generate cp ui_test/lib/ui.html ui_test/static/index.html
//go:generate sh -c "cd ui_test/lib && cp *.svg ../static/"
//go:embed ui_test/static
var _staticTest embed.FS
var staticTest = func() fs.FS {
if f, err := fs.Sub(_staticTest, "ui_test/static"); err != nil {
panic(err)
} else {
return f
}
}()
func testUIRoutes(mux *http.ServeMux) {
mux.Handle("GET /test/", http.StripPrefix("/test", http.FileServer(http.FS(staticTest))))
}

@@ -0,0 +1,7 @@
//go:build !(frontend && frontend_test)
package main
import "net/http"
func testUIRoutes(mux *http.ServeMux) {}

cmd/pkgserver/ui.go (new file, 33 lines)
@@ -0,0 +1,33 @@
package main
import "net/http"
func serveWebUI(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
w.Header().Set("Pragma", "no-cache")
w.Header().Set("Expires", "0")
w.Header().Set("X-Content-Type-Options", "nosniff")
w.Header().Set("X-XSS-Protection", "1")
w.Header().Set("X-Frame-Options", "DENY")
http.ServeFileFS(w, r, content, "ui/index.html")
}
func serveStaticContent(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/static/style.css":
http.ServeFileFS(w, r, content, "ui/static/style.css")
case "/favicon.ico":
http.ServeFileFS(w, r, content, "ui/static/favicon.ico")
case "/static/index.js":
http.ServeFileFS(w, r, content, "ui/static/index.js")
default:
http.NotFound(w, r)
}
}
func uiRoutes(mux *http.ServeMux) {
mux.HandleFunc("GET /{$}", serveWebUI)
mux.HandleFunc("GET /favicon.ico", serveStaticContent)
mux.HandleFunc("GET /static/", serveStaticContent)
}

@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="static/style.css">
<title>Hakurei PkgServer</title>
<script src="static/index.js"></script>
</head>
<body>
<h1>Hakurei PkgServer</h1>
<div class="top-controls" id="top-controls-regular">
<p>Showing entries <span id="entry-counter"></span>.</p>
<span id="search-bar">
<label for="search">Search: </label>
<input type="text" name="search" id="search"/>
<button onclick="doSearch()">Find</button>
<label for="include-desc">Include descriptions: </label>
<input type="checkbox" name="include-desc" id="include-desc" checked/>
</span>
<div><label for="count">Entries per page: </label><select name="count" id="count">
<option value="10">10</option>
<option value="20">20</option>
<option value="30">30</option>
<option value="50">50</option>
</select></div>
<div><label for="sort">Sort by: </label><select name="sort" id="sort">
<option value="0">Definition (ascending)</option>
<option value="1">Definition (descending)</option>
<option value="2">Name (ascending)</option>
<option value="3">Name (descending)</option>
<option value="4">Size (ascending)</option>
<option value="5">Size (descending)</option>
</select></div>
</div>
<div class="top-controls" id="search-top-controls" hidden>
<p>Showing search results <span id="search-entry-counter"></span> for query "<span id="search-query"></span>".</p>
<button onclick="exitSearch()">Back</button>
<div><label for="search-count">Entries per page: </label><select name="search-count" id="search-count">
<option value="10">10</option>
<option value="20">20</option>
<option value="30">30</option>
<option value="50">50</option>
</select></div>
<p>Sorted by best match</p>
</div>
<div class="page-controls"><a href="javascript:prevPage()">&laquo; Previous</a> <input type="text" class="page-number" value="1"/> <a href="javascript:nextPage()">Next &raquo;</a></div>
<table id="pkg-list">
<tr><td>Loading...</td></tr>
</table>
<div class="page-controls"><a href="javascript:prevPage()">&laquo; Previous</a> <input type="text" class="page-number" value="1"/> <a href="javascript:nextPage()">Next &raquo;</a></div>
<footer>
<p>&copy;<a href="https://hakurei.app/">Hakurei</a> (<span id="hakurei-version">unknown</span>). Licensed under the MIT license.</p>
</footer>
<script>main();</script>
</body>
</html>

cmd/pkgserver/ui/index.ts

@@ -0,0 +1,331 @@
interface PackageIndexEntry {
name: string
size?: number
description?: string
website?: string
version?: string
report?: boolean
}
function entryToHTML(entry: PackageIndexEntry | SearchResult): HTMLTableRowElement {
let v = entry.version != null ? `<span>${escapeHtml(entry.version)}</span>` : ""
let s = entry.size != null && entry.size > 0 ? `<p>Size: ${toByteSizeString(entry.size)} (${entry.size})</p>` : ""
let n: string
let d: string
if ('name_matches' in entry) {
n = `<h2>${nameMatches(entry as SearchResult)} ${v}</h2>`
} else {
n = `<h2>${escapeHtml(entry.name)} ${v}</h2>`
}
if ('desc_matches' in entry && STATE.getIncludeDescriptions()) {
d = descMatches(entry as SearchResult)
} else {
d = (entry as PackageIndexEntry).description != null ? `<p>${escapeHtml((entry as PackageIndexEntry).description)}</p>` : ""
}
let w = entry.website != null ? `<a href="${encodeURI(entry.website)}">Website</a>` : ""
let r = entry.report ? `Log (<a href="${encodeURI('/api/v1/status/' + entry.name)}">View</a> | <a href="${encodeURI('/status/' + entry.name)}">Download</a>)` : ""
let row = <HTMLTableRowElement>(document.createElement('tr'))
row.innerHTML = `<td>
${n}
${d}
${s}
${w}
${r}
</td>`
return row
}
function nameMatches(sr: SearchResult): string {
return markMatches(sr.name, sr.name_matches)
}
function descMatches(sr: SearchResult): string {
return markMatches(sr.description!, sr.desc_matches)
}
function markMatches(str: string, indices: [number, number][]): string {
if (indices == null) {
return str
}
let out: string = ""
let j = 0
for (let i = 0; i < str.length; i++) {
if (j < indices.length) {
if (i === indices[j][0]) {
out += `<mark>${escapeHtmlChar(str[i])}`
continue
}
if (i === indices[j][1]) {
out += `</mark>${escapeHtmlChar(str[i])}`
j++
continue
}
}
out += escapeHtmlChar(str[i])
}
if (indices[j] !== undefined) {
out += "</mark>"
}
return out
}
function toByteSizeString(bytes: number): string {
if (bytes == null) return `unspecified`
if (bytes < 1024) return `${bytes}B`
if (bytes < Math.pow(1024, 2)) return `${(bytes / 1024).toFixed(2)}kiB`
if (bytes < Math.pow(1024, 3)) return `${(bytes / Math.pow(1024, 2)).toFixed(2)}MiB`
if (bytes < Math.pow(1024, 4)) return `${(bytes / Math.pow(1024, 3)).toFixed(2)}GiB`
if (bytes < Math.pow(1024, 5)) return `${(bytes / Math.pow(1024, 4)).toFixed(2)}TiB`
return "not only is it big, it's large"
}
const API_VERSION = 1
const ENDPOINT = `/api/v${API_VERSION}`
interface InfoPayload {
count?: number
hakurei_version?: string
}
async function infoRequest(): Promise<InfoPayload> {
const res = await fetch(`${ENDPOINT}/info`)
const payload = await res.json()
return payload as InfoPayload
}
interface GetPayload {
values?: PackageIndexEntry[]
}
enum SortOrders {
DeclarationAscending,
DeclarationDescending,
NameAscending,
NameDescending,
SizeAscending,
SizeDescending
}
async function getRequest(limit: number, index: number, sort: SortOrders): Promise<GetPayload> {
const res = await fetch(`${ENDPOINT}/get?limit=${limit}&index=${index}&sort=${sort.valueOf()}`)
const payload = await res.json()
return payload as GetPayload
}
interface SearchResult extends PackageIndexEntry {
name_matches: [number, number][]
desc_matches: [number, number][]
score: number
}
interface SearchPayload {
count?: number
values?: SearchResult[]
}
async function searchRequest(limit: number, index: number, search: string, desc: boolean): Promise<SearchPayload> {
const res = await fetch(`${ENDPOINT}/search?limit=${limit}&index=${index}&search=${encodeURIComponent(search)}&desc=${desc}`)
if (!res.ok) {
exitSearch()
alert("invalid search query!")
return Promise.reject(res.statusText)
}
const payload = await res.json()
return payload as SearchPayload
}
class State {
entriesPerPage: number = 10
entryIndex: number = 0
maxTotal: number = 0
maxEntries: number = 0
sort: SortOrders = SortOrders.DeclarationAscending
search: boolean = false
getEntriesPerPage(): number {
return this.entriesPerPage
}
setEntriesPerPage(entriesPerPage: number) {
this.entriesPerPage = entriesPerPage
this.setEntryIndex(Math.floor(this.getEntryIndex() / entriesPerPage) * entriesPerPage)
}
getEntryIndex(): number {
return this.entryIndex
}
setEntryIndex(entryIndex: number) {
this.entryIndex = entryIndex
this.updatePage()
this.updateRange()
this.updateListings()
}
getMaxTotal(): number {
return this.maxTotal
}
setMaxTotal(max: number) {
this.maxTotal = max
}
getSortOrder(): SortOrders {
return this.sort
}
setSortOrder(sortOrder: SortOrders) {
this.sort = sortOrder
this.setEntryIndex(0)
}
updatePage() {
let page = Math.ceil(((this.getEntryIndex() + this.getEntriesPerPage()) - 1) / this.getEntriesPerPage())
for (let e of document.getElementsByClassName("page-number")) {
(e as HTMLInputElement).value = String(page)
}
}
updateRange() {
let max = Math.min(this.getEntryIndex() + this.getEntriesPerPage(), this.getMaxTotal())
document.getElementById("entry-counter")!.textContent = `${this.getEntryIndex() + 1}-${max} of ${this.getMaxTotal()}`
if (this.search) {
document.getElementById("search-entry-counter")!.textContent = `${this.getEntryIndex() + 1}-${max} of ${this.maxTotal}/${this.maxEntries}`
document.getElementById("search-query")!.innerHTML = `<code>${escapeHtml(this.getSearchQuery())}</code>`
}
}
getSearchQuery(): string {
let queryString = document.getElementById("search")!;
return (queryString as HTMLInputElement).value
}
getIncludeDescriptions(): boolean {
let includeDesc = document.getElementById("include-desc")!;
return (includeDesc as HTMLInputElement).checked
}
updateListings() {
if (this.search) {
searchRequest(this.getEntriesPerPage(), this.getEntryIndex(), this.getSearchQuery(), this.getIncludeDescriptions())
.then(res => {
let table = document.getElementById("pkg-list")!
table.innerHTML = ''
for (let row of res.values!) {
table.appendChild(entryToHTML(row))
}
STATE.maxTotal = res.count!
STATE.updateRange()
if(res.count! < 1) {
exitSearch()
alert("no results found!")
}
})
} else {
getRequest(this.getEntriesPerPage(), this.getEntryIndex(), this.getSortOrder())
.then(res => {
let table = document.getElementById("pkg-list")!
table.innerHTML = ''
for (let row of res.values!) {
table.appendChild(entryToHTML(row))
}
})
}
}
}
let STATE: State
function lastPageIndex(): number {
// Subtract one so a total that is an exact multiple of the page size
// doesn't produce an empty trailing page.
return Math.floor(Math.max(0, STATE.getMaxTotal() - 1) / STATE.getEntriesPerPage()) * STATE.getEntriesPerPage()
}
function setPage(page: number) {
STATE.setEntryIndex(Math.max(0, Math.min(STATE.getEntriesPerPage() * (page - 1), lastPageIndex())))
}
function escapeHtml(str?: string): string {
let out: string = ''
if (str == undefined) return ""
for (let i = 0; i < str.length; i++) {
out += escapeHtmlChar(str[i])
}
return out
}
function escapeHtmlChar(char: string): string {
if (char.length != 1) return char
switch (char[0]) {
case '&':
return "&amp;"
case '<':
return "&lt;"
case '>':
return "&gt;"
case '"':
return "&quot;"
case "'":
return "&apos;"
default:
return char
}
}
function firstPage() {
STATE.setEntryIndex(0)
}
function prevPage() {
let index = STATE.getEntryIndex()
STATE.setEntryIndex(Math.max(0, index - STATE.getEntriesPerPage()))
}
function lastPage() {
STATE.setEntryIndex(lastPageIndex())
}
function nextPage() {
let index = STATE.getEntryIndex()
STATE.setEntryIndex(Math.min(lastPageIndex(), index + STATE.getEntriesPerPage()))
}
function doSearch() {
document.getElementById("top-controls-regular")!.toggleAttribute("hidden");
document.getElementById("search-top-controls")!.toggleAttribute("hidden");
STATE.search = true;
STATE.setEntryIndex(0);
}
function exitSearch() {
document.getElementById("top-controls-regular")!.toggleAttribute("hidden");
document.getElementById("search-top-controls")!.toggleAttribute("hidden");
STATE.search = false;
STATE.setMaxTotal(STATE.maxEntries)
STATE.setEntryIndex(0)
}
function main() {
STATE = new State()
infoRequest()
.then(res => {
STATE.maxEntries = res.count!
STATE.setMaxTotal(STATE.maxEntries)
document.getElementById("hakurei-version")!.textContent = res.hakurei_version!
STATE.updateRange()
STATE.updateListings()
})
for (let e of document.getElementsByClassName("page-number")) {
e.addEventListener("change", (_) => {
setPage(parseInt((e as HTMLInputElement).value))
})
}
document.getElementById("count")?.addEventListener("change", (event) => {
STATE.setEntriesPerPage(parseInt((event.target as HTMLSelectElement).value))
})
document.getElementById("sort")?.addEventListener("change", (event) => {
STATE.setSortOrder(parseInt((event.target as HTMLSelectElement).value))
})
document.getElementById("search")?.addEventListener("keyup", (event) => {
if (event.key === 'Enter') doSearch()
})
}

Binary file not shown


@@ -0,0 +1,21 @@
.page-number {
width: 2em;
text-align: center;
}
@media (prefers-color-scheme: dark) {
html {
background-color: #2c2c2c;
color: ghostwhite;
}
}
@media (prefers-color-scheme: light) {
html {
background-color: #d3d3d3;
color: black;
}
}


@@ -0,0 +1,8 @@
{
"compilerOptions": {
"target": "ES2024",
"strict": true,
"alwaysStrict": true,
"outDir": "static"
}
}

cmd/pkgserver/ui_full.go

@@ -0,0 +1,9 @@
//go:build frontend

package main

import "embed"

//go:generate tsc -p ui

//go:embed ui/*
var content embed.FS

cmd/pkgserver/ui_stub.go

@@ -0,0 +1,7 @@
//go:build !frontend

package main

import "testing/fstest"

var content fstest.MapFS


@@ -0,0 +1,2 @@
// Import all test files to register their test suites.
import "./index_test.js";


@@ -0,0 +1,2 @@
import { suite, test } from "./lib/test.js";
import "./ui/index.js";


@@ -0,0 +1,48 @@
#!/usr/bin/env node
// Many editors have terminal emulators built in, so running tests with NodeJS
// provides faster iteration, especially for those acclimated to test-driven
// development.
import "../all_tests.js";
import { StreamReporter, GLOBAL_REGISTRAR } from "./test.js";
// TypeScript doesn't like process and Deno as their type definitions aren't
// installed, but doesn't seem to complain if they're accessed through
// globalThis.
const process: any = (globalThis as any).process;
const Deno: any = (globalThis as any).Deno;
function getArgs(): string[] {
if (process) {
const [runtime, program, ...args] = process.argv;
return args;
}
if (Deno) return Deno.args;
return [];
}
function exit(code?: number): never {
if (Deno) Deno.exit(code);
if (process) process.exit(code);
throw `exited with code ${code ?? 0}`;
}
const args = getArgs();
let verbose = false;
if (args.length > 1) {
console.error("Too many arguments");
exit(1);
}
if (args.length === 1) {
if (args[0] === "-v" || args[0] === "--verbose" || args[0] === "-verbose") {
verbose = true;
} else if (args[0] !== "--") {
console.error(`Unknown argument '${args[0]}'`);
exit(1);
}
}
let reporter = new StreamReporter({ writeln: console.log }, verbose);
GLOBAL_REGISTRAR.run(reporter);
exit(reporter.succeeded() ? 0 : 1);


@@ -0,0 +1,13 @@
<?xml version="1.0"?>
<!-- See failure-open.svg for an explanation of the view box dimensions. -->
<svg xmlns="http://www.w3.org/2000/svg" width="20" viewBox="-20,-20 160 130">
<!-- This triangle should match success-closed.svg, fill and stroke color notwithstanding. -->
<polygon points="0,0 100,50 0,100" fill="red" stroke="red" stroke-width="15" stroke-linejoin="round"/>
<!--
! y-coordinates go before x-coordinates here to highlight the difference
! (or, lack thereof) between these numbers and the ones in failure-open.svg;
! try a textual diff. Make sure to keep the numbers in sync!
-->
<line y1="30" x1="10" y2="70" x2="50" stroke="white" stroke-width="16"/>
<line y1="30" x1="50" y2="70" x2="10" stroke="white" stroke-width="16"/>
</svg>



@@ -0,0 +1,35 @@
<?xml version="1.0"?>
<!--
! This view box is a bit weird: the strokes assume they're working in a view
! box that spans from (0,0) to (100,100), and indeed that is convenient for
! conceptualizing the strokes, but the strokes themselves have a considerable
! width that gets clipped by such restrictive view box dimensions. Hence, the view is
! shifted from (0,0)(100,100) to (-20,-20)(120,120), to make room for the
! clipped stroke, while leaving behind an illusion of working in a view box
! spanning from (0,0) to (100,100).
!
! However, the resulting SVG is too close to the summary text, and CSS
! properties to add padding do not seem to work with `content:` (likely because
! they're anonymous replaced elements); thus, the width of the view is
! increased considerably to provide padding in the SVG itself, while leaving
! the strokes oblivious.
!
! It gets worse: the summary text isn't vertically aligned with the icon! As
! a flexbox cannot be used in a summary to align the marker with the text, the
! simplest and most effective solution is to reduce the height of the view box
! from 140 to 130, thereby removing some of the bottom padding present.
!
! All six SVGs use the same view box (and indeed, they refer to this comment)
! so that they all appear to be the same size and position relative to each
! other on the DOM—indeed, the view box dimensions, alongside the width,
! directly control their placement on the DOM.
!
! TL;DR: CSS is janky, overflow is weird, and SVG is awesome!
-->
<svg xmlns="http://www.w3.org/2000/svg" width="20" viewBox="-20,-20 160 130">
<!-- This triangle should match success-open.svg, fill and stroke color notwithstanding. -->
<polygon points="0,0 100,0 50,100" fill="red" stroke="red" stroke-width="15" stroke-linejoin="round"/>
<!-- See the comment in failure-closed.svg before modifying this. -->
<line x1="30" y1="10" x2="70" y2="50" stroke="white" stroke-width="16"/>
<line x1="30" y1="50" x2="70" y2="10" stroke="white" stroke-width="16"/>
</svg>


@@ -0,0 +1,3 @@
import "../all_tests.js";
import { GoTestReporter, GLOBAL_REGISTRAR } from "./test.js";
GLOBAL_REGISTRAR.run(new GoTestReporter());


@@ -0,0 +1,21 @@
<?xml version="1.0"?>
<!-- See failure-open.svg for an explanation of the view box dimensions. -->
<svg xmlns="http://www.w3.org/2000/svg" width="20" viewBox="-20,-20 160 130">
<!-- This triangle should match success-closed.svg, fill and stroke color notwithstanding. -->
<polygon points="0,0 100,50 0,100" fill="blue" stroke="blue" stroke-width="15" stroke-linejoin="round"/>
<!--
! This path is extremely similar to the one in skip-open.svg; before
! making minor modifications, diff the two to understand how they should
! remain in sync.
-->
<path
d="M 50,50
A 23,23 270,1,1 30,30
l -10,20
m 10,-20
l -20,-10"
fill="none"
stroke="white"
stroke-width="12"
stroke-linejoin="round"/>
</svg>



@@ -0,0 +1,21 @@
<?xml version="1.0"?>
<!-- See failure-open.svg for an explanation of the view box dimensions. -->
<svg xmlns="http://www.w3.org/2000/svg" width="20" viewBox="-20,-20 160 130">
<!-- This triangle should match success-open.svg, fill and stroke color notwithstanding. -->
<polygon points="0,0 100,0 50,100" fill="blue" stroke="blue" stroke-width="15" stroke-linejoin="round"/>
<!--
! This path is extremely similar to the one in skip-closed.svg; before
! making minor modifications, diff the two to understand how they should
! remain in sync.
-->
<path
d="M 50,50
A 23,23 270,1,1 70,30
l 10,-20
m -10,20
l -20,-10"
fill="none"
stroke="white"
stroke-width="12"
stroke-linejoin="round"/>
</svg>



@@ -0,0 +1,16 @@
<?xml version="1.0"?>
<!-- See failure-open.svg for an explanation of the view box dimensions. -->
<svg xmlns="http://www.w3.org/2000/svg" width="20" viewBox="-20,-20 160 130">
<style>
.adaptive-stroke {
stroke: black;
}
@media (prefers-color-scheme: dark) {
.adaptive-stroke {
stroke: ghostwhite;
}
}
</style>
<!-- When updating this triangle, also update the other five SVGs. -->
<polygon points="0,0 100,50 0,100" fill="none" class="adaptive-stroke" stroke-width="15" stroke-linejoin="round"/>
</svg>



@@ -0,0 +1,16 @@
<?xml version="1.0"?>
<!-- See failure-open.svg for an explanation of the view box dimensions. -->
<svg xmlns="http://www.w3.org/2000/svg" width="20" viewBox="-20,-20 160 130">
<style>
.adaptive-stroke {
stroke: black;
}
@media (prefers-color-scheme: dark) {
.adaptive-stroke {
stroke: ghostwhite;
}
}
</style>
<!-- When updating this triangle, also update the other five SVGs. -->
<polygon points="0,0 100,0 50,100" fill="none" class="adaptive-stroke" stroke-width="15" stroke-linejoin="round"/>
</svg>



@@ -0,0 +1,403 @@
// =============================================================================
// DSL
type TestTree = TestGroup | Test;
type TestGroup = { name: string; children: TestTree[] };
type Test = { name: string; test: (t: TestController) => void };
export class TestRegistrar {
#suites: TestGroup[];
constructor() {
this.#suites = [];
}
suite(name: string, children: TestTree[]) {
checkDuplicates(name, children);
this.#suites.push({ name, children });
}
run(reporter: Reporter) {
reporter.register(this.#suites);
for (const suite of this.#suites) {
for (const c of suite.children) runTests(reporter, [suite.name], c);
}
reporter.finalize();
}
}
export let GLOBAL_REGISTRAR = new TestRegistrar();
// Register a suite in the global registrar.
export function suite(name: string, children: TestTree[]) {
GLOBAL_REGISTRAR.suite(name, children);
}
export function group(name: string, children: TestTree[]): TestTree {
checkDuplicates(name, children);
return { name, children };
}
export const context = group;
export const describe = group;
export function test(name: string, test: (t: TestController) => void): TestTree {
return { name, test };
}
function checkDuplicates(parent: string, names: { name: string }[]) {
let seen = new Set<string>();
for (const { name } of names) {
if (seen.has(name)) {
throw new RangeError(`duplicate name '${name}' in '${parent}'`);
}
seen.add(name);
}
}
export type TestState = "success" | "failure" | "skip";
class AbortSentinel {}
export class TestController {
#state: TestState;
logs: string[];
constructor() {
this.#state = "success";
this.logs = [];
}
getState(): TestState {
return this.#state;
}
fail() {
this.#state = "failure";
}
failed(): boolean {
return this.#state === "failure";
}
failNow(): never {
this.fail();
throw new AbortSentinel();
}
log(message: string) {
this.logs.push(message);
}
error(message: string) {
this.log(message);
this.fail();
}
fatal(message: string): never {
this.log(message);
this.failNow();
}
skip(message?: string): never {
if (message != null) this.log(message);
if (this.#state !== "failure") this.#state = "skip";
throw new AbortSentinel();
}
skipped(): boolean {
return this.#state === "skip";
}
}
// =============================================================================
// Execution
export interface TestResult {
state: TestState;
logs: string[];
}
function runTests(reporter: Reporter, parents: string[], node: TestTree) {
const path = [...parents, node.name];
if ("children" in node) {
for (const c of node.children) runTests(reporter, path, c);
return;
}
let controller = new TestController();
try {
node.test(controller);
} catch (e) {
if (!(e instanceof AbortSentinel)) {
controller.error(extractExceptionString(e));
}
}
reporter.update(path, { state: controller.getState(), logs: controller.logs });
}
function extractExceptionString(e: any): string {
// String() instead of .toString() as null and undefined don't have
// properties.
const s = String(e);
if (!(e instanceof Error && e.stack)) return s;
// v8 (Chromium, NodeJS) includes the error message, while Firefox and
// WebKit do not.
if (e.stack.startsWith(s)) return e.stack;
return `${s}\n${e.stack}`;
}
// =============================================================================
// Reporting
export interface Reporter {
register(suites: TestGroup[]): void;
update(path: string[], result: TestResult): void;
finalize(): void;
}
export class NoOpReporter implements Reporter {
suites: TestGroup[];
results: ({ path: string[] } & TestResult)[];
finalized: boolean;
constructor() {
this.suites = [];
this.results = [];
this.finalized = false;
}
register(suites: TestGroup[]) {
this.suites = suites;
}
update(path: string[], result: TestResult) {
this.results.push({ path, ...result });
}
finalize() {
this.finalized = true;
}
}
export interface Stream {
writeln(s: string): void;
}
const SEP = " ";
export class StreamReporter implements Reporter {
stream: Stream;
verbose: boolean;
#successes: ({ path: string[] } & TestResult)[];
#failures: ({ path: string[] } & TestResult)[];
#skips: ({ path: string[] } & TestResult)[];
constructor(stream: Stream, verbose: boolean = false) {
this.stream = stream;
this.verbose = verbose;
this.#successes = [];
this.#failures = [];
this.#skips = [];
}
succeeded(): boolean {
return this.#successes.length > 0 && this.#failures.length === 0;
}
register(suites: TestGroup[]) {}
update(path: string[], result: TestResult) {
if (path.length === 0) throw new RangeError("path is empty");
const pathStr = path.join(SEP);
switch (result.state) {
case "success":
this.#successes.push({ path, ...result });
if (this.verbose) this.stream.writeln(`✅️ ${pathStr}`);
break;
case "failure":
this.#failures.push({ path, ...result });
this.stream.writeln(`⚠️ ${pathStr}`);
break;
case "skip":
this.#skips.push({ path, ...result });
this.stream.writeln(`⏭️ ${pathStr}`);
break;
}
}
finalize() {
if (this.verbose) this.#displaySection("successes", this.#successes, true);
this.#displaySection("failures", this.#failures);
this.#displaySection("skips", this.#skips);
this.stream.writeln("");
this.stream.writeln(
`${this.#successes.length} succeeded, ${this.#failures.length} failed` +
(this.#skips.length ? `, ${this.#skips.length} skipped` : ""),
);
}
#displaySection(name: string, data: ({ path: string[] } & TestResult)[], ignoreEmpty: boolean = false) {
if (!data.length) return;
// Transform [{ path: ["a", "b", "c"] }, { path: ["a", "b", "d"] }]
// into { "a b": ["c", "d"] }.
let pathMap = new Map<string, ({ name: string } & TestResult)[]>();
for (const t of data) {
if (t.path.length === 0) throw new RangeError("path is empty");
const key = t.path.slice(0, -1).join(SEP);
if (!pathMap.has(key)) pathMap.set(key, []);
pathMap.get(key)!.push({ name: t.path.at(-1)!, ...t });
}
this.stream.writeln("");
this.stream.writeln(name.toUpperCase());
this.stream.writeln("=".repeat(name.length));
for (let [path, tests] of pathMap) {
if (ignoreEmpty) tests = tests.filter((t) => t.logs.length);
if (tests.length === 0) continue;
if (tests.length === 1) {
this.#writeOutput(tests[0], path ? `${path}${SEP}` : "", false);
} else {
this.stream.writeln(path);
for (const t of tests) this.#writeOutput(t, " - ", true);
}
}
}
#writeOutput(test: { name: string } & TestResult, prefix: string, nested: boolean) {
let output = "";
if (test.logs.length) {
// Individual logs might span multiple lines, so join them together
// then split it again.
const logStr = test.logs.join("\n");
const lines = logStr.split("\n");
if (lines.length <= 1) {
output = `: ${logStr}`;
} else {
const padding = nested ? " " : " ";
output = ":\n" + lines.map((line) => padding + line).join("\n");
}
}
this.stream.writeln(`${prefix}${test.name}${output}`);
}
}
function assertGetElementById(id: string): HTMLElement {
let elem = document.getElementById(id);
if (elem == null) throw new ReferenceError(`element with ID '${id}' missing from DOM`);
return elem;
}
export class DOMReporter implements Reporter {
register(suites: TestGroup[]) {}
update(path: string[], result: TestResult) {
if (path.length === 0) throw new RangeError("path is empty");
if (result.state === "skip") {
assertGetElementById("skip-counter-text").hidden = false;
}
const counter = assertGetElementById(`${result.state}-counter`);
counter.innerText = (Number(counter.innerText) + 1).toString();
let parent = assertGetElementById("root");
for (const node of path) {
let child: HTMLDetailsElement | null = null;
let summary: HTMLElement | null = null;
let d: Element;
outer: for (d of parent.children) {
if (!(d instanceof HTMLDetailsElement)) continue;
for (const s of d.children) {
if (!(s instanceof HTMLElement)) continue;
if (!(s.tagName === "SUMMARY" && s.innerText === node)) continue;
child = d;
summary = s;
break outer;
}
}
if (!child) {
child = document.createElement("details");
child.className = "test-node";
child.ariaRoleDescription = "test";
summary = document.createElement("summary");
summary.appendChild(document.createTextNode(node));
summary.ariaRoleDescription = "test name";
child.appendChild(summary);
parent.appendChild(child);
}
if (!summary) throw new Error("unreachable as assigned above");
switch (result.state) {
case "failure":
child.open = true;
child.classList.add("failure");
child.classList.remove("skip");
child.classList.remove("success");
// The summary marker does not appear in the AOM, so setting its
// alt text is fruitless; label the summary itself instead.
summary.setAttribute("aria-labelledby", "failure-description");
break;
case "skip":
if (child.classList.contains("failure")) break;
child.classList.add("skip");
child.classList.remove("success");
summary.setAttribute("aria-labelledby", "skip-description");
break;
case "success":
if (child.classList.contains("failure") || child.classList.contains("skip")) break;
child.classList.add("success");
summary.setAttribute("aria-labelledby", "success-description");
break;
}
parent = child;
}
const p = document.createElement("p");
p.classList.add("test-desc");
if (result.logs.length) {
const pre = document.createElement("pre");
pre.appendChild(document.createTextNode(result.logs.join("\n")));
p.appendChild(pre);
} else {
p.classList.add("italic");
p.appendChild(document.createTextNode("No output."));
}
parent.appendChild(p);
}
finalize() {}
}
interface GoNode {
name: string;
subtests?: GoNode[];
}
// Used to display results via `go test`, via some glue code from the Go side.
export class GoTestReporter implements Reporter {
register(suites: TestGroup[]) {
console.log(JSON.stringify(suites.map(GoTestReporter.serialize)));
}
// Convert a test tree into the one expected by the Go code.
static serialize(node: TestTree): GoNode {
return {
name: node.name,
subtests: "children" in node ? node.children.map(GoTestReporter.serialize) : undefined,
};
}
update(path: string[], result: TestResult) {
let state: number;
switch (result.state) {
case "success": state = 0; break;
case "failure": state = 1; break;
case "skip": state = 2; break;
}
console.log(JSON.stringify({ path, state, logs: result.logs }));
}
finalize() {
console.log("null");
}
}


@@ -0,0 +1,87 @@
/*
* When updating the theme colors, also update them in success-closed.svg and
* success-open.svg!
*/
:root {
--bg: #d3d3d3;
--fg: black;
}
@media (prefers-color-scheme: dark) {
:root {
--bg: #2c2c2c;
--fg: ghostwhite;
}
}
html {
background-color: var(--bg);
color: var(--fg);
}
h1, p, summary, noscript {
font-family: sans-serif;
}
noscript {
font-size: 16pt;
}
.root {
margin: 1rem 0;
}
details.test-node {
margin-left: 1rem;
padding: 0.2rem 0.5rem;
border-left: 2px dashed var(--fg);
> summary {
cursor: pointer;
}
&.success > summary::marker {
/*
* WebKit only supports color and font-size properties in ::marker [1], and
* its ::-webkit-details-marker only supports hiding the marker entirely
* [2], contrary to mdn's example [3]; thus, set a color as a fallback:
* while it may not be accessible for colorblind individuals, it's better
* than no indication of a test's state for anyone, as there's no other
* way to include an indication in the marker on WebKit.
*
* [1]: https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/Selectors/::marker#browser_compatibility
* [2]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/summary#default_style
* [3]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/summary#changing_the_summarys_icon
*/
color: var(--fg);
content: url("/test/success-closed.svg") / "success";
}
&.success[open] > summary::marker {
content: url("/test/success-open.svg") / "success";
}
&.failure > summary::marker {
color: red;
content: url("/test/failure-closed.svg") / "failure";
}
&.failure[open] > summary::marker {
content: url("/test/failure-open.svg") / "failure";
}
&.skip > summary::marker {
color: blue;
content: url("/test/skip-closed.svg") / "skip";
}
&.skip[open] > summary::marker {
content: url("/test/skip-open.svg") / "skip";
}
}
p.test-desc {
margin: 0 0 0 1rem;
padding: 2px 0;
> pre {
margin: 0;
}
}
.italic {
font-style: italic;
}


@@ -0,0 +1,39 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/test/style.css">
<title>PkgServer Tests</title>
</head>
<body>
<noscript>
I hate JavaScript as much as you, but this page runs tests written in
JavaScript to test the functionality of code written in JavaScript, so it
wouldn't make sense for it to work without JavaScript. <strong>Please turn
JavaScript on!</strong>
</noscript>
<h1>PkgServer Tests</h1>
<main>
<p id="counters">
<span id="success-counter">0</span> succeeded, <span id="failure-counter">0</span>
failed<span id="skip-counter-text" hidden>, <span id="skip-counter">0</span> skipped</span>.
</p>
<p hidden id="success-description">Successful test</p>
<p hidden id="failure-description">Failed test</p>
<p hidden id="skip-description">Partially or fully skipped test</p>
<div id="root">
</div>
<script type="module">
import "/test/all_tests.js";
import { DOMReporter, GLOBAL_REGISTRAR } from "/test/lib/test.js";
GLOBAL_REGISTRAR.run(new DOMReporter());
</script>
</main>
</body>
</html>


@@ -0,0 +1,8 @@
{
"compilerOptions": {
"target": "ES2024",
"strict": true,
"alwaysStrict": true,
"outDir": "static"
}
}


@@ -7,8 +7,8 @@
 #endif
 #define SHAREFS_MEDIA_RW_ID (1 << 10) - 1 /* owning gid presented to userspace */
-#define SHAREFS_PERM_DIR 0700 /* permission bits for directories presented to userspace */
-#define SHAREFS_PERM_REG 0600 /* permission bits for regular files presented to userspace */
+#define SHAREFS_PERM_DIR 0770 /* permission bits for directories presented to userspace */
+#define SHAREFS_PERM_REG 0660 /* permission bits for regular files presented to userspace */
 #define SHAREFS_FORBIDDEN_FLAGS O_DIRECT /* these open flags are cleared unconditionally */
 /* sharefs_private is populated by sharefs_init and contains process-wide context */


@@ -19,7 +19,6 @@ import (
 	"encoding/gob"
 	"errors"
 	"fmt"
-	"io"
 	"log"
 	"os"
 	"os/exec"
@@ -470,13 +469,14 @@ func _main(s ...string) (exitCode int) {
 		os.NewFile(uintptr(C.fuse_session_fd(se)), "fuse"),
 	))
-	var setupWriter io.WriteCloser
-	if fd, w, err := container.Setup(&z.ExtraFiles); err != nil {
+	var setupPipe [2]*os.File
+	if r, w, err := os.Pipe(); err != nil {
 		log.Println(err)
 		return 5
 	} else {
-		z.Args = append(z.Args, "-osetup="+strconv.Itoa(fd))
-		setupWriter = w
+		z.Args = append(z.Args, "-osetup="+strconv.Itoa(3+len(z.ExtraFiles)))
+		z.ExtraFiles = append(z.ExtraFiles, r)
+		setupPipe[0], setupPipe[1] = r, w
 	}
 	if err := z.Start(); err != nil {
@@ -487,6 +487,9 @@
 		}
 		return 5
 	}
+	if err := setupPipe[0].Close(); err != nil {
+		log.Println(err)
+	}
 	if err := z.Serve(); err != nil {
 		if m, ok := message.GetMessage(err); ok {
 			log.Println(m)
@@ -496,10 +499,10 @@ func _main(s ...string) (exitCode int) {
 		return 5
 	}
-	if err := gob.NewEncoder(setupWriter).Encode(&setup); err != nil {
+	if err := gob.NewEncoder(setupPipe[1]).Encode(&setup); err != nil {
 		log.Println(err)
 		return 5
-	} else if err = setupWriter.Close(); err != nil {
+	} else if err = setupPipe[1].Close(); err != nil {
 		log.Println(err)
 	}


@@ -0,0 +1,122 @@
//go:build raceattr

// The raceattr program reproduces a vfs inode file attribute race.
//
// Even though the libfuse high-level API presents the address of a struct stat
// alongside struct fuse_context, file attributes are actually inherent to the
// inode, instead of the specific call from userspace. The kernel implementation
// in fs/fuse/xattr.c appears to make stale data in the inode (set by a previous
// call) impossible or very unlikely to reach userspace via the stat family of
// syscalls. However, when using default_permissions to have the VFS check
// permissions, this race still happens, despite the resulting struct stat being
// correct when the check is instead overridden via capabilities.
//
// This program reproduces the failure, but because of its continuous nature, it
// is provided independent of the vm integration test suite.
package main
import (
"context"
"flag"
"log"
"os"
"os/signal"
"runtime"
"sync"
"sync/atomic"
"syscall"
)
func newStatAs(
ctx context.Context, cancel context.CancelFunc,
n *atomic.Uint64, ok *atomic.Bool,
uid uint32, pathname string,
continuous bool,
) func() {
return func() {
runtime.LockOSThread()
defer cancel()
if _, _, errno := syscall.Syscall(
syscall.SYS_SETUID, uintptr(uid),
0, 0,
); errno != 0 {
cancel()
log.Printf("cannot set uid to %d: %s", uid, errno)
}
var stat syscall.Stat_t
for {
if ctx.Err() != nil {
return
}
if err := syscall.Lstat(pathname, &stat); err != nil {
// SHAREFS_PERM_DIR not world executable, or
// SHAREFS_PERM_REG not world readable
if !continuous {
cancel()
}
ok.Store(true)
log.Printf("uid %d: %v", uid, err)
} else if stat.Uid != uid {
// appears to be unreachable
if !continuous {
cancel()
}
ok.Store(true)
log.Printf("got uid %d instead of %d", stat.Uid, uid)
}
n.Add(1)
}
}
}
func main() {
log.SetFlags(0)
log.SetPrefix("raceattr: ")
p := flag.String("target", "/sdcard/raceattr", "pathname of test file")
u0 := flag.Int("uid0", 1<<10-1, "first uid")
u1 := flag.Int("uid1", 1<<10-2, "second uid")
count := flag.Int("count", 1, "threads per uid")
continuous := flag.Bool("continuous", false, "keep running even after reproduce")
flag.Parse()
if os.Geteuid() != 0 {
log.Fatal("this program must run as root")
}
ctx, cancel := signal.NotifyContext(
context.Background(),
syscall.SIGINT,
syscall.SIGTERM,
syscall.SIGHUP,
)
if err := os.WriteFile(*p, nil, 0); err != nil {
log.Fatal(err)
}
var (
wg sync.WaitGroup
n atomic.Uint64
ok atomic.Bool
)
if *count < 1 {
*count = 1
}
for range *count {
wg.Go(newStatAs(ctx, cancel, &n, &ok, uint32(*u0), *p, *continuous))
if *u1 >= 0 {
wg.Go(newStatAs(ctx, cancel, &n, &ok, uint32(*u1), *p, *continuous))
}
}
wg.Wait()
if !*continuous && ok.Load() {
log.Printf("reproduced after %d calls", n.Load())
}
}


@@ -21,6 +21,7 @@ import (
 	"hakurei.app/container/std"
 	"hakurei.app/ext"
 	"hakurei.app/fhs"
+	"hakurei.app/internal/landlock"
 	"hakurei.app/message"
 )
@@ -28,9 +29,6 @@ const (
 	// CancelSignal is the signal expected by container init on context cancel.
 	// A custom [Container.Cancel] function must eventually deliver this signal.
 	CancelSignal = SIGUSR2
-	// Timeout for writing initParams to Container.setup.
-	initSetupTimeout = 5 * time.Second
 )
 type (
@@ -53,7 +51,7 @@ type (
 		ExtraFiles []*os.File
 		// Write end of a pipe connected to the init to deliver [Params].
-		setup *os.File
+		setup [2]*os.File
 		// Cancels the context passed to the underlying cmd.
 		cancel context.CancelFunc
 		// Closed after Wait returns. Keeps the spawning thread alive.
@@ -287,14 +285,16 @@ func (p *Container) Start() error {
 	}
 	// place setup pipe before user supplied extra files, this is later restored by init
-	if fd, f, err := Setup(&p.cmd.ExtraFiles); err != nil {
+	if r, w, err := os.Pipe(); err != nil {
 		return &StartError{
 			Fatal: true,
 			Step:  "set up params stream",
 			Err:   err,
 		}
 	} else {
-		p.setup = f
+		fd := 3 + len(p.cmd.ExtraFiles)
+		p.cmd.ExtraFiles = append(p.cmd.ExtraFiles, r)
+		p.setup[0], p.setup[1] = r, w
 		p.cmd.Env = []string{setupEnv + "=" + strconv.Itoa(fd)}
 	}
 	p.cmd.ExtraFiles = append(p.cmd.ExtraFiles, p.ExtraFiles...)
@@ -308,7 +308,7 @@ func (p *Container) Start() error {
 	done <- func() error {
 		// PR_SET_NO_NEW_PRIVS: thread-directed but acts on all processes
 		// created from the calling thread
-		if err := SetNoNewPrivs(); err != nil {
+		if err := setNoNewPrivs(); err != nil {
 			return &StartError{
 				Fatal: true,
 				Step:  "prctl(PR_SET_NO_NEW_PRIVS)",
@@ -318,15 +318,17 @@ func (p *Container) Start() error {
 		// landlock: depends on per-thread state but acts on a process group
 		{
-			rulesetAttr := &RulesetAttr{Scoped: LANDLOCK_SCOPE_SIGNAL}
+			rulesetAttr := &landlock.RulesetAttr{
+				Scoped: landlock.LANDLOCK_SCOPE_SIGNAL,
+			}
 			if !p.HostAbstract {
-				rulesetAttr.Scoped |= LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET
+				rulesetAttr.Scoped |= landlock.LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET
 			}
-			if abi, err := LandlockGetABI(); err != nil {
-				if p.HostAbstract {
+			if abi, err := landlock.GetABI(); err != nil {
+				if p.HostAbstract || !p.HostNet {
 					// landlock can be skipped here as it restricts access
-					// to resources already covered by namespaces (pid)
+					// to resources already covered by namespaces (pid, net)
 					goto landlockOut
 				}
 				return &StartError{Step: "get landlock ABI", Err: err}
@@ -352,7 +354,7 @@ func (p *Container) Start() error {
 			}
 		} else {
 			p.msg.Verbosef("enforcing landlock ruleset %s", rulesetAttr)
-			if err = LandlockRestrictSelf(rulesetFd, 0); err != nil {
+			if err = landlock.RestrictSelf(rulesetFd, 0); err != nil {
 				_ = Close(rulesetFd)
 				return &StartError{
 					Fatal: true,
@@ -428,24 +430,33 @@ func (p *Container) Start() error {
 // Serve serves [Container.Params] to the container init.
 //
 // Serve must only be called once.
-func (p *Container) Serve() error {
-	if p.setup == nil {
+func (p *Container) Serve() (err error) {
+	if p.setup[0] == nil || p.setup[1] == nil {
 		panic("invalid serve")
 	}
-	setup := p.setup
-	p.setup = nil
-	if err := setup.SetDeadline(time.Now().Add(initSetupTimeout)); err != nil {
+	done := make(chan struct{})
+	defer func() {
+		if closeErr := p.setup[1].Close(); err == nil {
+			err = closeErr
+		}
+		if err != nil {
+			p.cancel()
+		}
+		close(done)
+		p.setup[0], p.setup[1] = nil, nil
+	}()
+	if err = p.setup[0].Close(); err != nil {
 		return &StartError{
 			Fatal:       true,
-			Step:        "set init pipe deadline",
+			Step:        "close read end of init pipe",
 			Err:         err,
 			Passthrough: true,
 		}
 	}
 	if p.Path == nil {
-		p.cancel()
 		return &StartError{
 			Step: "invalid executable pathname",
 			Err:  EINVAL,
@@ -461,18 +472,27 @@ func (p *Container) Serve() error {
 		p.SeccompRules = make([]std.NativeRule, 0)
 	}
-	err := gob.NewEncoder(setup).Encode(&initParams{
+	t := time.Now().UTC()
+	go func(f *os.File) {
+		select {
+		case <-p.ctx.Done():
+			if cancelErr := f.SetWriteDeadline(t); cancelErr != nil {
+				p.msg.Verbose(cancelErr)
+			}
+		case <-done:
+			p.msg.Verbose("setup payload took", time.Since(t))
+			return
+		}
+	}(p.setup[1])
+	return gob.NewEncoder(p.setup[1]).Encode(&initParams{
 		p.Params,
 		Getuid(),
 		Getgid(),
 		len(p.ExtraFiles),
 		p.msg.IsVerbose(),
 	})
-	_ = setup.Close()
-	if err != nil {
-		p.cancel()
-	}
-	return err
 }
 // Wait blocks until the container init process to exit and releases any


@@ -25,6 +25,9 @@ import (
 	"hakurei.app/ext"
 	"hakurei.app/fhs"
 	"hakurei.app/hst"
+	"hakurei.app/internal/info"
+	"hakurei.app/internal/landlock"
+	"hakurei.app/internal/params"
 	"hakurei.app/ldd"
 	"hakurei.app/message"
 	"hakurei.app/vfs"
@@ -83,9 +86,9 @@ func TestStartError(t *testing.T) {
 		{"params env", &container.StartError{
 			Fatal: true,
 			Step:  "set up params stream",
-			Err:   container.ErrReceiveEnv,
+			Err:   params.ErrReceiveEnv,
 		}, "set up params stream: environment variable not set",
-			container.ErrReceiveEnv, syscall.EBADF,
+			params.ErrReceiveEnv, syscall.EBADF,
 			"cannot set up params stream: environment variable not set"},
 		{"params", &container.StartError{
@@ -453,6 +456,15 @@ func TestContainer(t *testing.T) {
 			c.SeccompDisable = !tc.filter
 			c.RetainSession = tc.session
 			c.HostNet = tc.net
+			if info.CanDegrade {
+				if _, err := landlock.GetABI(); err != nil {
+					if !errors.Is(err, syscall.ENOSYS) {
+						t.Fatalf("LandlockGetABI: error = %v", err)
+					}
+					c.HostAbstract = true
+					t.Log("Landlock LSM is unavailable, enabling HostAbstract")
+				}
+			}
 			c.
 				Readonly(check.MustAbs(pathReadonly), 0755).


@@ -16,6 +16,7 @@ import (
 	"hakurei.app/container/std"
 	"hakurei.app/ext"
 	"hakurei.app/internal/netlink"
+	"hakurei.app/internal/params"
 	"hakurei.app/message"
 )
@@ -56,7 +57,7 @@ type syscallDispatcher interface {
 	// isatty provides [Isatty].
 	isatty(fd int) bool
 	// receive provides [Receive].
-	receive(key string, e any, fdp *uintptr) (closeFunc func() error, err error)
+	receive(key string, e any, fdp *int) (closeFunc func() error, err error)
 	// bindMount provides procPaths.bindMount.
 	bindMount(msg message.Msg, source, target string, flags uintptr) error
@@ -147,7 +148,7 @@ func (direct) lockOSThread() { runtime.LockOSThread() }
 func (direct) setPtracer(pid uintptr) error       { return ext.SetPtracer(pid) }
 func (direct) setDumpable(dumpable uintptr) error { return ext.SetDumpable(dumpable) }
-func (direct) setNoNewPrivs() error               { return SetNoNewPrivs() }
+func (direct) setNoNewPrivs() error               { return setNoNewPrivs() }
 func (direct) lastcap(msg message.Msg) uintptr    { return LastCap(msg) }
 func (direct) capset(hdrp *capHeader, datap *[2]capData) error { return capset(hdrp, datap) }
@@ -155,8 +156,8 @@ func (direct) capBoundingSetDrop(cap uintptr) error { return capBound
 func (direct) capAmbientClearAll() error         { return capAmbientClearAll() }
 func (direct) capAmbientRaise(cap uintptr) error { return capAmbientRaise(cap) }
 func (direct) isatty(fd int) bool                { return ext.Isatty(fd) }
-func (direct) receive(key string, e any, fdp *uintptr) (func() error, error) {
-	return Receive(key, e, fdp)
+func (direct) receive(key string, e any, fdp *int) (func() error, error) {
+	return params.Receive(key, e, fdp)
 }
 func (direct) bindMount(msg message.Msg, source, target string, flags uintptr) error {


@@ -390,7 +390,7 @@ func (k *kstub) isatty(fd int) bool {
 	return expect.Ret.(bool)
 }
-func (k *kstub) receive(key string, e any, fdp *uintptr) (closeFunc func() error, err error) {
+func (k *kstub) receive(key string, e any, fdp *int) (closeFunc func() error, err error) {
 	k.Helper()
 	expect := k.Expects("receive")
@@ -408,10 +408,17 @@ func (k *kstub) receive(key string, e any, fdp *uintptr) (closeFunc func() error
 		}
 		return nil
 	}
+	// avoid changing test cases
+	var fdpComp *uintptr
+	if fdp != nil {
+		fdpComp = new(uintptr(*fdp))
+	}
 	err = expect.Error(
 		stub.CheckArg(k.Stub, "key", key, 0),
 		stub.CheckArgReflect(k.Stub, "e", e, 1),
-		stub.CheckArgReflect(k.Stub, "fdp", fdp, 2))
+		stub.CheckArgReflect(k.Stub, "fdp", fdpComp, 2))
 	// 3 is unused so stores params
 	if expect.Args[3] != nil {
@@ -426,7 +433,7 @@ func (k *kstub) receive(key string, e any, fdp *uintptr) (closeFunc func() error
 	if expect.Args[4] != nil {
 		if v, ok := expect.Args[4].(uintptr); ok && v >= 3 {
 			if fdp != nil {
				*fdp = int(v)
 			}
 		}
 	}


@@ -19,6 +19,7 @@ import (
 	"hakurei.app/container/seccomp"
 	"hakurei.app/ext"
 	"hakurei.app/fhs"
+	"hakurei.app/internal/params"
 	"hakurei.app/message"
 )
@@ -147,35 +148,33 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	}
 	var (
-		params      initParams
+		param       initParams
 		closeSetup  func() error
-		setupFd     uintptr
-		offsetSetup int
+		setupFd     int
 	)
-	if f, err := k.receive(setupEnv, &params, &setupFd); err != nil {
+	if f, err := k.receive(setupEnv, &param, &setupFd); err != nil {
 		if errors.Is(err, EBADF) {
 			k.fatal(msg, "invalid setup descriptor")
 		}
-		if errors.Is(err, ErrReceiveEnv) {
+		if errors.Is(err, params.ErrReceiveEnv) {
 			k.fatal(msg, setupEnv+" not set")
 		}
 		k.fatalf(msg, "cannot decode init setup payload: %v", err)
 	} else {
-		if params.Ops == nil {
+		if param.Ops == nil {
 			k.fatal(msg, "invalid setup parameters")
 		}
-		if params.ParentPerm == 0 {
-			params.ParentPerm = 0755
+		if param.ParentPerm == 0 {
+			param.ParentPerm = 0755
 		}
-		msg.SwapVerbose(params.Verbose)
+		msg.SwapVerbose(param.Verbose)
 		msg.Verbose("received setup parameters")
 		closeSetup = f
-		offsetSetup = int(setupFd + 1)
 	}
-	if !params.HostNet {
+	if !param.HostNet {
 		ctx, cancel := signal.NotifyContext(context.Background(), CancelSignal,
 			os.Interrupt, SIGTERM, SIGQUIT)
 		defer cancel() // for panics
@@ -188,7 +187,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 		k.fatalf(msg, "cannot set SUID_DUMP_USER: %v", err)
 	}
 	if err := k.writeFile(fhs.Proc+"self/uid_map",
-		append([]byte{}, strconv.Itoa(params.Uid)+" "+strconv.Itoa(params.HostUid)+" 1\n"...),
+		append([]byte{}, strconv.Itoa(param.Uid)+" "+strconv.Itoa(param.HostUid)+" 1\n"...),
 		0); err != nil {
 		k.fatalf(msg, "%v", err)
 	}
@@ -198,7 +197,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 		k.fatalf(msg, "%v", err)
 	}
 	if err := k.writeFile(fhs.Proc+"self/gid_map",
-		append([]byte{}, strconv.Itoa(params.Gid)+" "+strconv.Itoa(params.HostGid)+" 1\n"...),
+		append([]byte{}, strconv.Itoa(param.Gid)+" "+strconv.Itoa(param.HostGid)+" 1\n"...),
 		0); err != nil {
 		k.fatalf(msg, "%v", err)
 	}
@@ -207,8 +206,8 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	}
 	oldmask := k.umask(0)
-	if params.Hostname != "" {
-		if err := k.sethostname([]byte(params.Hostname)); err != nil {
+	if param.Hostname != "" {
+		if err := k.sethostname([]byte(param.Hostname)); err != nil {
 			k.fatalf(msg, "cannot set hostname: %v", err)
 		}
 	}
@@ -221,7 +220,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	}
 	ctx, cancel := context.WithCancel(context.Background())
-	state := &setupState{process: make(map[int]WaitStatus), Params: &params.Params, Msg: msg, Context: ctx}
+	state := &setupState{process: make(map[int]WaitStatus), Params: &param.Params, Msg: msg, Context: ctx}
 	defer cancel()
 	/* early is called right before pivot_root into intermediate root;
@@ -229,7 +228,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	   difficult to obtain via library functions after pivot_root, and
 	   implementations are expected to avoid changing the state of the mount
 	   namespace */
-	for i, op := range *params.Ops {
+	for i, op := range *param.Ops {
 		if op == nil || !op.Valid() {
 			k.fatalf(msg, "invalid op at index %d", i)
 		}
@@ -272,7 +271,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	   step sets up the container filesystem, and implementations are expected to
 	   keep the host root and sysroot mount points intact but otherwise can do
 	   whatever they need to. Calling chdir is allowed but discouraged. */
-	for i, op := range *params.Ops {
+	for i, op := range *param.Ops {
 		// ops already checked during early setup
 		if prefix, ok := op.prefix(); ok {
 			msg.Verbosef("%s %s", prefix, op)
@@ -328,7 +327,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 		k.fatalf(msg, "cannot clear the ambient capability set: %v", err)
 	}
 	for i := uintptr(0); i <= lastcap; i++ {
-		if params.Privileged && i == CAP_SYS_ADMIN {
+		if param.Privileged && i == CAP_SYS_ADMIN {
 			continue
 		}
 		if err := k.capBoundingSetDrop(i); err != nil {
@@ -337,7 +336,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	}
 	var keep [2]uint32
-	if params.Privileged {
+	if param.Privileged {
 		keep[capToIndex(CAP_SYS_ADMIN)] |= capToMask(CAP_SYS_ADMIN)
 		if err := k.capAmbientRaise(CAP_SYS_ADMIN); err != nil {
@@ -351,13 +350,13 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 		k.fatalf(msg, "cannot capset: %v", err)
 	}
-	if !params.SeccompDisable {
-		rules := params.SeccompRules
+	if !param.SeccompDisable {
+		rules := param.SeccompRules
 		if len(rules) == 0 { // non-empty rules slice always overrides presets
-			msg.Verbosef("resolving presets %#x", params.SeccompPresets)
-			rules = seccomp.Preset(params.SeccompPresets, params.SeccompFlags)
+			msg.Verbosef("resolving presets %#x", param.SeccompPresets)
+			rules = seccomp.Preset(param.SeccompPresets, param.SeccompFlags)
 		}
-		if err := k.seccompLoad(rules, params.SeccompFlags); err != nil {
+		if err := k.seccompLoad(rules, param.SeccompFlags); err != nil {
 			// this also indirectly asserts PR_SET_NO_NEW_PRIVS
 			k.fatalf(msg, "cannot load syscall filter: %v", err)
 		}
@@ -366,10 +365,10 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 		msg.Verbose("syscall filter not configured")
 	}
-	extraFiles := make([]*os.File, params.Count)
+	extraFiles := make([]*os.File, param.Count)
 	for i := range extraFiles {
 		// setup fd is placed before all extra files
-		extraFiles[i] = k.newFile(uintptr(offsetSetup+i), "extra file "+strconv.Itoa(i))
+		extraFiles[i] = k.newFile(uintptr(setupFd+1+i), "extra file "+strconv.Itoa(i))
 	}
 	k.umask(oldmask)
@@ -447,7 +446,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	// called right before startup of initial process, all state changes to the
 	// current process is prohibited during late
-	for i, op := range *params.Ops {
+	for i, op := range *param.Ops {
 		// ops already checked during early setup
 		if err := op.late(state, k); err != nil {
 			if m, ok := messageFromError(err); ok {
@@ -468,14 +467,14 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 		k.fatalf(msg, "cannot close setup pipe: %v", err)
 	}
-	cmd := exec.Command(params.Path.String())
+	cmd := exec.Command(param.Path.String())
 	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
-	cmd.Args = params.Args
-	cmd.Env = params.Env
+	cmd.Args = param.Args
+	cmd.Env = param.Env
 	cmd.ExtraFiles = extraFiles
-	cmd.Dir = params.Dir.String()
-	msg.Verbosef("starting initial process %s", params.Path)
+	cmd.Dir = param.Dir.String()
+	msg.Verbosef("starting initial process %s", param.Path)
 	if err := k.start(cmd); err != nil {
 		k.fatalf(msg, "%v", err)
 	}
@@ -493,7 +492,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 	for {
 		select {
 		case s := <-sig:
-			if s == CancelSignal && params.ForwardCancel && cmd.Process != nil {
+			if s == CancelSignal && param.ForwardCancel && cmd.Process != nil {
 				msg.Verbose("forwarding context cancellation")
 				if err := k.signal(cmd, os.Interrupt); err != nil && !errors.Is(err, os.ErrProcessDone) {
 					k.printf(msg, "cannot forward cancellation: %v", err)
@@ -525,7 +524,7 @@ func initEntrypoint(k syscallDispatcher, msg message.Msg) {
 			cancel()
 			// start timeout early
-			go func() { time.Sleep(params.AdoptWaitDelay); close(timeout) }()
+			go func() { time.Sleep(param.AdoptWaitDelay); close(timeout) }()
 			// close initial process files; this also keeps them alive
 			for _, f := range extraFiles {


@@ -10,6 +10,7 @@ import (
 	"hakurei.app/check"
 	"hakurei.app/container/seccomp"
 	"hakurei.app/container/std"
+	"hakurei.app/internal/params"
 	"hakurei.app/internal/stub"
 )
@@ -40,7 +41,7 @@ func TestInitEntrypoint(t *testing.T) {
 			call("lockOSThread", stub.ExpectArgs{}, nil, nil),
 			call("getpid", stub.ExpectArgs{}, 1, nil),
 			call("setPtracer", stub.ExpectArgs{uintptr(0)}, nil, nil),
-			call("receive", stub.ExpectArgs{"HAKUREI_SETUP", new(initParams), new(uintptr)}, nil, ErrReceiveEnv),
+			call("receive", stub.ExpectArgs{"HAKUREI_SETUP", new(initParams), new(uintptr)}, nil, params.ErrReceiveEnv),
 			call("fatal", stub.ExpectArgs{[]any{"HAKUREI_SETUP not set"}}, nil, nil),
 		},
 	}, nil},


@@ -1,65 +0,0 @@
package container_test
import (
"testing"
"unsafe"
"hakurei.app/container"
)
func TestLandlockString(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
rulesetAttr *container.RulesetAttr
want string
}{
{"nil", nil, "NULL"},
{"zero", new(container.RulesetAttr), "0"},
{"some", &container.RulesetAttr{Scoped: container.LANDLOCK_SCOPE_SIGNAL}, "scoped: signal"},
{"set", &container.RulesetAttr{
HandledAccessFS: container.LANDLOCK_ACCESS_FS_MAKE_SYM | container.LANDLOCK_ACCESS_FS_IOCTL_DEV | container.LANDLOCK_ACCESS_FS_WRITE_FILE,
HandledAccessNet: container.LANDLOCK_ACCESS_NET_BIND_TCP,
Scoped: container.LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET | container.LANDLOCK_SCOPE_SIGNAL,
}, "fs: write_file make_sym fs_ioctl_dev, net: bind_tcp, scoped: abstract_unix_socket signal"},
{"all", &container.RulesetAttr{
HandledAccessFS: container.LANDLOCK_ACCESS_FS_EXECUTE |
container.LANDLOCK_ACCESS_FS_WRITE_FILE |
container.LANDLOCK_ACCESS_FS_READ_FILE |
container.LANDLOCK_ACCESS_FS_READ_DIR |
container.LANDLOCK_ACCESS_FS_REMOVE_DIR |
container.LANDLOCK_ACCESS_FS_REMOVE_FILE |
container.LANDLOCK_ACCESS_FS_MAKE_CHAR |
container.LANDLOCK_ACCESS_FS_MAKE_DIR |
container.LANDLOCK_ACCESS_FS_MAKE_REG |
container.LANDLOCK_ACCESS_FS_MAKE_SOCK |
container.LANDLOCK_ACCESS_FS_MAKE_FIFO |
container.LANDLOCK_ACCESS_FS_MAKE_BLOCK |
container.LANDLOCK_ACCESS_FS_MAKE_SYM |
container.LANDLOCK_ACCESS_FS_REFER |
container.LANDLOCK_ACCESS_FS_TRUNCATE |
container.LANDLOCK_ACCESS_FS_IOCTL_DEV,
HandledAccessNet: container.LANDLOCK_ACCESS_NET_BIND_TCP |
container.LANDLOCK_ACCESS_NET_CONNECT_TCP,
Scoped: container.LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET |
container.LANDLOCK_SCOPE_SIGNAL,
}, "fs: execute write_file read_file read_dir remove_dir remove_file make_char make_dir make_reg make_sock make_fifo make_block make_sym fs_refer fs_truncate fs_ioctl_dev, net: bind_tcp connect_tcp, scoped: abstract_unix_socket signal"},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := tc.rulesetAttr.String(); got != tc.want {
t.Errorf("String: %s, want %s", got, tc.want)
}
})
}
}
func TestLandlockAttrSize(t *testing.T) {
t.Parallel()
want := 24
if got := unsafe.Sizeof(container.RulesetAttr{}); got != uintptr(want) {
t.Errorf("Sizeof: %d, want %d", got, want)
}
}


@@ -1,47 +0,0 @@
package container
import (
"encoding/gob"
"errors"
"os"
"strconv"
"syscall"
)
// Setup appends the read end of a pipe for setup params transmission and returns its fd.
func Setup(extraFiles *[]*os.File) (int, *os.File, error) {
if r, w, err := os.Pipe(); err != nil {
return -1, nil, err
} else {
fd := 3 + len(*extraFiles)
*extraFiles = append(*extraFiles, r)
return fd, w, nil
}
}
var (
ErrReceiveEnv = errors.New("environment variable not set")
)
// Receive retrieves setup fd from the environment and receives params.
func Receive(key string, e any, fdp *uintptr) (func() error, error) {
var setup *os.File
if s, ok := os.LookupEnv(key); !ok {
return nil, ErrReceiveEnv
} else {
if fd, err := strconv.Atoi(s); err != nil {
return nil, optionalErrorUnwrap(err)
} else {
setup = os.NewFile(uintptr(fd), "setup")
if setup == nil {
return nil, syscall.EDOM
}
if fdp != nil {
*fdp = setup.Fd()
}
}
}
return setup.Close, gob.NewDecoder(setup).Decode(e)
}
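The deleted Setup/Receive pair above moved setup params over a pipe whose fd number travels through an environment variable into the child. A condensed in-process sketch of the gob-over-pipe half follows; the `setupParams` struct and `roundTrip` helper are illustrative, not from the codebase.

```go
package main

import (
	"encoding/gob"
	"fmt"
	"os"
)

type setupParams struct{ Uid, Gid int }

// roundTrip gob-encodes in on the write end of a pipe and decodes it from the
// read end, the way the parent feeds initParams to the container init. In the
// real code the read end is inherited across exec and located via an fd
// number published in an environment variable rather than shared in-process.
func roundTrip(in, out any) error {
	r, w, err := os.Pipe()
	if err != nil {
		return err
	}
	defer r.Close()
	go func() {
		_ = gob.NewEncoder(w).Encode(in)
		_ = w.Close() // EOF lets the decoder finish
	}()
	return gob.NewDecoder(r).Decode(out)
}

func main() {
	var got setupParams
	if err := roundTrip(&setupParams{Uid: 1000, Gid: 100}, &got); err != nil {
		panic(err)
	}
	fmt.Println(got.Uid, got.Gid)
}
```

Closing the write end after encoding matters: the decoder reads until it has a complete value, and a leaked write end in the parent would keep the child's read blocked.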


@@ -7,8 +7,8 @@ import (
 	"hakurei.app/ext"
 )
-// SetNoNewPrivs sets the calling thread's no_new_privs attribute.
-func SetNoNewPrivs() error {
+// setNoNewPrivs sets the calling thread's no_new_privs attribute.
+func setNoNewPrivs() error {
 	return ext.Prctl(PR_SET_NO_NEW_PRIVS, 1, 0)
 }

dist/hsurc.default vendored

@@ -1 +0,0 @@
1000 0

dist/install.sh vendored

@@ -1,12 +0,0 @@
#!/bin/sh
cd "$(dirname -- "$0")" || exit 1
install -vDm0755 "bin/hakurei" "${DESTDIR}/usr/bin/hakurei"
install -vDm0755 "bin/sharefs" "${DESTDIR}/usr/bin/sharefs"
install -vDm4511 "bin/hsu" "${DESTDIR}/usr/bin/hsu"
if [ ! -f "${DESTDIR}/etc/hsurc" ]; then
install -vDm0400 "hsurc.default" "${DESTDIR}/etc/hsurc"
fi
install -vDm0644 "comp/_hakurei" "${DESTDIR}/usr/share/zsh/site-functions/_hakurei"

dist/release.sh vendored

@@ -1,37 +0,0 @@
#!/bin/sh -e
cd "$(dirname -- "$0")/.."
VERSION="${HAKUREI_VERSION:-untagged}"
prefix="${PREFIX:-/usr}"
pname="hakurei-${VERSION}-$(go env GOARCH)"
out="${DESTDIR:-dist}/${pname}"
destroy_workdir() {
	rm -rf "${out}"
}
trap destroy_workdir EXIT
echo '# Preparing distribution files.'
mkdir -p "${out}"
cp -v "README.md" "dist/hsurc.default" "dist/install.sh" "${out}"
cp -rv "dist/comp" "${out}"
echo
echo '# Building hakurei.'
go generate ./...
go build -trimpath -v -o "${out}/bin/" -ldflags "-s -w
-buildid= -linkmode external -extldflags=-static
-X hakurei.app/internal/info.buildVersion=${VERSION}
-X hakurei.app/internal/info.hakureiPath=${prefix}/bin/hakurei
-X hakurei.app/internal/info.hsuPath=${prefix}/bin/hsu
-X main.hakureiPath=${prefix}/bin/hakurei" ./...
echo
echo '# Testing hakurei.'
go test -ldflags='-buildid= -linkmode external -extldflags=-static' ./...
echo
echo '# Creating distribution.'
rm -f "${out}.tar.gz" && tar -C "${out}/.." -vczf "${out}.tar.gz" "${pname}"
rm -rf "${out}"
(cd "${out}/.." && sha512sum "${pname}.tar.gz" > "${pname}.tar.gz.sha512")
echo


@@ -137,11 +137,10 @@
           CC="musl-clang -O3 -Werror -Qunused-arguments" \
           GOCACHE="$(mktemp -d)" \
-          HAKUREI_TEST_SKIP_ACL=1 \
           PATH="${pkgs.pkgsStatic.musl.bin}/bin:$PATH" \
           DESTDIR="$out" \
           HAKUREI_VERSION="v${hakurei.version}" \
-          ./dist/release.sh
+          ./all.sh
       '';
     }
   );
@@ -196,6 +195,7 @@
         ./test/interactive/vm.nix
         ./test/interactive/hakurei.nix
         ./test/interactive/trace.nix
+        ./test/interactive/raceattr.nix
         self.nixosModules.hakurei
         home-manager.nixosModules.home-manager


@@ -140,21 +140,29 @@ var (
 	ErrInsecure = errors.New("configuration is insecure")
 )

+const (
+	// VAllowInsecure allows use of compatibility options considered insecure
+	// under any configuration, to work around ecosystem-wide flaws.
+	VAllowInsecure = 1 << iota
+)
+
 // Validate checks [Config] and returns [AppError] if an invalid value is encountered.
-func (config *Config) Validate() error {
+func (config *Config) Validate(flags int) error {
+	const step = "validate configuration"
+
 	if config == nil {
-		return &AppError{Step: "validate configuration", Err: ErrConfigNull,
+		return &AppError{Step: step, Err: ErrConfigNull,
 			Msg: "invalid configuration"}
 	}

 	// this is checked again in hsu
 	if config.Identity < IdentityStart || config.Identity > IdentityEnd {
-		return &AppError{Step: "validate configuration", Err: ErrIdentityBounds,
+		return &AppError{Step: step, Err: ErrIdentityBounds,
 			Msg: "identity " + strconv.Itoa(config.Identity) + " out of range"}
 	}
 	if config.SchedPolicy < 0 || config.SchedPolicy > ext.SCHED_LAST {
-		return &AppError{Step: "validate configuration", Err: ErrSchedPolicyBounds,
+		return &AppError{Step: step, Err: ErrSchedPolicyBounds,
 			Msg: "scheduling policy " +
 				strconv.Itoa(int(config.SchedPolicy)) +
 				" out of range"}
@@ -168,34 +176,51 @@ func (config *Config) Validate() error {
 	}
 	if config.Container == nil {
-		return &AppError{Step: "validate configuration", Err: ErrConfigNull,
+		return &AppError{Step: step, Err: ErrConfigNull,
 			Msg: "configuration missing container state"}
 	}
 	if config.Container.Home == nil {
-		return &AppError{Step: "validate configuration", Err: ErrConfigNull,
+		return &AppError{Step: step, Err: ErrConfigNull,
 			Msg: "container configuration missing path to home directory"}
 	}
 	if config.Container.Shell == nil {
-		return &AppError{Step: "validate configuration", Err: ErrConfigNull,
+		return &AppError{Step: step, Err: ErrConfigNull,
 			Msg: "container configuration missing path to shell"}
 	}
 	if config.Container.Path == nil {
-		return &AppError{Step: "validate configuration", Err: ErrConfigNull,
+		return &AppError{Step: step, Err: ErrConfigNull,
 			Msg: "container configuration missing path to initial program"}
 	}
 	for key := range config.Container.Env {
 		if strings.IndexByte(key, '=') != -1 || strings.IndexByte(key, 0) != -1 {
-			return &AppError{Step: "validate configuration", Err: ErrEnviron,
+			return &AppError{Step: step, Err: ErrEnviron,
 				Msg: "invalid environment variable " + strconv.Quote(key)}
 		}
 	}
-	if et := config.Enablements.Unwrap(); !config.DirectPulse && et&EPulse != 0 {
-		return &AppError{Step: "validate configuration", Err: ErrInsecure,
+	et := config.Enablements.Unwrap()
+	if !config.DirectPulse && et&EPulse != 0 {
+		return &AppError{Step: step, Err: ErrInsecure,
 			Msg: "enablement PulseAudio is insecure and no longer supported"}
 	}
+	if flags&VAllowInsecure == 0 {
+		switch {
+		case et&EWayland != 0 && config.DirectWayland:
+			return &AppError{Step: step, Err: ErrInsecure,
+				Msg: "direct_wayland is insecure and no longer supported"}
+
+		case et&EPipeWire != 0 && config.DirectPipeWire:
+			return &AppError{Step: step, Err: ErrInsecure,
+				Msg: "direct_pipewire is insecure and no longer supported"}
+
+		case et&EPulse != 0 && config.DirectPulse:
+			return &AppError{Step: step, Err: ErrInsecure,
+				Msg: "direct_pulse is insecure and no longer supported"}
+		}
+	}
 	return nil
 }


@@ -14,65 +14,109 @@ func TestConfigValidate(t *testing.T) {
 	testCases := []struct {
 		name    string
 		config  *hst.Config
+		flags   int
 		wantErr error
 	}{
-		{"nil", nil, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
+		{"nil", nil, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
 			Msg: "invalid configuration"}},
-		{"identity lower", &hst.Config{Identity: -1}, &hst.AppError{Step: "validate configuration", Err: hst.ErrIdentityBounds,
+		{"identity lower", &hst.Config{Identity: -1}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrIdentityBounds,
 			Msg: "identity -1 out of range"}},
-		{"identity upper", &hst.Config{Identity: 10000}, &hst.AppError{Step: "validate configuration", Err: hst.ErrIdentityBounds,
+		{"identity upper", &hst.Config{Identity: 10000}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrIdentityBounds,
 			Msg: "identity 10000 out of range"}},
-		{"sched lower", &hst.Config{SchedPolicy: -1}, &hst.AppError{Step: "validate configuration", Err: hst.ErrSchedPolicyBounds,
+		{"sched lower", &hst.Config{SchedPolicy: -1}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrSchedPolicyBounds,
 			Msg: "scheduling policy -1 out of range"}},
-		{"sched upper", &hst.Config{SchedPolicy: 0xcafe}, &hst.AppError{Step: "validate configuration", Err: hst.ErrSchedPolicyBounds,
+		{"sched upper", &hst.Config{SchedPolicy: 0xcafe}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrSchedPolicyBounds,
 			Msg: "scheduling policy 51966 out of range"}},
-		{"dbus session", &hst.Config{SessionBus: &hst.BusConfig{See: []string{""}}},
+		{"dbus session", &hst.Config{SessionBus: &hst.BusConfig{See: []string{""}}}, 0,
 			&hst.BadInterfaceError{Interface: "", Segment: "session"}},
-		{"dbus system", &hst.Config{SystemBus: &hst.BusConfig{See: []string{""}}},
+		{"dbus system", &hst.Config{SystemBus: &hst.BusConfig{See: []string{""}}}, 0,
 			&hst.BadInterfaceError{Interface: "", Segment: "system"}},
-		{"container", &hst.Config{}, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
+		{"container", &hst.Config{}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
 			Msg: "configuration missing container state"}},
-		{"home", &hst.Config{Container: &hst.ContainerConfig{}}, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
+		{"home", &hst.Config{Container: &hst.ContainerConfig{}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
 			Msg: "container configuration missing path to home directory"}},
 		{"shell", &hst.Config{Container: &hst.ContainerConfig{
 			Home: fhs.AbsTmp,
-		}}, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
 			Msg: "container configuration missing path to shell"}},
 		{"path", &hst.Config{Container: &hst.ContainerConfig{
 			Home:  fhs.AbsTmp,
 			Shell: fhs.AbsTmp,
-		}}, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrConfigNull,
 			Msg: "container configuration missing path to initial program"}},
 		{"env equals", &hst.Config{Container: &hst.ContainerConfig{
 			Home:  fhs.AbsTmp,
 			Shell: fhs.AbsTmp,
 			Path:  fhs.AbsTmp,
 			Env:   map[string]string{"TERM=": ""},
-		}}, &hst.AppError{Step: "validate configuration", Err: hst.ErrEnviron,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrEnviron,
 			Msg: `invalid environment variable "TERM="`}},
 		{"env NUL", &hst.Config{Container: &hst.ContainerConfig{
 			Home:  fhs.AbsTmp,
 			Shell: fhs.AbsTmp,
 			Path:  fhs.AbsTmp,
 			Env:   map[string]string{"TERM\x00": ""},
-		}}, &hst.AppError{Step: "validate configuration", Err: hst.ErrEnviron,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrEnviron,
 			Msg: `invalid environment variable "TERM\x00"`}},
-		{"insecure pulse", &hst.Config{Enablements: hst.NewEnablements(hst.EPulse), Container: &hst.ContainerConfig{
+		{"insecure pulse", &hst.Config{Enablements: new(hst.EPulse), Container: &hst.ContainerConfig{
 			Home:  fhs.AbsTmp,
 			Shell: fhs.AbsTmp,
 			Path:  fhs.AbsTmp,
-		}}, &hst.AppError{Step: "validate configuration", Err: hst.ErrInsecure,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrInsecure,
 			Msg: "enablement PulseAudio is insecure and no longer supported"}},
+		{"direct wayland", &hst.Config{Enablements: new(hst.EWayland), DirectWayland: true, Container: &hst.ContainerConfig{
+			Home:  fhs.AbsTmp,
+			Shell: fhs.AbsTmp,
+			Path:  fhs.AbsTmp,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrInsecure,
+			Msg: "direct_wayland is insecure and no longer supported"}},
+		{"direct wayland allow", &hst.Config{Enablements: new(hst.EWayland), DirectWayland: true, Container: &hst.ContainerConfig{
+			Home:  fhs.AbsTmp,
+			Shell: fhs.AbsTmp,
+			Path:  fhs.AbsTmp,
+		}}, hst.VAllowInsecure, nil},
+		{"direct pipewire", &hst.Config{Enablements: new(hst.EPipeWire), DirectPipeWire: true, Container: &hst.ContainerConfig{
+			Home:  fhs.AbsTmp,
+			Shell: fhs.AbsTmp,
+			Path:  fhs.AbsTmp,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrInsecure,
+			Msg: "direct_pipewire is insecure and no longer supported"}},
+		{"direct pipewire allow", &hst.Config{Enablements: new(hst.EPipeWire), DirectPipeWire: true, Container: &hst.ContainerConfig{
+			Home:  fhs.AbsTmp,
+			Shell: fhs.AbsTmp,
+			Path:  fhs.AbsTmp,
+		}}, hst.VAllowInsecure, nil},
+		{"direct pulse", &hst.Config{Enablements: new(hst.EPulse), DirectPulse: true, Container: &hst.ContainerConfig{
+			Home:  fhs.AbsTmp,
+			Shell: fhs.AbsTmp,
+			Path:  fhs.AbsTmp,
+		}}, 0, &hst.AppError{Step: "validate configuration", Err: hst.ErrInsecure,
+			Msg: "direct_pulse is insecure and no longer supported"}},
+		{"direct pulse allow", &hst.Config{Enablements: new(hst.EPulse), DirectPulse: true, Container: &hst.ContainerConfig{
+			Home:  fhs.AbsTmp,
+			Shell: fhs.AbsTmp,
+			Path:  fhs.AbsTmp,
+		}}, hst.VAllowInsecure, nil},
 		{"valid", &hst.Config{Container: &hst.ContainerConfig{
 			Home:  fhs.AbsTmp,
 			Shell: fhs.AbsTmp,
 			Path:  fhs.AbsTmp,
-		}}, nil},
+		}}, 0, nil},
 	}
 	for _, tc := range testCases {
 		t.Run(tc.name, func(t *testing.T) {
 			t.Parallel()
-			if err := tc.config.Validate(); !reflect.DeepEqual(err, tc.wantErr) {
+			if err := tc.config.Validate(tc.flags); !reflect.DeepEqual(err, tc.wantErr) {
 				t.Errorf("Validate: error = %#v, want %#v", err, tc.wantErr)
 			}
 		})


@@ -7,12 +7,12 @@ import (
 	"syscall"
 )

-// Enablement represents an optional host service to export to the target user.
-type Enablement byte
+// Enablements denotes optional host service to export to the target user.
+type Enablements byte

 const (
 	// EWayland exposes a Wayland pathname socket via security-context-v1.
-	EWayland Enablement = 1 << iota
+	EWayland Enablements = 1 << iota

 	// EX11 adds the target user via X11 ChangeHosts and exposes the X11
 	// pathname socket.
 	EX11
@@ -28,8 +28,8 @@ const (
 	EM
 )

-// String returns a string representation of the flags set on [Enablement].
-func (e Enablement) String() string {
+// String returns a string representation of the flags set on [Enablements].
+func (e Enablements) String() string {
 	switch e {
 	case 0:
 		return "(no enablements)"
@@ -47,7 +47,7 @@ func (e Enablement) String() string {
 		buf := new(strings.Builder)
 		buf.Grow(32)
-		for i := Enablement(1); i < EM; i <<= 1 {
+		for i := Enablements(1); i < EM; i <<= 1 {
 			if e&i != 0 {
 				buf.WriteString(", " + i.String())
 			}
@@ -60,12 +60,6 @@ func (e Enablement) String() string {
 	}
 }

-// NewEnablements returns the address of [Enablement] as [Enablements].
-func NewEnablements(e Enablement) *Enablements { return (*Enablements)(&e) }
-
-// Enablements is the [json] adapter for [Enablement].
-type Enablements Enablement
-
 // enablementsJSON is the [json] representation of [Enablements].
 type enablementsJSON = struct {
 	Wayland bool `json:"wayland,omitempty"`
@@ -75,24 +69,21 @@ type enablementsJSON = struct {
 	Pulse bool `json:"pulse,omitempty"`
 }

-// Unwrap returns the underlying [Enablement].
-func (e *Enablements) Unwrap() Enablement {
+// Unwrap returns the value pointed to by e.
+func (e *Enablements) Unwrap() Enablements {
 	if e == nil {
 		return 0
 	}
-	return Enablement(*e)
+	return *e
 }

-func (e *Enablements) MarshalJSON() ([]byte, error) {
-	if e == nil {
-		return nil, syscall.EINVAL
-	}
+func (e Enablements) MarshalJSON() ([]byte, error) {
 	return json.Marshal(&enablementsJSON{
-		Wayland:  Enablement(*e)&EWayland != 0,
-		X11:      Enablement(*e)&EX11 != 0,
-		DBus:     Enablement(*e)&EDBus != 0,
-		PipeWire: Enablement(*e)&EPipeWire != 0,
-		Pulse:    Enablement(*e)&EPulse != 0,
+		Wayland:  e&EWayland != 0,
+		X11:      e&EX11 != 0,
+		DBus:     e&EDBus != 0,
+		PipeWire: e&EPipeWire != 0,
+		Pulse:    e&EPulse != 0,
 	})
 }
@@ -106,22 +97,21 @@ func (e *Enablements) UnmarshalJSON(data []byte) error {
 		return err
 	}

-	var ve Enablement
+	*e = 0
 	if v.Wayland {
-		ve |= EWayland
+		*e |= EWayland
 	}
 	if v.X11 {
-		ve |= EX11
+		*e |= EX11
 	}
 	if v.DBus {
-		ve |= EDBus
+		*e |= EDBus
 	}
 	if v.PipeWire {
-		ve |= EPipeWire
+		*e |= EPipeWire
 	}
 	if v.Pulse {
-		ve |= EPulse
+		*e |= EPulse
 	}
-	*e = Enablements(ve)
 	return nil
 }


@@ -13,7 +13,7 @@ func TestEnablementString(t *testing.T) {
 	t.Parallel()

 	testCases := []struct {
-		flags hst.Enablement
+		flags hst.Enablements
 		want  string
 	}{
 		{0, "(no enablements)"},
@@ -59,13 +59,13 @@ func TestEnablements(t *testing.T) {
 		sData string
 	}{
 		{"nil", nil, "null", `{"value":null,"magic":3236757504}`},
-		{"zero", hst.NewEnablements(0), `{}`, `{"value":{},"magic":3236757504}`},
-		{"wayland", hst.NewEnablements(hst.EWayland), `{"wayland":true}`, `{"value":{"wayland":true},"magic":3236757504}`},
-		{"x11", hst.NewEnablements(hst.EX11), `{"x11":true}`, `{"value":{"x11":true},"magic":3236757504}`},
-		{"dbus", hst.NewEnablements(hst.EDBus), `{"dbus":true}`, `{"value":{"dbus":true},"magic":3236757504}`},
-		{"pipewire", hst.NewEnablements(hst.EPipeWire), `{"pipewire":true}`, `{"value":{"pipewire":true},"magic":3236757504}`},
-		{"pulse", hst.NewEnablements(hst.EPulse), `{"pulse":true}`, `{"value":{"pulse":true},"magic":3236757504}`},
-		{"all", hst.NewEnablements(hst.EM - 1), `{"wayland":true,"x11":true,"dbus":true,"pipewire":true,"pulse":true}`, `{"value":{"wayland":true,"x11":true,"dbus":true,"pipewire":true,"pulse":true},"magic":3236757504}`},
+		{"zero", new(hst.Enablements(0)), `{}`, `{"value":{},"magic":3236757504}`},
+		{"wayland", new(hst.EWayland), `{"wayland":true}`, `{"value":{"wayland":true},"magic":3236757504}`},
+		{"x11", new(hst.EX11), `{"x11":true}`, `{"value":{"x11":true},"magic":3236757504}`},
+		{"dbus", new(hst.EDBus), `{"dbus":true}`, `{"value":{"dbus":true},"magic":3236757504}`},
+		{"pipewire", new(hst.EPipeWire), `{"pipewire":true}`, `{"value":{"pipewire":true},"magic":3236757504}`},
+		{"pulse", new(hst.EPulse), `{"pulse":true}`, `{"value":{"pulse":true},"magic":3236757504}`},
+		{"all", new(hst.EM - 1), `{"wayland":true,"x11":true,"dbus":true,"pipewire":true,"pulse":true}`, `{"value":{"wayland":true,"x11":true,"dbus":true,"pipewire":true,"pulse":true},"magic":3236757504}`},
 	}
 	for _, tc := range testCases {
@@ -137,7 +137,7 @@ func TestEnablements(t *testing.T) {
 	})
 	t.Run("val", func(t *testing.T) {
-		if got := hst.NewEnablements(hst.EWayland | hst.EPulse).Unwrap(); got != hst.EWayland|hst.EPulse {
+		if got := new(hst.EWayland | hst.EPulse).Unwrap(); got != hst.EWayland|hst.EPulse {
 			t.Errorf("Unwrap: %v", got)
 		}
 	})
@@ -146,9 +146,6 @@ func TestEnablements(t *testing.T) {
 	t.Run("passthrough", func(t *testing.T) {
 		t.Parallel()

-		if _, err := (*hst.Enablements)(nil).MarshalJSON(); !errors.Is(err, syscall.EINVAL) {
-			t.Errorf("MarshalJSON: error = %v", err)
-		}
 		if err := (*hst.Enablements)(nil).UnmarshalJSON(nil); !errors.Is(err, syscall.EINVAL) {
 			t.Errorf("UnmarshalJSON: error = %v", err)
 		}


@@ -72,7 +72,7 @@ func Template() *Config {
 	return &Config{
 		ID: "org.chromium.Chromium",

-		Enablements: NewEnablements(EWayland | EDBus | EPipeWire),
+		Enablements: new(EWayland | EDBus | EPipeWire),

 		SessionBus: &BusConfig{
 			See: nil,

@@ -11,9 +11,11 @@ import (
 	"path/filepath"
 	"reflect"
 	"strconv"
+	"syscall"
 	"testing"

 	"hakurei.app/internal/acl"
+	"hakurei.app/internal/info"
 )

 const testFileName = "acl.test"
@@ -24,8 +26,14 @@ var (
 )

 func TestUpdate(t *testing.T) {
-	if os.Getenv("HAKUREI_TEST_SKIP_ACL") == "1" {
-		t.Skip("acl test skipped")
+	if info.CanDegrade {
+		name := filepath.Join(t.TempDir(), "check-degrade")
+		if err := os.WriteFile(name, nil, 0); err != nil {
+			t.Fatal(err)
+		}
+		if err := acl.Update(name, os.Geteuid()); errors.Is(err, syscall.ENOTSUP) {
+			t.Skip(err)
+		}
 	}

 	testFilePath := filepath.Join(t.TempDir(), testFileName)


@@ -0,0 +1,7 @@
//go:build !noskip

package info

// CanDegrade is whether tests are allowed to transparently degrade or skip due
// to required system features being denied or unavailable.
const CanDegrade = true


@@ -0,0 +1,5 @@
//go:build noskip

package info

const CanDegrade = false

internal/kobject/event.go

@@ -0,0 +1,90 @@
package kobject

import (
	"errors"
	"strconv"
	"strings"
	"unsafe"

	"hakurei.app/internal/uevent"
)

// Event is a [uevent.Message] with known environment variables processed.
type Event struct {
	// alloc_uevent_skb: action_string
	Action uevent.KobjectAction `json:"action"`
	// alloc_uevent_skb: devpath
	DevPath string `json:"devpath"`

	// Uninterpreted environment variable pairs. An entry missing a separator
	// gains the value "\x00".
	Env map[string]string `json:"env"`

	// SEQNUM value set by the kernel.
	Sequence uint64 `json:"seqnum"`
	// SYNTH_UUID value set on trigger, nil denotes a non-synthetic event.
	Synth *uevent.UUID `json:"synth_uuid,omitempty"`
	// SUBSYSTEM value set by the kernel.
	Subsystem string `json:"subsystem"`
}

// Populate populates e with the contents of a [uevent.Message].
//
// The ACTION and DEVPATH environment variables are ignored and assumed to be
// consistent with the header.
func (e *Event) Populate(reportErr func(error), m *uevent.Message) {
	if reportErr == nil {
		reportErr = func(error) {}
	}

	*e = Event{
		Action:  m.Action,
		DevPath: m.DevPath,
		Env:     make(map[string]string),
	}
	for _, s := range m.Env {
		k, v, ok := strings.Cut(s, "=")
		if !ok {
			if _, ok = e.Env[s]; !ok {
				e.Env[s] = "\x00"
			}
			continue
		}

		switch k {
		case "ACTION", "DEVPATH":
			continue

		case "SEQNUM":
			seq, err := strconv.ParseUint(v, 10, 64)
			if err != nil {
				if _e := errors.Unwrap(err); _e != nil {
					err = _e
				}
				reportErr(err)
				e.Env[k] = v
				continue
			}
			e.Sequence = seq

		case "SYNTH_UUID":
			var uuid uevent.UUID
			err := uuid.UnmarshalText(unsafe.Slice(unsafe.StringData(v), len(v)))
			if err != nil {
				reportErr(err)
				e.Env[k] = v
				continue
			}
			e.Synth = &uuid

		case "SUBSYSTEM":
			e.Subsystem = v

		default:
			e.Env[k] = v
		}
	}
}
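The uninterpreted-pair handling in Populate above can be isolated into a small helper: split each entry on the first '=', and give entries with no separator the sentinel value "\x00", first occurrence winning. A stdlib-only sketch with a hypothetical parseEnv helper:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnv mimics the Populate loop for uninterpreted pairs: KEY=VALUE
// entries are split on the first '=', and an entry with no separator keeps
// the sentinel value "\x00" so it remains distinguishable from "KEY=".
func parseEnv(env []string) map[string]string {
	out := make(map[string]string)
	for _, s := range env {
		k, v, ok := strings.Cut(s, "=")
		if !ok {
			// first occurrence wins for separator-less entries
			if _, dup := out[s]; !dup {
				out[s] = "\x00"
			}
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	m := parseEnv([]string{"MODALIAS=acpi:LNXPWRBN:", "SUBSYSTEM=acpi", "DEVNAME"})
	fmt.Println(len(m), m["MODALIAS"], m["DEVNAME"] == "\x00")
}
```

The "\x00" sentinel works because a NUL byte can never appear in a real uevent value, so consumers can tell "key present without separator" apart from "key set to the empty string".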


@@ -0,0 +1,92 @@
package kobject_test

import (
	"reflect"
	"strconv"
	"testing"

	"hakurei.app/internal/kobject"
	"hakurei.app/internal/uevent"
)

func TestEvent(t *testing.T) {
	t.Parallel()

	testCases := []struct {
		name string
		msg  uevent.Message
		want kobject.Event
		errs []error
	}{
		{"sample coldboot qemu", uevent.Message{
			Action:  uevent.KOBJ_ADD,
			DevPath: "/devices/LNXSYSTM:00/LNXPWRBN:00",
			Env: []string{
				"ACTION=add",
				"DEVPATH=/devices/LNXSYSTM:00/LNXPWRBN:00",
				"SUBSYSTEM=acpi",
				"SYNTH_UUID=fe4d7c9d-b8c6-4a70-9ef1-3d8a58d18eed",
				"MODALIAS=acpi:LNXPWRBN:",
				"SEQNUM=777",
			}}, kobject.Event{
			Action:  uevent.KOBJ_ADD,
			DevPath: "/devices/LNXSYSTM:00/LNXPWRBN:00",
			Env: map[string]string{
				"MODALIAS": "acpi:LNXPWRBN:",
			},
			Sequence: 777,
			Synth: &uevent.UUID{
				0xfe, 0x4d, 0x7c, 0x9d,
				0xb8, 0xc6,
				0x4a, 0x70,
				0x9e, 0xf1,
				0x3d, 0x8a, 0x58, 0xd1, 0x8e, 0xed,
			},
			Subsystem: "acpi",
		}, []error{}},

		{"nil reportErr", uevent.Message{Env: []string{
			"SEQNUM=\x00",
		}}, kobject.Event{Env: map[string]string{
			"SEQNUM": "\x00",
		}}, nil},

		{"bad SEQNUM SYNTH_UUID", uevent.Message{Env: []string{
			"SEQNUM=\x00",
			"SYNTH_UUID=\x00",
			"SUBSYSTEM=\x00",
		}}, kobject.Event{Subsystem: "\x00", Env: map[string]string{
			"SEQNUM":     "\x00",
			"SYNTH_UUID": "\x00",
		}}, []error{strconv.ErrSyntax, uevent.UUIDSizeError(1)}},

		{"bad sep", uevent.Message{Env: []string{
			"SYNTH_UUID",
		}}, kobject.Event{Env: map[string]string{
			"SYNTH_UUID": "\x00",
		}}, []error{}},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()

			var f func(error)
			gotErrs := make([]error, 0)
			if tc.errs != nil {
				f = func(err error) {
					gotErrs = append(gotErrs, err)
				}
			}

			var got kobject.Event
			got.Populate(f, &tc.msg)
			if !reflect.DeepEqual(&got, &tc.want) {
				t.Errorf("Populate: %#v, want %#v", got, tc.want)
			}
			if tc.errs != nil && !reflect.DeepEqual(gotErrs, tc.errs) {
				t.Errorf("Populate: errs = %v, want %v", gotErrs, tc.errs)
			}
		})
	}
}


@@ -1,4 +1,4 @@
-package container
+package landlock

 import (
 	"strings"
@@ -14,11 +14,11 @@ const (
 	LANDLOCK_CREATE_RULESET_VERSION = 1 << iota
 )

-// LandlockAccessFS is bitmask of handled filesystem actions.
-type LandlockAccessFS uint64
+// AccessFS is bitmask of handled filesystem actions.
+type AccessFS uint64

 const (
-	LANDLOCK_ACCESS_FS_EXECUTE LandlockAccessFS = 1 << iota
+	LANDLOCK_ACCESS_FS_EXECUTE AccessFS = 1 << iota
 	LANDLOCK_ACCESS_FS_WRITE_FILE
 	LANDLOCK_ACCESS_FS_READ_FILE
 	LANDLOCK_ACCESS_FS_READ_DIR
@@ -38,8 +38,8 @@ const (
 	_LANDLOCK_ACCESS_FS_DELIM
 )

-// String returns a space-separated string of [LandlockAccessFS] flags.
-func (f LandlockAccessFS) String() string {
+// String returns a space-separated string of [AccessFS] flags.
+func (f AccessFS) String() string {
 	switch f {
 	case LANDLOCK_ACCESS_FS_EXECUTE:
 		return "execute"
@@ -90,8 +90,8 @@ func (f LandlockAccessFS) String() string {
 		return "fs_ioctl_dev"

 	default:
-		var c []LandlockAccessFS
-		for i := LandlockAccessFS(1); i < _LANDLOCK_ACCESS_FS_DELIM; i <<= 1 {
+		var c []AccessFS
+		for i := AccessFS(1); i < _LANDLOCK_ACCESS_FS_DELIM; i <<= 1 {
 			if f&i != 0 {
 				c = append(c, i)
 			}
@@ -107,18 +107,18 @@ func (f LandlockAccessFS) String() string {
 	}
 }

-// LandlockAccessNet is bitmask of handled network actions.
-type LandlockAccessNet uint64
+// AccessNet is bitmask of handled network actions.
+type AccessNet uint64

 const (
-	LANDLOCK_ACCESS_NET_BIND_TCP LandlockAccessNet = 1 << iota
+	LANDLOCK_ACCESS_NET_BIND_TCP AccessNet = 1 << iota
 	LANDLOCK_ACCESS_NET_CONNECT_TCP

 	_LANDLOCK_ACCESS_NET_DELIM
 )

-// String returns a space-separated string of [LandlockAccessNet] flags.
-func (f LandlockAccessNet) String() string {
+// String returns a space-separated string of [AccessNet] flags.
+func (f AccessNet) String() string {
 	switch f {
 	case LANDLOCK_ACCESS_NET_BIND_TCP:
 		return "bind_tcp"
@@ -127,8 +127,8 @@ func (f LandlockAccessNet) String() string {
 		return "connect_tcp"

 	default:
-		var c []LandlockAccessNet
-		for i := LandlockAccessNet(1); i < _LANDLOCK_ACCESS_NET_DELIM; i <<= 1 {
+		var c []AccessNet
+		for i := AccessNet(1); i < _LANDLOCK_ACCESS_NET_DELIM; i <<= 1 {
 			if f&i != 0 {
 				c = append(c, i)
 			}
@@ -144,18 +144,18 @@ func (f LandlockAccessNet) String() string {
 	}
 }

-// LandlockScope is bitmask of scopes restricting a Landlock domain from accessing outside resources.
-type LandlockScope uint64
+// Scope is bitmask of scopes restricting a Landlock domain from accessing outside resources.
+type Scope uint64

 const (
-	LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET LandlockScope = 1 << iota
+	LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET Scope = 1 << iota
 	LANDLOCK_SCOPE_SIGNAL

 	_LANDLOCK_SCOPE_DELIM
 )

-// String returns a space-separated string of [LandlockScope] flags.
-func (f LandlockScope) String() string {
+// String returns a space-separated string of [Scope] flags.
+func (f Scope) String() string {
 	switch f {
 	case LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET:
 		return "abstract_unix_socket"
@@ -164,8 +164,8 @@ func (f LandlockScope) String() string {
 		return "signal"

 	default:
-		var c []LandlockScope
-		for i := LandlockScope(1); i < _LANDLOCK_SCOPE_DELIM; i <<= 1 {
+		var c []Scope
+		for i := Scope(1); i < _LANDLOCK_SCOPE_DELIM; i <<= 1 {
 			if f&i != 0 {
 				c = append(c, i)
 			}
@@ -184,12 +184,12 @@ func (f LandlockScope) String() string {
 // RulesetAttr is equivalent to struct landlock_ruleset_attr.
 type RulesetAttr struct {
 	// Bitmask of handled filesystem actions.
-	HandledAccessFS LandlockAccessFS
+	HandledAccessFS AccessFS

 	// Bitmask of handled network actions.
-	HandledAccessNet LandlockAccessNet
+	HandledAccessNet AccessNet

 	// Bitmask of scopes restricting a Landlock domain from accessing outside
 	// resources (e.g. IPCs).
-	Scoped LandlockScope
+	Scoped Scope
 }

 // String returns a user-facing description of [RulesetAttr].
@@ -239,13 +239,13 @@ func (rulesetAttr *RulesetAttr) Create(flags uintptr) (fd int, err error) {
 	return fd, nil
 }

-// LandlockGetABI returns the ABI version supported by the kernel.
-func LandlockGetABI() (int, error) {
+// GetABI returns the ABI version supported by the kernel.
+func GetABI() (int, error) {
 	return (*RulesetAttr)(nil).Create(LANDLOCK_CREATE_RULESET_VERSION)
 }

-// LandlockRestrictSelf applies a loaded ruleset to the calling thread.
-func LandlockRestrictSelf(rulesetFd int, flags uintptr) error {
+// RestrictSelf applies a loaded ruleset to the calling thread.
+func RestrictSelf(rulesetFd int, flags uintptr) error {
 	r, _, errno := syscall.Syscall(
 		ext.SYS_LANDLOCK_RESTRICT_SELF,
 		uintptr(rulesetFd),
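The renamed AccessFS/AccessNet/Scope types above all share one String convention: known single flags map to fixed names in a switch, and any combination falls through to a default branch that decomposes the value bit by bit up to a delimiter constant. A portable sketch of the pattern with a hypothetical two-flag type (no Landlock syscalls involved):

```go
package main

import (
	"fmt"
	"strings"
)

// accessNet is a hypothetical two-bit flag type following the same String
// pattern as the Landlock flag types in the diff above.
type accessNet uint64

const (
	bindTCP accessNet = 1 << iota
	connectTCP

	// delim marks the first unused bit; the decomposition loop stops here.
	delim
)

func (f accessNet) String() string {
	switch f {
	case bindTCP:
		return "bind_tcp"
	case connectTCP:
		return "connect_tcp"
	default:
		// combinations: recurse into the single-flag cases bit by bit
		var c []string
		for i := accessNet(1); i < delim; i <<= 1 {
			if f&i != 0 {
				c = append(c, i.String())
			}
		}
		return strings.Join(c, " ")
	}
}

func main() {
	fmt.Println(bindTCP | connectTCP) // bind_tcp connect_tcp
}
```

The trailing unexported `delim` constant is what makes the loop self-maintaining: adding a flag above it automatically extends both the bitmask range and the String decomposition.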


@@ -0,0 +1,65 @@
package landlock_test

import (
	"testing"
	"unsafe"

	"hakurei.app/internal/landlock"
)

func TestLandlockString(t *testing.T) {
	t.Parallel()

	testCases := []struct {
		name        string
		rulesetAttr *landlock.RulesetAttr
		want        string
	}{
		{"nil", nil, "NULL"},
		{"zero", new(landlock.RulesetAttr), "0"},
		{"some", &landlock.RulesetAttr{Scoped: landlock.LANDLOCK_SCOPE_SIGNAL}, "scoped: signal"},
		{"set", &landlock.RulesetAttr{
			HandledAccessFS: landlock.LANDLOCK_ACCESS_FS_MAKE_SYM | landlock.LANDLOCK_ACCESS_FS_IOCTL_DEV | landlock.LANDLOCK_ACCESS_FS_WRITE_FILE,
			HandledAccessNet: landlock.LANDLOCK_ACCESS_NET_BIND_TCP,
			Scoped:           landlock.LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET | landlock.LANDLOCK_SCOPE_SIGNAL,
		}, "fs: write_file make_sym fs_ioctl_dev, net: bind_tcp, scoped: abstract_unix_socket signal"},
		{"all", &landlock.RulesetAttr{
			HandledAccessFS: landlock.LANDLOCK_ACCESS_FS_EXECUTE |
				landlock.LANDLOCK_ACCESS_FS_WRITE_FILE |
				landlock.LANDLOCK_ACCESS_FS_READ_FILE |
				landlock.LANDLOCK_ACCESS_FS_READ_DIR |
				landlock.LANDLOCK_ACCESS_FS_REMOVE_DIR |
				landlock.LANDLOCK_ACCESS_FS_REMOVE_FILE |
				landlock.LANDLOCK_ACCESS_FS_MAKE_CHAR |
				landlock.LANDLOCK_ACCESS_FS_MAKE_DIR |
				landlock.LANDLOCK_ACCESS_FS_MAKE_REG |
				landlock.LANDLOCK_ACCESS_FS_MAKE_SOCK |
				landlock.LANDLOCK_ACCESS_FS_MAKE_FIFO |
				landlock.LANDLOCK_ACCESS_FS_MAKE_BLOCK |
				landlock.LANDLOCK_ACCESS_FS_MAKE_SYM |
				landlock.LANDLOCK_ACCESS_FS_REFER |
				landlock.LANDLOCK_ACCESS_FS_TRUNCATE |
				landlock.LANDLOCK_ACCESS_FS_IOCTL_DEV,
			HandledAccessNet: landlock.LANDLOCK_ACCESS_NET_BIND_TCP |
				landlock.LANDLOCK_ACCESS_NET_CONNECT_TCP,
			Scoped: landlock.LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET |
				landlock.LANDLOCK_SCOPE_SIGNAL,
		}, "fs: execute write_file read_file read_dir remove_dir remove_file make_char make_dir make_reg make_sock make_fifo make_block make_sym fs_refer fs_truncate fs_ioctl_dev, net: bind_tcp connect_tcp, scoped: abstract_unix_socket signal"},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			if got := tc.rulesetAttr.String(); got != tc.want {
				t.Errorf("String: %s, want %s", got, tc.want)
			}
		})
	}
}

func TestLandlockAttrSize(t *testing.T) {
	t.Parallel()

	want := 24
	if got := unsafe.Sizeof(landlock.RulesetAttr{}); got != uintptr(want) {
		t.Errorf("Sizeof: %d, want %d", got, want)
	}
}


@@ -17,6 +17,7 @@ import (
 	"hakurei.app/ext"
 	"hakurei.app/internal/dbus"
 	"hakurei.app/internal/info"
+	"hakurei.app/internal/params"
 	"hakurei.app/message"
 )
@@ -84,7 +85,7 @@ type syscallDispatcher interface {
 	// setDumpable provides [container.SetDumpable].
 	setDumpable(dumpable uintptr) error
 	// receive provides [container.Receive].
-	receive(key string, e any, fdp *uintptr) (closeFunc func() error, err error)
+	receive(key string, e any, fdp *int) (closeFunc func() error, err error)
 	// containerStart provides the Start method of [container.Container].
 	containerStart(z *container.Container) error
@@ -154,8 +155,8 @@ func (direct) prctl(op, arg2, arg3 uintptr) error { return ext.Prctl(op, arg2, a
 func (direct) overflowUid(msg message.Msg) int { return container.OverflowUid(msg) }
 func (direct) overflowGid(msg message.Msg) int { return container.OverflowGid(msg) }
 func (direct) setDumpable(dumpable uintptr) error { return ext.SetDumpable(dumpable) }
-func (direct) receive(key string, e any, fdp *uintptr) (func() error, error) {
-	return container.Receive(key, e, fdp)
+func (direct) receive(key string, e any, fdp *int) (func() error, error) {
+	return params.Receive(key, e, fdp)
 }
 func (direct) containerStart(z *container.Container) error { return z.Start() }


@@ -401,12 +401,12 @@ func (k *kstub) setDumpable(dumpable uintptr) error {
 		stub.CheckArg(k.Stub, "dumpable", dumpable, 0))
 }
-func (k *kstub) receive(key string, e any, fdp *uintptr) (closeFunc func() error, err error) {
+func (k *kstub) receive(key string, e any, fdp *int) (closeFunc func() error, err error) {
 	k.Helper()
 	expect := k.Expects("receive")
 	reflect.ValueOf(e).Elem().Set(reflect.ValueOf(expect.Args[1]))
 	if expect.Args[2] != nil {
-		*fdp = expect.Args[2].(uintptr)
+		*fdp = int(expect.Args[2].(uintptr))
 	}
 	return func() error { return k.Expects("closeReceive").Err }, expect.Error(
 		stub.CheckArg(k.Stub, "key", key, 0))
@@ -712,7 +712,7 @@ func (panicDispatcher) cmdOutput(*exec.Cmd) ([]byte, error) { pa
 func (panicDispatcher) overflowUid(message.Msg) int { panic("unreachable") }
 func (panicDispatcher) overflowGid(message.Msg) int { panic("unreachable") }
 func (panicDispatcher) setDumpable(uintptr) error { panic("unreachable") }
-func (panicDispatcher) receive(string, any, *uintptr) (func() error, error) { panic("unreachable") }
+func (panicDispatcher) receive(string, any, *int) (func() error, error) { panic("unreachable") }
 func (panicDispatcher) containerStart(*container.Container) error { panic("unreachable") }
 func (panicDispatcher) containerServe(*container.Container) error { panic("unreachable") }
 func (panicDispatcher) containerWait(*container.Container) error { panic("unreachable") }


@@ -32,7 +32,14 @@ type outcome struct {
 	syscallDispatcher
 }
-func (k *outcome) finalise(ctx context.Context, msg message.Msg, id *hst.ID, config *hst.Config) error {
+// finalise prepares an outcome for main.
+func (k *outcome) finalise(
+	ctx context.Context,
+	msg message.Msg,
+	id *hst.ID,
+	config *hst.Config,
+	flags int,
+) error {
 	if ctx == nil || id == nil {
 		// unreachable
 		panic("invalid call to finalise")
@@ -43,7 +50,7 @@ func (k *outcome) finalise(ctx context.Context, msg message.Msg, id *hst.ID, con
 	}
 	k.ctx = ctx
-	if err := config.Validate(); err != nil {
+	if err := config.Validate(flags); err != nil {
 		return err
 	}


@@ -194,7 +194,7 @@ type outcomeStateSys struct {
 	// Copied from [hst.Config]. Safe for read by outcomeOp.toSystem.
 	appId string
 	// Copied from [hst.Config]. Safe for read by outcomeOp.toSystem.
-	et hst.Enablement
+	et hst.Enablements
 	// Copied from [hst.Config]. Safe for read by spWaylandOp.toSystem only.
 	directWayland bool


@@ -13,7 +13,6 @@ import (
 	"time"
 	"hakurei.app/check"
-	"hakurei.app/container"
 	"hakurei.app/fhs"
 	"hakurei.app/hst"
 	"hakurei.app/internal/info"
@@ -298,12 +297,12 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
 	// accumulate enablements of remaining instances
 	var (
 		// alive enablement bits
-		rt hst.Enablement
+		rt hst.Enablements
 		// alive instance count
 		n int
 	)
 	for eh := range entries {
-		var et hst.Enablement
+		var et hst.Enablements
 		if et, err = eh.Load(nil); err != nil {
 			perror(err, "read state header of instance "+eh.ID.String())
 		} else {
@@ -372,17 +371,18 @@ func (k *outcome) start(ctx context.Context, msg message.Msg,
 	// shim runs in the same session as monitor; see shim.go for behaviour
 	cmd.Cancel = func() error { return cmd.Process.Signal(syscall.SIGCONT) }
-	var shimPipe *os.File
-	if fd, w, err := container.Setup(&cmd.ExtraFiles); err != nil {
+	var shimPipe [2]*os.File
+	if r, w, err := os.Pipe(); err != nil {
 		return cmd, nil, &hst.AppError{Step: "create shim setup pipe", Err: err}
 	} else {
-		shimPipe = w
 		cmd.Env = []string{
 			// passed through to shim by hsu
-			shimEnv + "=" + strconv.Itoa(fd),
+			shimEnv + "=" + strconv.Itoa(3+len(cmd.ExtraFiles)),
 			// interpreted by hsu
 			"HAKUREI_IDENTITY=" + k.state.identity.String(),
 		}
+		cmd.ExtraFiles = append(cmd.ExtraFiles, r)
+		shimPipe[0], shimPipe[1] = r, w
 	}
 	if len(k.supp) > 0 {
@@ -393,12 +393,16 @@ func (k *outcome) start(ctx context.Context, msg message.Msg,
 	msg.Verbosef("setuid helper at %s", hsuPath)
 	if err := cmd.Start(); err != nil {
+		_, _ = shimPipe[0].Close(), shimPipe[1].Close()
 		msg.Resume()
-		return cmd, shimPipe, &hst.AppError{Step: "start setuid wrapper", Err: err}
+		return cmd, nil, &hst.AppError{Step: "start setuid wrapper", Err: err}
+	}
+	if err := shimPipe[0].Close(); err != nil {
+		msg.Verbose(err)
 	}
 	*startTime = time.Now().UTC()
-	return cmd, shimPipe, nil
+	return cmd, shimPipe[1], nil
 }
 // serveShim serves outcomeState through the shim setup pipe.
@@ -411,11 +415,11 @@ func serveShim(msg message.Msg, shimPipe *os.File, state *outcomeState) error {
 		msg.Verbose(err.Error())
 	}
 	if err := gob.NewEncoder(shimPipe).Encode(state); err != nil {
+		_ = shimPipe.Close()
 		msg.Resume()
 		return &hst.AppError{Step: "transmit shim config", Err: err}
 	}
-	_ = shimPipe.Close()
-	return nil
+	return shimPipe.Close()
 }
 // printMessageError prints the error message according to [message.GetMessage],


@@ -18,7 +18,13 @@ import (
 func IsPollDescriptor(fd uintptr) bool
 // Main runs an app according to [hst.Config] and terminates. Main does not return.
-func Main(ctx context.Context, msg message.Msg, config *hst.Config, fd int) {
+func Main(
+	ctx context.Context,
+	msg message.Msg,
+	config *hst.Config,
+	flags int,
+	fd int,
+) {
 	// avoids runtime internals or standard streams
 	if fd >= 0 {
 		if IsPollDescriptor(uintptr(fd)) || fd < 3 {
@@ -34,7 +40,7 @@ func Main(ctx context.Context, msg message.Msg, config *hst.Config, fd int) {
 	k := outcome{syscallDispatcher: direct{msg}}
 	finaliseTime := time.Now()
-	if err := k.finalise(ctx, msg, &id, config); err != nil {
+	if err := k.finalise(ctx, msg, &id, config, flags); err != nil {
 		printMessageError(msg.GetLogger().Fatalln, "cannot seal app:", err)
 		panic("unreachable")
 	}


@@ -288,7 +288,7 @@ func TestOutcomeRun(t *testing.T) {
 			},
 			Filter: true,
 		},
-		Enablements: hst.NewEnablements(hst.EWayland | hst.EDBus | hst.EPipeWire | hst.EPulse),
+		Enablements: new(hst.EWayland | hst.EDBus | hst.EPipeWire | hst.EPulse),
 		Container: &hst.ContainerConfig{
 			Filesystem: []hst.FilesystemConfigJSON{
@@ -427,7 +427,7 @@ func TestOutcomeRun(t *testing.T) {
 		DirectPipeWire: true,
 		ID: "org.chromium.Chromium",
-		Enablements: hst.NewEnablements(hst.EWayland | hst.EDBus | hst.EPipeWire | hst.EPulse),
+		Enablements: new(hst.EWayland | hst.EDBus | hst.EPipeWire | hst.EPulse),
 		Container: &hst.ContainerConfig{
 			Env: nil,
 			Filesystem: []hst.FilesystemConfigJSON{


@@ -20,6 +20,7 @@ import (
 	"hakurei.app/ext"
 	"hakurei.app/fhs"
 	"hakurei.app/hst"
+	"hakurei.app/internal/params"
 	"hakurei.app/internal/pipewire"
 	"hakurei.app/message"
 )
@@ -197,7 +198,7 @@ func shimEntrypoint(k syscallDispatcher) {
 	if errors.Is(err, syscall.EBADF) {
 		k.fatal("invalid config descriptor")
 	}
-	if errors.Is(err, container.ErrReceiveEnv) {
+	if errors.Is(err, params.ErrReceiveEnv) {
 		k.fatal(shimEnv + " not set")
 	}


@@ -16,6 +16,7 @@ import (
 	"hakurei.app/fhs"
 	"hakurei.app/hst"
 	"hakurei.app/internal/env"
+	"hakurei.app/internal/params"
 	"hakurei.app/internal/stub"
 )
@@ -172,7 +173,7 @@ func TestShimEntrypoint(t *testing.T) {
 		call("setDumpable", stub.ExpectArgs{uintptr(ext.SUID_DUMP_DISABLE)}, nil, nil),
 		call("getppid", stub.ExpectArgs{}, 0xbad, nil),
 		call("setupContSignal", stub.ExpectArgs{0xbad}, 0, nil),
-		call("receive", stub.ExpectArgs{"HAKUREI_SHIM", outcomeState{}, nil}, nil, container.ErrReceiveEnv),
+		call("receive", stub.ExpectArgs{"HAKUREI_SHIM", outcomeState{}, nil}, nil, params.ErrReceiveEnv),
 		call("fatal", stub.ExpectArgs{[]any{"HAKUREI_SHIM not set"}}, nil, nil),
 		// deferred


@@ -21,7 +21,7 @@ func TestSpPulseOp(t *testing.T) {
 	newConfig := func() *hst.Config {
 		config := hst.Template()
 		config.DirectPulse = true
-		config.Enablements = hst.NewEnablements(hst.EPulse)
+		config.Enablements = new(hst.EPulse)
 		return config
 	}

internal/params/params.go (new file, 42 lines)

@@ -0,0 +1,42 @@
// Package params provides helpers for receiving setup payload from parent.
package params
import (
"encoding/gob"
"errors"
"os"
"strconv"
"syscall"
)
// ErrReceiveEnv is returned by [Receive] if the setup fd is not present in the environment.
var ErrReceiveEnv = errors.New("environment variable not set")
// Receive retrieves setup fd from the environment and receives params.
//
// The file descriptor written to the value pointed to by fdp must not be passed
// to any system calls. It is made available for file descriptor ordering only.
func Receive(key string, v any, fdp *int) (func() error, error) {
var setup *os.File
if s, ok := os.LookupEnv(key); !ok {
return nil, ErrReceiveEnv
} else {
if fd, err := strconv.Atoi(s); err != nil {
if _err := errors.Unwrap(err); _err != nil {
err = _err
}
return nil, err
} else {
setup = os.NewFile(uintptr(fd), "setup")
if setup == nil {
return nil, syscall.EDOM
}
if fdp != nil {
*fdp = fd
}
}
}
return setup.Close, gob.NewDecoder(setup).Decode(v)
}


@@ -1,4 +1,4 @@
-package container_test
+package params_test
 import (
 	"encoding/gob"
@@ -9,7 +9,7 @@ import (
 	"syscall"
 	"testing"
-	"hakurei.app/container"
+	"hakurei.app/internal/params"
 )
 func TestSetupReceive(t *testing.T) {
@@ -30,8 +30,8 @@ func TestSetupReceive(t *testing.T) {
 			})
 		}
-		if _, err := container.Receive(key, nil, nil); !errors.Is(err, container.ErrReceiveEnv) {
-			t.Errorf("Receive: error = %v, want %v", err, container.ErrReceiveEnv)
+		if _, err := params.Receive(key, nil, nil); !errors.Is(err, params.ErrReceiveEnv) {
+			t.Errorf("Receive: error = %v, want %v", err, params.ErrReceiveEnv)
 		}
 	})
@@ -39,7 +39,7 @@ func TestSetupReceive(t *testing.T) {
 		const key = "TEST_ENV_FORMAT"
 		t.Setenv(key, "")
-		if _, err := container.Receive(key, nil, nil); !errors.Is(err, strconv.ErrSyntax) {
+		if _, err := params.Receive(key, nil, nil); !errors.Is(err, strconv.ErrSyntax) {
 			t.Errorf("Receive: error = %v, want %v", err, strconv.ErrSyntax)
 		}
 	})
@@ -48,7 +48,7 @@ func TestSetupReceive(t *testing.T) {
 		const key = "TEST_ENV_RANGE"
 		t.Setenv(key, "-1")
-		if _, err := container.Receive(key, nil, nil); !errors.Is(err, syscall.EDOM) {
+		if _, err := params.Receive(key, nil, nil); !errors.Is(err, syscall.EDOM) {
 			t.Errorf("Receive: error = %v, want %v", err, syscall.EDOM)
 		}
 	})
@@ -60,16 +60,22 @@ func TestSetupReceive(t *testing.T) {
 		encoderDone := make(chan error, 1)
 		extraFiles := make([]*os.File, 0, 1)
-		deadline, _ := t.Deadline()
-		if fd, f, err := container.Setup(&extraFiles); err != nil {
+		if r, w, err := os.Pipe(); err != nil {
 			t.Fatalf("Setup: error = %v", err)
-		} else if fd != 3 {
-			t.Fatalf("Setup: fd = %d, want 3", fd)
 		} else {
-			if err = f.SetDeadline(deadline); err != nil {
-				t.Fatal(err.Error())
-			}
-			go func() { encoderDone <- gob.NewEncoder(f).Encode(payload) }()
+			t.Cleanup(func() {
+				if err = errors.Join(r.Close(), w.Close()); err != nil {
+					t.Fatal(err)
+				}
+			})
+			extraFiles = append(extraFiles, r)
+			if deadline, ok := t.Deadline(); ok {
+				if err = w.SetDeadline(deadline); err != nil {
+					t.Fatal(err)
+				}
+			}
+			go func() { encoderDone <- gob.NewEncoder(w).Encode(payload) }()
 		}
 		if len(extraFiles) != 1 {
@@ -87,13 +93,13 @@ func TestSetupReceive(t *testing.T) {
 		var (
 			gotPayload []uint64
-			fdp *uintptr
+			fdp *int
 		)
 		if !useNilFdp {
-			fdp = new(uintptr)
+			fdp = new(int)
 		}
 		var closeFile func() error
-		if f, err := container.Receive(key, &gotPayload, fdp); err != nil {
+		if f, err := params.Receive(key, &gotPayload, fdp); err != nil {
 			t.Fatalf("Receive: error = %v", err)
 		} else {
 			closeFile = f
@@ -103,7 +109,7 @@ func TestSetupReceive(t *testing.T) {
 			}
 		}
 		if !useNilFdp {
-			if int(*fdp) != dupFd {
+			if *fdp != dupFd {
 				t.Errorf("Fd: %d, want %d", *fdp, dupFd)
 			}
 		}


@@ -27,6 +27,11 @@ import (
 // AbsWork is the container pathname [TContext.GetWorkDir] is mounted on.
 var AbsWork = fhs.AbsRoot.Append("work/")
+// EnvJobs is the name of the environment variable holding a decimal
+// representation of the preferred job count. Its value must not affect cure
+// outcome.
+const EnvJobs = "CURE_JOBS"
 // ExecPath is a slice of [Artifact] and the [check.Absolute] pathname to make
 // it available at under in the container.
 type ExecPath struct {
@@ -397,6 +402,7 @@ const SeccompPresets = std.PresetStrict &
 func (a *execArtifact) makeContainer(
 	ctx context.Context,
 	msg message.Msg,
+	flags, jobs int,
 	hostNet bool,
 	temp, work *check.Absolute,
 	getArtifact GetArtifactFunc,
@@ -423,14 +429,16 @@ func (a *execArtifact) makeContainer(
 	z.SeccompFlags |= seccomp.AllowMultiarch
 	z.ParentPerm = 0700
 	z.HostNet = hostNet
+	z.HostAbstract = flags&CHostAbstract != 0
 	z.Hostname = "cure"
+	z.SetScheduler = flags&CSchedIdle != 0
 	z.SchedPolicy = ext.SCHED_IDLE
 	if z.HostNet {
 		z.Hostname = "cure-net"
 	}
 	z.Uid, z.Gid = (1<<10)-1, (1<<10)-1
-	z.Dir, z.Env, z.Path, z.Args = a.dir, a.env, a.path, a.args
+	z.Dir, z.Path, z.Args = a.dir, a.path, a.args
+	z.Env = slices.Concat(a.env, []string{EnvJobs + "=" + strconv.Itoa(jobs)})
 	z.Grow(len(a.paths) + 4)
 	for i, b := range a.paths {
@@ -559,6 +567,8 @@ func (c *Cache) EnterExec(
 	var z *container.Container
 	z, err = e.makeContainer(
 		ctx, c.msg,
+		c.flags,
+		c.jobs,
 		hostNet,
 		temp, work,
 		func(a Artifact) (*check.Absolute, unique.Handle[Checksum]) {
@@ -598,14 +608,13 @@ func (a *execArtifact) cure(f *FContext, hostNet bool) (err error) {
 	msg := f.GetMessage()
 	var z *container.Container
 	if z, err = a.makeContainer(
-		ctx, msg, hostNet,
+		ctx, msg, f.cache.flags, f.GetJobs(), hostNet,
 		f.GetTempDir(), f.GetWorkDir(),
 		f.GetArtifact,
 		f.cache.Ident,
 	); err != nil {
 		return
 	}
-	z.SetScheduler = f.cache.flags&CSchedIdle != 0
 	var status io.Writer
 	if status, err = f.GetStatusWriter(); err != nil {


@@ -3,7 +3,6 @@ package pkg
 import (
 	"bufio"
 	"bytes"
-	"context"
 	"crypto/sha512"
 	"encoding/binary"
 	"errors"
@@ -11,6 +10,7 @@ import (
 	"io"
 	"slices"
 	"strconv"
+	"sync"
 	"syscall"
 	"unique"
 	"unsafe"
@@ -39,22 +39,45 @@ func panicToError(errP *error) {
 	}
 }
+// irCache implements [IRCache].
+type irCache struct {
+	// Artifact to [unique.Handle] of identifier cache.
+	artifact sync.Map
+	// Identifier free list, must not be accessed directly.
+	identPool sync.Pool
+}
+// zeroIRCache returns the initialised value of irCache.
+func zeroIRCache() irCache {
+	return irCache{
+		identPool: sync.Pool{New: func() any { return new(extIdent) }},
+	}
+}
+// IRCache provides memory management and caching primitives for IR and
+// identifier operations against [Artifact] implementations.
+//
+// The zero value is not safe for use.
+type IRCache struct{ irCache }
+// NewIR returns the address of a new [IRCache].
+func NewIR() *IRCache {
+	return &IRCache{zeroIRCache()}
+}
 // IContext is passed to [Artifact.Params] and provides methods for writing
 // values to the IR writer. It does not expose the underlying [io.Writer].
 //
 // IContext is valid until [Artifact.Params] returns.
 type IContext struct {
-	// Address of underlying [Cache], should be zeroed or made unusable after
+	// Address of underlying irCache, should be zeroed or made unusable after
 	// [Artifact.Params] returns and must not be exposed directly.
-	cache *Cache
+	ic *irCache
 	// Written to by various methods, should be zeroed after [Artifact.Params]
 	// returns and must not be exposed directly.
 	w io.Writer
 }
-// Unwrap returns the underlying [context.Context].
-func (i *IContext) Unwrap() context.Context { return i.cache.ctx }
 // irZero is a zero IR word.
 var irZero [wordSize]byte
@@ -136,11 +159,11 @@ func (i *IContext) mustWrite(p []byte) {
 // WriteIdent is not defined for an [Artifact] not part of the slice returned by
 // [Artifact.Dependencies].
 func (i *IContext) WriteIdent(a Artifact) {
-	buf := i.cache.getIdentBuf()
-	defer i.cache.putIdentBuf(buf)
+	buf := i.ic.getIdentBuf()
+	defer i.ic.putIdentBuf(buf)
 	IRKindIdent.encodeHeader(0).put(buf[:])
-	*(*ID)(buf[wordSize:]) = i.cache.Ident(a).Value()
+	*(*ID)(buf[wordSize:]) = i.ic.Ident(a).Value()
 	i.mustWrite(buf[:])
 }
@@ -183,19 +206,19 @@ func (i *IContext) WriteString(s string) {
 // Encode writes a deterministic, efficient representation of a to w and returns
 // the first non-nil error encountered while writing to w.
-func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
+func (ic *irCache) Encode(w io.Writer, a Artifact) (err error) {
 	deps := a.Dependencies()
 	idents := make([]*extIdent, len(deps))
 	for i, d := range deps {
-		dbuf, did := c.unsafeIdent(d, true)
+		dbuf, did := ic.unsafeIdent(d, true)
 		if dbuf == nil {
-			dbuf = c.getIdentBuf()
+			dbuf = ic.getIdentBuf()
 			binary.LittleEndian.PutUint64(dbuf[:], uint64(d.Kind()))
 			*(*ID)(dbuf[wordSize:]) = did.Value()
 		} else {
-			c.storeIdent(d, dbuf)
+			ic.storeIdent(d, dbuf)
 		}
-		defer c.putIdentBuf(dbuf)
+		defer ic.putIdentBuf(dbuf)
 		idents[i] = dbuf
 	}
 	slices.SortFunc(idents, func(a, b *extIdent) int {
@@ -221,10 +244,10 @@ func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
 	}
 	func() {
-		i := IContext{c, w}
+		i := IContext{ic, w}
 		defer panicToError(&err)
-		defer func() { i.cache, i.w = nil, nil }()
+		defer func() { i.ic, i.w = nil, nil }()
 		a.Params(&i)
 	}()
@@ -233,7 +256,7 @@ func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
 	}
 	var f IREndFlag
-	kcBuf := c.getIdentBuf()
+	kcBuf := ic.getIdentBuf()
 	sz := wordSize
 	if kc, ok := a.(KnownChecksum); ok {
 		f |= IREndKnownChecksum
@@ -243,13 +266,13 @@ func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
 	IRKindEnd.encodeHeader(uint32(f)).put(kcBuf[:])
 	_, err = w.Write(kcBuf[:sz])
-	c.putIdentBuf(kcBuf)
+	ic.putIdentBuf(kcBuf)
 	return
 }
 // encodeAll implements EncodeAll by recursively encoding dependencies and
 // performs deduplication by value via the encoded map.
-func (c *Cache) encodeAll(
+func (ic *irCache) encodeAll(
 	w io.Writer,
 	a Artifact,
 	encoded map[Artifact]struct{},
@@ -259,13 +282,13 @@ func (c *Cache) encodeAll(
 	}
 	for _, d := range a.Dependencies() {
-		if err = c.encodeAll(w, d, encoded); err != nil {
+		if err = ic.encodeAll(w, d, encoded); err != nil {
 			return
 		}
 	}
 	encoded[a] = struct{}{}
-	return c.Encode(w, a)
+	return ic.Encode(w, a)
 }
 // EncodeAll writes a self-describing IR stream of a to w and returns the first
@@ -283,8 +306,8 @@ func (c *Cache) encodeAll(
 // the ident cache, nor does it contribute identifiers it computes back to the
 // ident cache. Because of this, multiple invocations of EncodeAll will have
 // similar cost and does not amortise when combined with a call to Cure.
-func (c *Cache) EncodeAll(w io.Writer, a Artifact) error {
-	return c.encodeAll(w, a, make(map[Artifact]struct{}))
+func (ic *irCache) EncodeAll(w io.Writer, a Artifact) error {
+	return ic.encodeAll(w, a, make(map[Artifact]struct{}))
 }
 // ErrRemainingIR is returned for a [IRReadFunc] that failed to call


@@ -8,7 +8,6 @@ import (
 	"testing"
 	"testing/fstest"
 	"unique"
-	"unsafe"
 	"hakurei.app/check"
 	"hakurei.app/internal/pkg"
@@ -33,20 +32,14 @@ func TestHTTPGet(t *testing.T) {
 	checkWithCache(t, []cacheTestCase{
 		{"direct", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
-			var r pkg.RContext
-			rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
-			reflect.NewAt(
-				rCacheVal.Type(),
-				unsafe.Pointer(rCacheVal.UnsafeAddr()),
-			).Elem().Set(reflect.ValueOf(c))
+			r := newRContext(t, c)
 			f := pkg.NewHTTPGet(
 				&client,
 				"file:///testdata",
 				testdataChecksum.Value(),
 			)
 			var got []byte
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)
@@ -65,7 +58,7 @@ func TestHTTPGet(t *testing.T) {
 			wantErrMismatch := &pkg.ChecksumMismatchError{
 				Got: testdataChecksum.Value(),
 			}
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)
@@ -76,7 +69,7 @@ func TestHTTPGet(t *testing.T) {
 			}
 			// check fallback validation
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if err = rc.Close(); !reflect.DeepEqual(err, wantErrMismatch) {
 				t.Fatalf("Close: error = %#v, want %#v", err, wantErrMismatch)
@@ -89,18 +82,13 @@ func TestHTTPGet(t *testing.T) {
 				pkg.Checksum{},
 			)
 			wantErrNotFound := pkg.ResponseStatusError(http.StatusNotFound)
-			if _, err := f.Cure(&r); !reflect.DeepEqual(err, wantErrNotFound) {
+			if _, err := f.Cure(r); !reflect.DeepEqual(err, wantErrNotFound) {
 				t.Fatalf("Cure: error = %#v, want %#v", err, wantErrNotFound)
 			}
 		}, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
 		{"cure", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
-			var r pkg.RContext
-			rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
-			reflect.NewAt(
-				rCacheVal.Type(),
-				unsafe.Pointer(rCacheVal.UnsafeAddr()),
-			).Elem().Set(reflect.ValueOf(c))
+			r := newRContext(t, c)
 			f := pkg.NewHTTPGet(
 				&client,
@@ -120,7 +108,7 @@ func TestHTTPGet(t *testing.T) {
 			}
 			var got []byte
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)
@@ -136,7 +124,7 @@ func TestHTTPGet(t *testing.T) {
 				"file:///testdata",
 				testdataChecksum.Value(),
 			)
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)

View File

@@ -72,6 +72,10 @@ func MustDecode(s string) (checksum Checksum) {
 // common holds elements and receives methods shared between different contexts.
 type common struct {
+	// Context specific to this [Artifact]. The toplevel context in [Cache] must
+	// not be exposed directly.
+	ctx context.Context
+
 	// Address of underlying [Cache], should be zeroed or made unusable after
 	// Cure returns and must not be exposed directly.
 	cache *Cache
@@ -183,11 +187,15 @@ func (t *TContext) destroy(errP *error) {
 }

 // Unwrap returns the underlying [context.Context].
-func (c *common) Unwrap() context.Context { return c.cache.ctx }
+func (c *common) Unwrap() context.Context { return c.ctx }

 // GetMessage returns [message.Msg] held by the underlying [Cache].
 func (c *common) GetMessage() message.Msg { return c.cache.msg }

+// GetJobs returns the preferred number of jobs to run, when applicable. Its
+// value must not affect the cure outcome.
+func (c *common) GetJobs() int { return c.cache.jobs }
+
 // GetWorkDir returns a pathname to a directory which [Artifact] is expected to
 // write its output to. This is not the final resting place of the [Artifact]
 // and this pathname should not be directly referred to in the final contents.
@@ -207,11 +215,11 @@ func (t *TContext) GetTempDir() *check.Absolute { return t.temp }
 // [ChecksumMismatchError], or the underlying implementation may block on Close.
 func (c *common) Open(a Artifact) (r io.ReadCloser, err error) {
 	if f, ok := a.(FileArtifact); ok {
-		return c.cache.openFile(f)
+		return c.cache.openFile(c.ctx, f)
 	}

 	var pathname *check.Absolute
-	if pathname, _, err = c.cache.Cure(a); err != nil {
+	if pathname, _, err = c.cache.cure(a, true); err != nil {
 		return
 	}
@@ -372,6 +380,9 @@ type KnownChecksum interface {
 }

 // FileArtifact refers to an [Artifact] backed by a single file.
+//
+// FileArtifact does not support fine-grained cancellation. Its context is
+// inherited from the first [TrivialArtifact] or [FloodArtifact] that opens it.
 type FileArtifact interface {
 	// Cure returns [io.ReadCloser] of the full contents of [FileArtifact]. If
 	// [FileArtifact] implements [KnownChecksum], Cure is responsible for
@@ -521,8 +532,40 @@ const (
 	// was caused by an incorrect checksum accidentally left behind while
 	// bumping a package. Only enable this if you are really sure you need it.
 	CAssumeChecksum
+
+	// CHostAbstract disables restriction of sandboxed processes from connecting
+	// to an abstract UNIX socket created by a host process.
+	//
+	// This is considered less secure on some systems, but does not introduce
+	// impurity due to [KindExecNet] being [KnownChecksum]. This flag exists
+	// to support kernels without the Landlock LSM enabled.
+	CHostAbstract
 )

+// toplevel holds [context.WithCancel] over the caller-supplied context, from
+// which all [Artifact] contexts are derived.
+type toplevel struct {
+	ctx    context.Context
+	cancel context.CancelFunc
+}
+
+// newToplevel returns the address of a new toplevel derived from ctx.
+func newToplevel(ctx context.Context) *toplevel {
+	var t toplevel
+	t.ctx, t.cancel = context.WithCancel(ctx)
+	return &t
+}
+
+// pendingCure provides synchronisation and cancellation for pending cures.
+type pendingCure struct {
+	// Closed on cure completion.
+	done <-chan struct{}
+	// Error outcome, safe to access after done is closed.
+	err error
+	// Cancels the corresponding cure.
+	cancel context.CancelFunc
+}
+
 // Cache is a support layer that implementations of [Artifact] can use to store
 // cured [Artifact] data in a content addressed fashion.
 type Cache struct {
@@ -530,11 +573,10 @@ type Cache struct {
 	// implementation and receives an equal amount of elements after.
 	cures chan struct{}

-	// [context.WithCancel] over caller-supplied context, used by [Artifact] and
-	// all dependency curing goroutines.
-	ctx context.Context
-	// Cancels ctx.
-	cancel context.CancelFunc
+	// Parent context which toplevel was derived from.
+	parent context.Context
+	// For deriving curing context, must not be accessed directly.
+	toplevel atomic.Pointer[toplevel]

 	// For waiting on dependency curing goroutines.
 	wg sync.WaitGroup
 	// Reports new cures and passed to [Artifact].
@@ -544,11 +586,11 @@ type Cache struct {
 	base *check.Absolute
 	// Immutable cure options set by [Open].
 	flags int
+	// Immutable job count, when applicable.
+	jobs int

-	// Artifact to [unique.Handle] of identifier cache.
-	artifact sync.Map
-	// Identifier free list, must not be accessed directly.
-	identPool sync.Pool
+	// Must not be exposed directly.
+	irCache

 	// Synchronises access to dirChecksum.
 	checksumMu sync.RWMutex
@@ -558,9 +600,11 @@ type Cache struct {
 	// Identifier to error pair for unrecoverably faulted [Artifact].
 	identErr map[unique.Handle[ID]]error
 	// Pending identifiers, accessed through Cure for entries not in ident.
-	identPending map[unique.Handle[ID]]<-chan struct{}
+	identPending map[unique.Handle[ID]]*pendingCure
 	// Synchronises access to ident and corresponding filesystem entries.
 	identMu sync.RWMutex
+	// Synchronises entry into Abort and Cure.
+	abortMu sync.RWMutex

 	// Synchronises entry into exclusive artifacts for the cure method.
 	exclMu sync.Mutex
@@ -569,8 +613,10 @@ type Cache struct {
 	// Unlocks the on-filesystem cache. Must only be called from Close.
 	unlock func()

-	// Synchronises calls to Close.
-	closeOnce sync.Once
+	// Whether [Cache] is considered closed.
+	closed bool
+	// Synchronises calls to Abort and Close.
+	closeMu sync.Mutex

 	// Whether EnterExec has not yet returned.
 	inExec atomic.Bool
@@ -580,24 +626,24 @@ type Cache struct {
 type extIdent [wordSize + len(ID{})]byte

 // getIdentBuf returns the address of an extIdent for Ident.
-func (c *Cache) getIdentBuf() *extIdent { return c.identPool.Get().(*extIdent) }
+func (ic *irCache) getIdentBuf() *extIdent { return ic.identPool.Get().(*extIdent) }

 // putIdentBuf adds buf to identPool.
-func (c *Cache) putIdentBuf(buf *extIdent) { c.identPool.Put(buf) }
+func (ic *irCache) putIdentBuf(buf *extIdent) { ic.identPool.Put(buf) }

 // storeIdent adds an [Artifact] to the artifact cache.
-func (c *Cache) storeIdent(a Artifact, buf *extIdent) unique.Handle[ID] {
+func (ic *irCache) storeIdent(a Artifact, buf *extIdent) unique.Handle[ID] {
 	idu := unique.Make(ID(buf[wordSize:]))
-	c.artifact.Store(a, idu)
+	ic.artifact.Store(a, idu)
 	return idu
 }

 // Ident returns the identifier of an [Artifact].
-func (c *Cache) Ident(a Artifact) unique.Handle[ID] {
-	buf, idu := c.unsafeIdent(a, false)
+func (ic *irCache) Ident(a Artifact) unique.Handle[ID] {
+	buf, idu := ic.unsafeIdent(a, false)
 	if buf != nil {
-		idu = c.storeIdent(a, buf)
-		c.putIdentBuf(buf)
+		idu = ic.storeIdent(a, buf)
+		ic.putIdentBuf(buf)
 	}
 	return idu
 }

@@ -605,17 +651,17 @@ func (c *Cache) Ident(a Artifact) unique.Handle[ID] {
 // unsafeIdent implements Ident but returns the underlying buffer for a newly
 // computed identifier. Callers must return this buffer to identPool. encodeKind
 // is only a hint, kind may still be encoded in the buffer.
-func (c *Cache) unsafeIdent(a Artifact, encodeKind bool) (
+func (ic *irCache) unsafeIdent(a Artifact, encodeKind bool) (
 	buf *extIdent,
 	idu unique.Handle[ID],
 ) {
-	if id, ok := c.artifact.Load(a); ok {
+	if id, ok := ic.artifact.Load(a); ok {
 		idu = id.(unique.Handle[ID])
 		return
 	}

 	if ki, ok := a.(KnownIdent); ok {
-		buf = c.getIdentBuf()
+		buf = ic.getIdentBuf()
 		if encodeKind {
 			binary.LittleEndian.PutUint64(buf[:], uint64(a.Kind()))
 		}
@@ -623,9 +669,9 @@ func (c *Cache) unsafeIdent(a Artifact, encodeKind bool) (
 		return
 	}

-	buf = c.getIdentBuf()
+	buf = ic.getIdentBuf()
 	h := sha512.New384()
-	if err := c.Encode(h, a); err != nil {
+	if err := ic.Encode(h, a); err != nil {
 		// unreachable
 		panic(err)
 	}
@@ -994,7 +1040,11 @@ func (c *Cache) Scrub(checks int) error {
 // loadOrStoreIdent attempts to load a cached [Artifact] by its identifier or
 // wait for a pending [Artifact] to cure. If neither is possible, the current
 // identifier is stored in identPending and a non-nil channel is returned.
+//
+// Since identErr is treated as grow-only, loadOrStoreIdent must not be entered
+// without holding a read lock on abortMu.
 func (c *Cache) loadOrStoreIdent(id unique.Handle[ID]) (
+	ctx context.Context,
 	done chan<- struct{},
 	checksum unique.Handle[Checksum],
 	err error,
@@ -1011,20 +1061,23 @@ func (c *Cache) loadOrStoreIdent(id unique.Handle[ID]) (
 		return
 	}

-	var notify <-chan struct{}
-	if notify, ok = c.identPending[id]; ok {
+	var pending *pendingCure
+	if pending, ok = c.identPending[id]; ok {
 		c.identMu.Unlock()
-		<-notify
+		<-pending.done
 		c.identMu.RLock()
 		if checksum, ok = c.ident[id]; !ok {
-			err = c.identErr[id]
+			err = pending.err
 		}
 		c.identMu.RUnlock()
 		return
 	}

 	d := make(chan struct{})
-	c.identPending[id] = d
+	pending = &pendingCure{done: d}
+	ctx, pending.cancel = context.WithCancel(c.toplevel.Load().ctx)
+	c.wg.Add(1)
+	c.identPending[id] = pending
 	c.identMu.Unlock()
 	done = d
 	return
@@ -1040,21 +1093,62 @@ func (c *Cache) finaliseIdent(
 ) {
 	c.identMu.Lock()
 	if err != nil {
+		c.identPending[id].err = err
 		c.identErr[id] = err
 	} else {
 		c.ident[id] = checksum
 	}
 	delete(c.identPending, id)
 	c.identMu.Unlock()
+	c.wg.Done()
 	close(done)
 }

+// Done returns a channel that is closed when the ongoing cure of an [Artifact]
+// referred to by the specified identifier completes. Done may return nil if
+// no ongoing cure of the specified identifier exists.
+func (c *Cache) Done(id unique.Handle[ID]) <-chan struct{} {
+	c.identMu.RLock()
+	pending, ok := c.identPending[id]
+	c.identMu.RUnlock()
+	if !ok || pending == nil {
+		return nil
+	}
+	return pending.done
+}
+
+// Cancel cancels the ongoing cure of an [Artifact] referred to by the specified
+// identifier. Cancel reports whether the corresponding [context.CancelFunc] was
+// called, and does not return until the cure is complete.
+func (c *Cache) Cancel(id unique.Handle[ID]) bool {
+	c.identMu.RLock()
+	pending, ok := c.identPending[id]
+	c.identMu.RUnlock()
+	if !ok || pending == nil || pending.cancel == nil {
+		return false
+	}
+	pending.cancel()
+	<-pending.done
+
+	c.abortMu.Lock()
+	c.identMu.Lock()
+	delete(c.identErr, id)
+	c.identMu.Unlock()
+	c.abortMu.Unlock()
+	return true
+}
 // openFile tries to load [FileArtifact] from [Cache], and if that fails,
 // obtains it via [FileArtifact.Cure] instead. Notably, it does not cure
 // [FileArtifact] to the filesystem. If err is nil, the caller is responsible
 // for closing the resulting [io.ReadCloser].
-func (c *Cache) openFile(f FileArtifact) (r io.ReadCloser, err error) {
+//
+// The context must originate from loadOrStoreIdent to enable cancellation.
+func (c *Cache) openFile(
+	ctx context.Context,
+	f FileArtifact,
+) (r io.ReadCloser, err error) {
 	if kc, ok := f.(KnownChecksum); c.flags&CAssumeChecksum != 0 && ok {
 		c.checksumMu.RLock()
 		r, err = os.Open(c.base.Append(
@@ -1085,7 +1179,7 @@ func (c *Cache) openFile(f FileArtifact) (r io.ReadCloser, err error) {
 			}
 		}()
 	}
-		return f.Cure(&RContext{common{c}})
+		return f.Cure(&RContext{common{ctx, c}})
 	}
 	return
 }
@@ -1233,12 +1327,11 @@ func (c *Cache) Cure(a Artifact) (
 	checksum unique.Handle[Checksum],
 	err error,
 ) {
-	select {
-	case <-c.ctx.Done():
-		err = c.ctx.Err()
-		return
-	default:
+	c.abortMu.RLock()
+	defer c.abortMu.RUnlock()
+
+	if err = c.toplevel.Load().ctx.Err(); err != nil {
+		return
 	}
 	return c.cure(a, true)
@@ -1324,15 +1417,16 @@ func (c *Cache) enterCure(a Artifact, curesExempt bool) error {
 		return nil
 	}

+	ctx := c.toplevel.Load().ctx
 	select {
 	case c.cures <- struct{}{}:
 		return nil
-	case <-c.ctx.Done():
+	case <-ctx.Done():
 		if a.IsExclusive() {
 			c.exclMu.Unlock()
 		}
-		return c.ctx.Err()
+		return ctx.Err()
 	}
 }
@@ -1430,7 +1524,8 @@ func (r *RContext) NewMeasuredReader(
 	return r.cache.newMeasuredReader(rc, checksum)
 }

-// cure implements Cure without checking the full dependency graph.
+// cure implements Cure without acquiring a read lock on abortMu. cure must not
+// be entered during Abort.
 func (c *Cache) cure(a Artifact, curesExempt bool) (
 	pathname *check.Absolute,
 	checksum unique.Handle[Checksum],
@@ -1449,8 +1544,11 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
 		}
 	}()

-	var done chan<- struct{}
-	done, checksum, err = c.loadOrStoreIdent(id)
+	var (
+		ctx  context.Context
+		done chan<- struct{}
+	)
+	ctx, done, checksum, err = c.loadOrStoreIdent(id)
 	if done == nil {
 		return
 	} else {
@@ -1563,7 +1661,7 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
 	if err = c.enterCure(a, curesExempt); err != nil {
 		return
 	}
-	r, err = f.Cure(&RContext{common{c}})
+	r, err = f.Cure(&RContext{common{ctx, c}})
 	if err == nil {
 		if checksumPathname == nil || c.flags&CValidateKnown != 0 {
 			h := sha512.New384()
@@ -1643,7 +1741,7 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
 		c.base.Append(dirWork, ids),
 		c.base.Append(dirTemp, ids),
 		ids, nil, nil, nil,
-		common{c},
+		common{ctx, c},
 	}
 	switch ca := a.(type) {
 	case TrivialArtifact:
@@ -1794,14 +1892,42 @@ func (c *Cache) OpenStatus(a Artifact) (r io.ReadSeekCloser, err error) {
 	return
 }

+// Abort cancels all pending cures and waits for them to clean up, but does not
+// close the cache.
+func (c *Cache) Abort() {
+	c.closeMu.Lock()
+	defer c.closeMu.Unlock()
+	if c.closed {
+		return
+	}
+
+	c.toplevel.Load().cancel()
+	c.abortMu.Lock()
+	defer c.abortMu.Unlock()
+	// holding abortMu, identPending stays empty
+	c.wg.Wait()
+
+	c.identMu.Lock()
+	c.toplevel.Store(newToplevel(c.parent))
+	clear(c.identErr)
+	c.identMu.Unlock()
+}
+
 // Close cancels all pending cures and waits for them to clean up.
 func (c *Cache) Close() {
-	c.closeOnce.Do(func() {
-		c.cancel()
+	c.closeMu.Lock()
+	defer c.closeMu.Unlock()
+	if c.closed {
+		return
+	}
+	c.closed = true
+
+	c.toplevel.Load().cancel()
 	c.wg.Wait()
 	close(c.cures)
 	c.unlock()
-	})
 }
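The change from sync.Once to a closed flag guarded by closeMu lets Abort and Close share one idempotency state: Abort bails out on a closed cache, and Close still runs its cleanup exactly once. A minimal sketch of the flag-under-mutex pattern (the `resource` type is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// resource demonstrates the closeMu+closed pattern: unlike sync.Once,
// a bool guarded by a mutex lets several methods (here Close, but also
// an Abort-style method) consult and mutate the same idempotency state.
type resource struct {
	mu     sync.Mutex
	closed bool
	closes int // counts how often cleanup actually ran
}

func (r *resource) Close() {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.closed {
		return // already closed: later calls are no-ops
	}
	r.closed = true
	r.closes++ // cleanup runs exactly once
}

func main() {
	var r resource
	r.Close()
	r.Close()
	fmt.Println(r.closes)
}
```

sync.Once cannot express "do nothing if some *other* method already ran", which is exactly the Abort/Close interaction the diff introduces.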
 // Open returns the address of a newly opened instance of [Cache].
@@ -1810,7 +1936,7 @@ func (c *Cache) Close() {
 // caller-supplied value, however direct calls to [Cache.Cure] is not subject
 // to this limitation.
 //
-// A cures value of 0 or lower is equivalent to the value returned by
+// A cures or jobs value of 0 or lower is equivalent to the value returned by
 // [runtime.NumCPU].
 //
 // A successful call to Open guarantees exclusive access to the on-filesystem
@@ -1820,10 +1946,10 @@ func (c *Cache) Close() {
 func Open(
 	ctx context.Context,
 	msg message.Msg,
-	flags, cures int,
+	flags, cures, jobs int,
 	base *check.Absolute,
 ) (*Cache, error) {
-	return open(ctx, msg, flags, cures, base, true)
+	return open(ctx, msg, flags, cures, jobs, base, true)
 }

 // open implements Open but allows omitting the [lockedfile] lock when called
@@ -1831,13 +1957,16 @@ func Open(
 func open(
 	ctx context.Context,
 	msg message.Msg,
-	flags, cures int,
+	flags, cures, jobs int,
 	base *check.Absolute,
 	lock bool,
 ) (*Cache, error) {
 	if cures < 1 {
 		cures = runtime.NumCPU()
 	}
+	if jobs < 1 {
+		jobs = runtime.NumCPU()
+	}

 	for _, name := range []string{
 		dirIdentifier,
@@ -1852,22 +1981,25 @@ func open(
 	}

 	c := Cache{
+		parent:       ctx,
 		cures:        make(chan struct{}, cures),
 		flags:        flags,
+		jobs:         jobs,
 		msg:          msg,
 		base:         base,
-		identPool:    sync.Pool{New: func() any { return new(extIdent) }},
+		irCache:      zeroIRCache(),
 		ident:        make(map[unique.Handle[ID]]unique.Handle[Checksum]),
 		identErr:     make(map[unique.Handle[ID]]error),
-		identPending: make(map[unique.Handle[ID]]<-chan struct{}),
+		identPending: make(map[unique.Handle[ID]]*pendingCure),
 		brPool:       sync.Pool{New: func() any { return new(bufio.Reader) }},
 		bwPool:       sync.Pool{New: func() any { return new(bufio.Writer) }},
 	}
-	c.ctx, c.cancel = context.WithCancel(ctx)
+	c.toplevel.Store(newToplevel(ctx))
 	if lock || !testing.Testing() {
 		if unlock, err := lockedfile.MutexAt(

View File

@@ -16,6 +16,7 @@ import (
 	"path/filepath"
 	"reflect"
 	"strconv"
+	"sync"
 	"syscall"
 	"testing"
 	"unique"
@@ -24,6 +25,8 @@ import (
 	"hakurei.app/check"
 	"hakurei.app/container"
 	"hakurei.app/fhs"
+	"hakurei.app/internal/info"
+	"hakurei.app/internal/landlock"
 	"hakurei.app/internal/pkg"
 	"hakurei.app/internal/stub"
 	"hakurei.app/message"
@@ -33,11 +36,28 @@ import (
 func unsafeOpen(
 	ctx context.Context,
 	msg message.Msg,
-	flags, cures int,
+	flags, cures, jobs int,
 	base *check.Absolute,
 	lock bool,
 ) (*pkg.Cache, error)

+// newRContext returns the address of a new [pkg.RContext] unsafely created for
+// the specified [testing.TB].
+func newRContext(tb testing.TB, c *pkg.Cache) *pkg.RContext {
+	var r pkg.RContext
+	rContextVal := reflect.ValueOf(&r).Elem().FieldByName("ctx")
+	reflect.NewAt(
+		rContextVal.Type(),
+		unsafe.Pointer(rContextVal.UnsafeAddr()),
+	).Elem().Set(reflect.ValueOf(tb.Context()))
+	rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
+	reflect.NewAt(
+		rCacheVal.Type(),
+		unsafe.Pointer(rCacheVal.UnsafeAddr()),
+	).Elem().Set(reflect.ValueOf(c))
+	return &r
+}
+
 func TestMain(m *testing.M) { container.TryArgv0(nil); os.Exit(m.Run()) }

 // overrideIdent overrides the ID method of [Artifact].
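newRContext sets unexported fields from an external test package via reflect.NewAt over the field's address, sidestepping the CanSet restriction. The trick in isolation, against a hypothetical `payload` struct (the real code targets pkg.RContext):

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// payload stands in for pkg.RContext: its field is unexported, so
// code in another package cannot assign it directly, and a plain
// reflect.Value.Set on it would panic with "unexported field".
type payload struct {
	secret string
}

// setUnexported writes v into the named unexported field of the
// struct pointed to by target, using the reflect.NewAt trick above:
// re-derive a settable Value from the field's raw address.
func setUnexported(target any, field string, v any) {
	fv := reflect.ValueOf(target).Elem().FieldByName(field)
	reflect.NewAt(
		fv.Type(),
		unsafe.Pointer(fv.UnsafeAddr()),
	).Elem().Set(reflect.ValueOf(v))
}

func main() {
	var p payload
	setUnexported(&p, "secret", "hello")
	fmt.Println(p.secret)
}
```

This only works on addressable values (hence passing `&p` and calling Elem), and it deliberately bypasses the type system, which is why the diff keeps it confined to test helpers.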
@@ -228,7 +248,7 @@ func TestIdent(t *testing.T) {
 	var cache *pkg.Cache
 	if a, err := check.NewAbs(t.TempDir()); err != nil {
 		t.Fatal(err)
-	} else if cache, err = pkg.Open(t.Context(), msg, 0, 0, a); err != nil {
+	} else if cache, err = pkg.Open(t.Context(), msg, 0, 0, 0, a); err != nil {
 		t.Fatal(err)
 	}
 	t.Cleanup(cache.Close)
@@ -289,8 +309,20 @@ func checkWithCache(t *testing.T, testCases []cacheTestCase) {
 	msg := message.New(log.New(os.Stderr, "cache: ", 0))
 	msg.SwapVerbose(testing.Verbose())

+	flags := tc.flags
+	if info.CanDegrade {
+		if _, err := landlock.GetABI(); err != nil {
+			if !errors.Is(err, syscall.ENOSYS) {
+				t.Fatalf("LandlockGetABI: error = %v", err)
+			}
+			flags |= pkg.CHostAbstract
+			t.Log("Landlock LSM is unavailable, setting CHostAbstract")
+		}
+	}
+
 	var scrubFunc func() error // scrub after hashing
-	if c, err := pkg.Open(t.Context(), msg, tc.flags, 1<<4, base); err != nil {
+	if c, err := pkg.Open(t.Context(), msg, flags, 1<<4, 0, base); err != nil {
 		t.Fatalf("Open: error = %v", err)
 	} else {
 		t.Cleanup(c.Close)
@@ -592,7 +624,7 @@ func TestCache(t *testing.T) {
 	if c0, err := unsafeOpen(
 		t.Context(),
 		message.New(nil),
-		0, 0, base, false,
+		0, 0, 0, base, false,
 	); err != nil {
 		t.Fatalf("open: error = %v", err)
 	} else {
@@ -862,17 +894,69 @@ func TestCache(t *testing.T) {
 		t.Fatalf("Scrub: error = %#v, want %#v", err, wantErrScrub)
 	}

-	identPendingVal := reflect.ValueOf(c).Elem().FieldByName("identPending")
-	identPending := reflect.NewAt(
-		identPendingVal.Type(),
-		unsafe.Pointer(identPendingVal.UnsafeAddr()),
-	).Elem().Interface().(map[unique.Handle[pkg.ID]]<-chan struct{})
-	notify := identPending[unique.Make(pkg.ID{0xff})]
+	notify := c.Done(unique.Make(pkg.ID{0xff}))
 	go close(n)
+	if notify != nil {
 		<-notify
+	}
+	for c.Done(unique.Make(pkg.ID{0xff})) != nil {
+	}
 	<-wCureDone
 }, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
+{"cancel abort block", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+	var wg sync.WaitGroup
+	defer wg.Wait()
+	var started sync.WaitGroup
+	defer started.Wait()
+	blockCures := func(d byte, e stub.UniqueError, n int) {
+		started.Add(n)
+		for i := range n {
+			wg.Go(func() {
+				if _, _, err := c.Cure(overrideIdent{pkg.ID{d, byte(i)}, &stubArtifact{
+					kind: pkg.KindTar,
+					cure: func(t *pkg.TContext) error {
+						started.Done()
+						<-t.Unwrap().Done()
+						return e + stub.UniqueError(i)
+					},
+				}}); !reflect.DeepEqual(err, e+stub.UniqueError(i)) {
+					panic(err)
+				}
+			})
+		}
+		started.Wait()
+	}
+
+	blockCures(0xfd, 0xbad, 16)
+	c.Abort()
+	wg.Wait()
+	blockCures(0xfd, 0xcafe, 16)
+	c.Abort()
+	wg.Wait()
+
+	blockCures(0xff, 0xbad, 1)
+	if !c.Cancel(unique.Make(pkg.ID{0xff})) {
+		t.Fatal("missed cancellation")
+	}
+	wg.Wait()
+	blockCures(0xff, 0xcafe, 1)
+	if !c.Cancel(unique.Make(pkg.ID{0xff})) {
+		t.Fatal("missed cancellation")
+	}
+	wg.Wait()
+	for c.Cancel(unique.Make(pkg.ID{0xff})) {
+	}
+
+	c.Close()
+	c.Abort()
+}, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
 {"no assume checksum", 0, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 	makeGarbage := func(work *check.Absolute, wantErr error) error {
 		if err := os.Mkdir(work.String(), 0700); err != nil {
@@ -1249,7 +1333,7 @@ func TestNew(t *testing.T) {
 	if _, err := pkg.Open(
 		t.Context(),
 		message.New(nil),
-		0, 0, check.MustAbs(container.Nonexistent),
+		0, 0, 0, check.MustAbs(container.Nonexistent),
 	); !reflect.DeepEqual(err, wantErr) {
 		t.Errorf("Open: error = %#v, want %#v", err, wantErr)
 	}
@@ -1277,7 +1361,7 @@ func TestNew(t *testing.T) {
 	if _, err := pkg.Open(
 		t.Context(),
 		message.New(nil),
-		0, 0, tempDir.Append("cache"),
+		0, 0, 0, tempDir.Append("cache"),
 	); !reflect.DeepEqual(err, wantErr) {
 		t.Errorf("Open: error = %#v, want %#v", err, wantErr)
 	}

View File

@@ -43,8 +43,7 @@ var _ fmt.Stringer = new(tarArtifactNamed)
 func (a *tarArtifactNamed) String() string { return a.name + "-unpack" }

 // NewTar returns a new [Artifact] backed by the supplied [Artifact] and
-// compression method. The source [Artifact] must be compatible with
-// [TContext.Open].
+// compression method. The source [Artifact] must be a [FileArtifact].
 func NewTar(a Artifact, compression uint32) Artifact {
 	ta := tarArtifact{a, compression}
 	if s, ok := a.(fmt.Stringer); ok {

View File

@@ -9,7 +9,9 @@ import (
 	"os"
 	"path/filepath"
 	"reflect"
+	"runtime"
 	"slices"
+	"strconv"
 	"strings"

 	"hakurei.app/check"
@@ -21,6 +23,10 @@ func main() {
 	log.SetFlags(0)
 	log.SetPrefix("testtool: ")

+	environ := slices.DeleteFunc(slices.Clone(os.Environ()), func(s string) bool {
+		return s == "CURE_JOBS="+strconv.Itoa(runtime.NumCPU())
+	})
+
 	var hostNet, layers, promote bool
 	if len(os.Args) == 2 && os.Args[0] == "testtool" {
 		switch os.Args[1] {
@@ -48,15 +54,15 @@ func main() {
 	var overlayRoot bool
 	wantEnv := []string{"HAKUREI_TEST=1"}
-	if len(os.Environ()) == 2 {
+	if len(environ) == 2 {
 		overlayRoot = true
 		if !layers && !promote {
 			log.SetPrefix("testtool(overlay root): ")
 		}
 		wantEnv = []string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"}
 	}
-	if !slices.Equal(wantEnv, os.Environ()) {
-		log.Fatalf("Environ: %q, want %q", os.Environ(), wantEnv)
+	if !slices.Equal(wantEnv, environ) {
+		log.Fatalf("Environ: %q, want %q", environ, wantEnv)
 	}

 	var overlayWork bool

View File

@@ -7,10 +7,10 @@ func (t Toolchain) newAttr() (pkg.Artifact, string) {
 		version  = "2.5.2"
 		checksum = "YWEphrz6vg1sUMmHHVr1CRo53pFXRhq_pjN-AlG8UgwZK1y6m7zuDhxqJhD0SV0l"
 	)
-	return t.NewPackage("attr", version, pkg.NewHTTPGetTar(
-		nil, "https://download.savannah.nongnu.org/releases/attr/"+
+	return t.NewPackage("attr", version, newTar(
+		"https://download.savannah.nongnu.org/releases/attr/"+
 			"attr-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Patches: []KV{
@@ -81,10 +81,10 @@ func (t Toolchain) newACL() (pkg.Artifact, string) {
 		version  = "2.3.2"
 		checksum = "-fY5nwH4K8ZHBCRXrzLdguPkqjKI6WIiGu4dBtrZ1o0t6AIU73w8wwJz_UyjIS0P"
 	)
-	return t.NewPackage("acl", version, pkg.NewHTTPGetTar(
-		nil, "https://download.savannah.nongnu.org/releases/acl/"+
+	return t.NewPackage("acl", version, newTar(
+		"https://download.savannah.nongnu.org/releases/acl/"+
 			"acl-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		// makes assumptions about uid_map/gid_map


@@ -16,9 +16,9 @@ import (
 type PArtifact int
 const (
-	LLVMCompilerRT PArtifact = iota
+	CompilerRT PArtifact = iota
 	LLVMRuntimes
-	LLVMClang
+	Clang
 	// EarlyInit is the Rosa OS init program.
 	EarlyInit
@@ -64,6 +64,7 @@
 	GenInitCPIO
 	Gettext
 	Git
+	Glslang
 	GnuTLS
 	Go
 	Gperf
@@ -76,13 +77,17 @@
 	LibXau
 	Libbsd
 	Libcap
+	Libclc
+	Libdrm
 	Libev
 	Libexpat
 	Libffi
 	Libgd
+	Libglvnd
 	Libiconv
 	Libmd
 	Libmnl
+	Libpciaccess
 	Libnftnl
 	Libpsl
 	Libseccomp
@@ -119,15 +124,18 @@
 	PerlTermReadKey
 	PerlTextCharWidth
 	PerlTextWrapI18N
-	PerlUnicodeGCString
+	PerlUnicodeLineBreak
 	PerlYAMLTiny
 	PkgConfig
 	Procps
 	Python
 	PythonIniConfig
+	PythonMako
+	PythonMarkupSafe
 	PythonPackaging
 	PythonPluggy
 	PythonPyTest
+	PythonPyYAML
 	PythonPygments
 	QEMU
 	Rdfind
@@ -135,6 +143,8 @@
 	Rsync
 	Sed
 	Setuptools
+	SPIRVHeaders
+	SPIRVTools
 	SquashfsTools
 	Strace
 	TamaGo
@@ -148,15 +158,17 @@
 	WaylandProtocols
 	XCB
 	XCBProto
-	Xproto
+	XDGDBusProxy
 	XZ
+	Xproto
 	Zlib
 	Zstd
 	// PresetUnexportedStart is the first unexported preset.
 	PresetUnexportedStart
-	buildcatrust = iota - 1
+	llvmSource = iota - 1
+	buildcatrust
 	utilMacros
 	// Musl is a standalone libc that does not depend on the toolchain.
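The `iota - 1` line preserved by this change is a small Go idiom worth noting: it makes the first unexported constant (`llvmSource`, previously `buildcatrust`) share the value of the `PresetUnexportedStart` sentinel before it, so the sentinel marks the boundary without consuming a value. A self-contained sketch of the idiom, with hypothetical names:

```go
package main

import "fmt"

// P stands in for PArtifact; the constant names here are illustrative only.
type P int

const (
	A P = iota // 0
	B          // 1
	// PresetUnexportedStart is the sentinel before the unexported block.
	PresetUnexportedStart // 2
	// first rewinds iota by one, aliasing the sentinel's value;
	// later constants continue counting from there.
	first = iota - 1 // 2, same as PresetUnexportedStart
	second           // 3
)

func main() {
	fmt.Println(P(first) == PresetUnexportedStart, second == first+1)
}
```

Inserting a new unexported entry (as `llvmSource` is inserted here before `buildcatrust`) shifts only the unexported values, leaving the exported ones stable.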


@@ -7,10 +7,10 @@ func (t Toolchain) newArgpStandalone() (pkg.Artifact, string) {
 		version = "1.3"
 		checksum = "vtW0VyO2pJ-hPyYmDI2zwSLS8QL0sPAUKC1t3zNYbwN2TmsaE-fADhaVtNd3eNFl"
 	)
-	return t.NewPackage("argp-standalone", version, pkg.NewHTTPGetTar(
-		nil, "http://www.lysator.liu.se/~nisse/misc/"+
+	return t.NewPackage("argp-standalone", version, newTar(
+		"http://www.lysator.liu.se/~nisse/misc/"+
 			"argp-standalone-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Env: []string{


@@ -7,9 +7,9 @@ func (t Toolchain) newBzip2() (pkg.Artifact, string) {
 		version = "1.0.8"
 		checksum = "cTLykcco7boom-s05H1JVsQi1AtChYL84nXkg_92Dm1Xt94Ob_qlMg_-NSguIK-c"
 	)
-	return t.NewPackage("bzip2", version, pkg.NewHTTPGetTar(
-		nil, "https://sourceware.org/pub/bzip2/bzip2-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("bzip2", version, newTar(
+		"https://sourceware.org/pub/bzip2/bzip2-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,


@@ -13,10 +13,11 @@ func (t Toolchain) newCMake() (pkg.Artifact, string) {
 		version = "4.3.1"
 		checksum = "RHpzZiM1kJ5bwLjo9CpXSeHJJg3hTtV9QxBYpQoYwKFtRh5YhGWpShrqZCSOzQN6"
 	)
-	return t.NewPackage("cmake", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/Kitware/CMake/releases/download/"+
-			"v"+version+"/cmake-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("cmake", version, newFromGitHubRelease(
+		"Kitware/CMake",
+		"v"+version,
+		"cmake-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		// test suite expects writable source tree
@@ -90,7 +91,7 @@ index 2ead810437..f85cbb8b1c 100644
 		ConfigureName: "/usr/src/cmake/bootstrap",
 		Configure: []KV{
 			{"prefix", "/system"},
-			{"parallel", `"$(nproc)"`},
+			{"parallel", jobsE},
 			{"--"},
 			{"-DCMAKE_USE_OPENSSL", "OFF"},
 			{"-DCMake_TEST_NO_NETWORK", "ON"},
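The first hunk swaps the hand-built GitHub URL for a `newFromGitHubRelease(repo, tag, asset, checksum, …)` helper. The helper's body is not shown in this diff, so the sketch below is a reconstruction of the URL it presumably computes, checked against the old hard-coded URL being deleted:

```go
package main

import "fmt"

// releaseURL reconstructs the download URL that newFromGitHubRelease
// likely derives from its (repo, tag, asset) arguments; this is an
// assumption based on the URL removed in the hunk above.
func releaseURL(repo, tag, asset string) string {
	return "https://github.com/" + repo + "/releases/download/" + tag + "/" + asset
}

func main() {
	version := "4.3.1"
	fmt.Println(releaseURL("Kitware/CMake", "v"+version, "cmake-"+version+".tar.gz"))
	// https://github.com/Kitware/CMake/releases/download/v4.3.1/cmake-4.3.1.tar.gz
}
```

Splitting repo, tag, and asset into separate arguments also removes the duplicated `version` interpolation from every call site.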
@@ -118,9 +119,6 @@ func init() {
 // CMakeHelper is the [CMake] build system helper.
 type CMakeHelper struct {
-	// Joined with name with a dash if non-empty.
-	Variant string
 	// Path elements joined with source.
 	Append []string
@@ -135,14 +133,6 @@ type CMakeHelper struct {
 var _ Helper = new(CMakeHelper)
-// name returns its arguments and an optional variant string joined with '-'.
-func (attr *CMakeHelper) name(name, version string) string {
-	if attr != nil && attr.Variant != "" {
-		name += "-" + attr.Variant
-	}
-	return name + "-" + version
-}
 // extra returns a hardcoded slice of [CMake] and [Ninja].
 func (attr *CMakeHelper) extra(int) P {
 	if attr != nil && attr.Make {
@@ -180,10 +170,8 @@ func (attr *CMakeHelper) script(name string) string {
 	}
 	generate := "Ninja"
-	jobs := ""
 	if attr.Make {
 		generate = "'Unix Makefiles'"
-		jobs += ` "--parallel=$(nproc)"`
 	}
 	return `
@@ -201,7 +189,7 @@ cmake -G ` + generate + ` \
 	}), " \\\n\t") + ` \
 	-DCMAKE_INSTALL_PREFIX=/system \
 	'/usr/src/` + name + `/` + filepath.Join(attr.Append...) + `'
-cmake --build .` + jobs + `
+cmake --build . --parallel=` + jobsE + `
 cmake --install . --prefix=/work/system
 ` + attr.Script
}


@@ -7,10 +7,10 @@ func (t Toolchain) newConnman() (pkg.Artifact, string) {
 		version = "2.0"
 		checksum = "MhVTdJOhndnZn2SWd8URKo_Pj7Zvc14tntEbrVOf9L3yVWJvpb3v3Q6104tWJgtW"
 	)
-	return t.NewPackage("connman", version, pkg.NewHTTPGetTar(
-		nil, "https://git.kernel.org/pub/scm/network/connman/connman.git/"+
+	return t.NewPackage("connman", version, newTar(
+		"https://git.kernel.org/pub/scm/network/connman/connman.git/"+
 			"snapshot/connman-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Patches: []KV{
