107 Commits

Author SHA1 Message Date
b208af8b85 release: 0.3.7
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-15 21:04:55 +09:00
8d650c0c8f all: migrate to rosa/hakurei
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-15 20:12:51 +09:00
a720efc32d internal/rosa/llvm: arch-specific versions
This makes it possible to temporarily avoid a broken release on specific targets.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-15 15:06:36 +09:00
400540cd41 internal/rosa/llvm: arch-specific patches
The broken aarch64 tests in LLVM seem unlikely to be fixed soon.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-15 11:37:24 +09:00
1113efa5c2 internal/rosa/kernel: enable arm64 block drivers
These are added separately from the amd64 patch because the arm64 toolchain was not available at the time.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-15 00:22:05 +09:00
8b875f865c cmd/earlyinit: remount root and set firmware path
The default search paths cannot be configured, so setting the firmware path here is the soundest approach for now.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-14 19:50:04 +09:00
8905d653ba cmd/earlyinit: mount pseudo-filesystems
The proposal for merging both init programs was unanimously accepted, so this is set up here alongside devtmpfs.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-14 19:43:42 +09:00
9c2fb6246f internal/rosa/kernel: enable FW_LOADER
This needs to be loaded early, so shipping it as a dlkm is not helpful: it would always end up loaded anyway.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-14 19:32:14 +09:00
9c116acec6 internal/rosa/kernel: enable amd64 block drivers
These would have to be built into the initramfs anyway, so build them into the kernel instead. The arm64 toolchain is not yet ready, so its configuration will be updated in a later patch.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-14 19:22:56 +09:00
988239a2bc internal/rosa: basic system image
This is a simple image for debugging and is not yet set up for dm-verity.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-14 15:54:13 +09:00
bc03118142 cmd/earlyinit: handle args from cmdline
These are set by the bootloader.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-14 15:13:52 +09:00
74c213264a internal/rosa/git: install libexec symlinks
This is less clumsy to represent.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 20:43:23 +09:00
345cffddc2 cmd/mbf: optionally export output
This is for debugging for now, as no program consumes this format yet.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 19:53:55 +09:00
49163758c8 internal/rosa/llvm: 22.1.0 to 22.1.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 16:08:49 +09:00
ad22c15fb1 internal/rosa/perl: 5.42.0 to 5.42.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 16:08:24 +09:00
9c774f7e0a internal/rosa/python: setuptools 82.0.0 to 82.0.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 15:32:00 +09:00
707f0a349f internal/rosa/gtk: glib 2.87.3 to 2.87.5
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 15:26:42 +09:00
7c35be066a internal/rosa/tamago: 1.26.0 to 1.26.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 15:23:29 +09:00
f91d55fa5e internal/rosa/curl: 8.18.0 to 8.19.0
The test suite now depends on Python to run mock servers. SMB is disabled because it is completely unused and pulls in a Python dependency for tests. A broken test is fixed; the patch will hopefully be upstreamed before the next release.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 15:23:07 +09:00
5862cc1966 internal/rosa/kernel: firmware 20260221 to 20260309
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 14:06:21 +09:00
b3f0360a05 internal/rosa: populate runtime dependencies
This also removes manually resolved indirect dependencies.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 13:23:30 +09:00
8938994036 cmd/mbf: display runtime dependency info
This only presents top-level dependencies; resolving indirect dependencies can be misleading in this context.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 10:46:37 +09:00
96d382f805 cmd/mbf: resolve runtime dependencies
This also adds the collection meta-artifact for concurrent curing.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 10:41:22 +09:00
5c785c135c internal/rosa: collection meta-artifact
This is a stub FloodArtifact for concurrently curing multiple artifacts.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 10:34:45 +09:00
0130f8ea6d internal/rosa: represent runtime dependencies
This also resolves indirect dependencies, reducing noise.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-13 10:31:14 +09:00
faac5c4a83 internal/rosa: store artifact results in struct
This is cleaner and makes adding additional values easier.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-12 18:08:41 +09:00
620062cca9 hst: expose scheduling priority
This is useful when limits are configured to allow it.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-12 02:15:14 +09:00
196b200d0f container: expose priority and SCHED_OTHER policy
The more explicit API removes the arbitrary limit that prevented use of SCHED_OTHER (referred to as SCHED_NORMAL in the kernel). This change also exposes the priority value to set.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-12 01:14:03 +09:00
04e6bc3c5c hst: expose scheduling policy
This is primarily useful for poorly written music players for now.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-12 00:52:18 +09:00
5c540f90aa internal/outcome: improve doc comments
This improves readability on smaller displays.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-11 21:04:02 +09:00
1e8ac5f68e container: use policy name in log message
This is more helpful than having the user resolve the integer.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-11 20:20:34 +09:00
fd515badff container: move scheduler policy constants to std
This avoids depending on cgo.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-11 20:03:08 +09:00
330a344845 hst: improve doc comments
These now read a lot better both in source and on pkgsite.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-11 19:21:55 +09:00
48cdf8bf85 go: 1.26
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-10 03:29:19 +09:00
7fb42ba49d internal/rosa/llvm: set LLVM_LIT_ARGS
This replaces the progress bar, which was worse than useless.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-10 02:05:11 +09:00
19a2737148 container: sched policy string representation
This also uses the priority obtained via sched_get_priority_min and improves bounds checking.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-09 18:38:31 +09:00
baf2def9cc internal/rosa/kmod: prefix moduledir
This change also works around the kernel build system being unaware of this option.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-09 16:40:55 +09:00
242e042cb9 internal/rosa/nss: rename from ssl
The SSL name dates from earlier in development and is counterintuitive.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-09 14:58:31 +09:00
6988c9c4db internal/rosa: firmware artifact
Required for generic hardware.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 22:50:36 +09:00
d6e0ed8c76 internal/rosa/python: various pypi artifacts
These are dependencies of pre-commit.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 22:25:16 +09:00
53be3309c5 internal/rosa: rdfind artifact
Required by linux firmware.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 20:26:15 +09:00
644dd18a52 internal/rosa: nettle artifact
Required by rdfind, which is required by linux firmware.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 20:22:09 +09:00
27c6f976df internal/rosa/gnu: parallel artifact
Used by linux firmware.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 19:56:40 +09:00
279a973633 internal/rosa: build independent earlyinit
This avoids unnecessarily rebuilding hakurei during development.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 18:29:04 +09:00
9c1b522689 internal/rosa/hakurei: optional hostname tool
This makes it more efficient to reuse the helper for partial builds.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 18:26:03 +09:00
5c8cd46c02 internal/rosa: update arm64 kernel config
This was not feasible during the bump; a viable toolchain is now available.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 03:17:53 +09:00
2dba550a2b internal/rosa/zlib: 1.3.1 to 1.3.2
This also switches to the CMake build system because upstream broke their old build system.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 02:36:59 +09:00
8c64812b34 internal/rosa: add zlib runtime dependency
For transitioning to dynamically linking zlib.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 02:36:09 +09:00
d1423d980d internal/rosa/cmake: bake in CMAKE_INSTALL_LIBDIR
There is never a good reason to set this to anything else, and the default value of lib64 breaks everything. This did not manifest on LLVM (which the CMake helper was initially written for) because it did not use this value.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 01:20:41 +09:00
104da0f66a internal/rosa/cmake: pass correct prefix
This can change the build output, similarly to autotools --prefix and DESTDIR, but was not clearly indicated to do so.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 01:04:02 +09:00
d996d9fbb7 internal/rosa/cmake: pass parallel argument for make
This uses the default value for each build system, which is parallel for ninja but not for make.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 00:55:58 +09:00
469f97ccc1 internal/rosa/gnu: libiconv 1.18 to 1.19
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-08 00:36:38 +09:00
af7a6180a1 internal/rosa/cmake: optionally use makefile
This breaks the dependency loop in zlib.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 22:47:30 +09:00
03b5c0e20a internal/rosa/tamago: populate Anitya project id
This had to wait quite a while due to Microsoft GitHub rate limiting.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 19:37:03 +09:00
6a31fb4fa3 internal/rosa: hakurei 0.3.5 to 0.3.6
This also removes the backport patch.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 18:53:48 +09:00
bae45363bc release: 0.3.6
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 16:32:04 +09:00
2c17d1abe0 cmd/mbf: create report with reasonable perm
Making it inaccessible certainly is not reasonable.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 16:16:47 +09:00
0aa459d1a9 cmd/mbf: check for updates concurrently
Runs much faster this way.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 16:05:16 +09:00
00053e6287 internal/rosa: set User-Agent for Anitya requests
This is cleaner than using the default string.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 16:03:06 +09:00
3a0c020150 internal/rosa/gnu: coreutils 9.9 to 9.10
This breaks two tests; one is fixed and the other disabled. Additionally, two tests that have since been fixed are re-enabled.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 14:31:03 +09:00
78655f159e internal/rosa/ncurses: use stable Anitya project
The Alpine mapping points to ncurses~devel for some reason.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 13:43:38 +09:00
30bb52e380 internal/rosa/x: libXau 1.0.7 to 1.0.12
This also switches to individual releases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 13:39:48 +09:00
66197ebdb2 internal/rosa/x: xproto 7.0.23 to 7.0.31
This also switches to individual releases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 13:39:23 +09:00
f7a2744025 internal/rosa/x: util-macros 1.17 to 1.20.2
This also switches to individual releases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 13:38:54 +09:00
f16b7bfaf0 internal/rosa: do not keep underlying file
No operation requires further filesystem interaction for now.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 13:04:06 +09:00
6228cda7ad cmd/mbf: optionally read report in info
This is a useful frontend for the report files until the web server is ready.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 02:26:35 +09:00
86c336de88 cmd/mbf: cure status report command
This emits a report stream for the opened cache into the specified file.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 02:20:40 +09:00
ba5d882ef2 internal/rosa: stream format for cure report
This is for efficient cure status retrieval by the package website server.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-07 02:18:00 +09:00
1e0d68a29e internal/pkg: move output buffer to reader
This side is the read end of a pipe and buffering reads from it ended up performing better than buffering one half of the TeeReader (which already goes through the kernel page cache anyway).

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 23:39:12 +09:00
80f2367c16 cmd/mbf: merge status and info commands
This is cleaner, and offers better integration with the work-in-progress report file.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 23:20:00 +09:00
5ea4dae4b8 cmd/mbf: info accept multiple names
This also improves formatting for use with multiple info blocks.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 23:10:43 +09:00
eb1a3918a8 internal/rosa/gnu: texinfo 7.2 to 7.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 22:09:00 +09:00
349011a5e6 internal/rosa/perl: compile dynamic libperl
Required by texinfo 7.3.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 22:08:38 +09:00
861249751a internal/rosa/openssl: 3.5.5 to 3.6.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 21:39:52 +09:00
e3445c2a7e internal/rosa/libffi: 3.4.5 to 3.5.2
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 21:39:25 +09:00
7315e64a8a internal/rosa/ssl: nss 3.120 to 3.121
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 21:38:41 +09:00
7d74454f6d internal/rosa/python: 3.14.2 to 3.14.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 21:38:17 +09:00
96956c849a internal/rosa/gnu: gawk 5.3.2 to 5.4.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 21:30:37 +09:00
aabdcbba1c internal/rosa/gnu: m4 1.4.20 to 1.4.21
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 21:22:33 +09:00
38cc4a6429 internal/rosa/openssl: check stable versions
The project has a number of strange, malformed tags.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 19:22:41 +09:00
27ef7f81fa internal/rosa/perl: check stable versions
This uses odd-even versioning.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 19:16:07 +09:00
f7888074b9 internal/rosa/util-linux: check stable versions
Anitya appears to get confused when seeing release candidates.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 19:15:16 +09:00
95ffe0429c internal/rosa: overridable version check
For projects with strange versioning practices.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 19:13:55 +09:00
16d0cf04c1 internal/rosa/python: setuptools 80.10.1 to 82.0.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 18:40:55 +09:00
6a2b32b48c internal/rosa/libxml2: 2.15.1 to 2.15.2
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 18:36:06 +09:00
c1472fc54d internal/rosa/wayland: 1.24.0 to 1.24.91
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 18:33:26 +09:00
179cf07e48 internal/rosa/git: 2.52.0 to 2.53.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 18:32:41 +09:00
c2d2795e2b internal/rosa/libexpat: 2.7.3 to 2.7.4
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 18:22:39 +09:00
2c1d7edd7a internal/rosa/squashfs: 4.7.4 to 4.7.5
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 17:47:23 +09:00
1ee8d09223 internal/rosa/pcre2: 10.43 to 10.47
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 17:46:59 +09:00
7f01cb3d59 internal/rosa/gtk: glib 2.86.4 to 2.87.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 17:46:32 +09:00
65ae4f57c2 internal/rosa/go: 1.26.0 to 1.26.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 17:46:05 +09:00
77110601cc internal/rosa/gnu: binutils 2.45 to 2.46.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 17:45:10 +09:00
c5b1949430 internal/rosa/kernel: backport AMD display patches
These reduce stack usage in dml30_ModeSupportAndSystemConfigurationFull enough to fix compile on clang 22.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 16:22:20 +09:00
17805cdfa8 internal/rosa/kernel: 6.12.73 to 6.12.76
The toolchain is broken on arm64 at the moment, so the configuration is not updated.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-06 15:01:01 +09:00
9c9befb4c9 internal/rosa/llvm: separate major version
For pathname formatting at compile time.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 22:59:51 +09:00
fcdf9ecee4 internal/rosa/llvm: 21.1.8 to 22.1.0
The new patch should not be affected next time.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 22:42:27 +09:00
fbd97b658f cmd/mbf: display metadata
For viewing package metadata before the website is ready.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 22:11:26 +09:00
c93725ac58 internal/rosa: prefix python constants
These have confusing names.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 21:37:06 +09:00
f14ab80253 internal/rosa: populate Anitya project ids
This enables release monitoring for all applicable projects.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 21:32:15 +09:00
9989881dd9 internal/rosa/llvm: populate metadata
This enables use of release monitoring for LLVM.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 21:27:33 +09:00
a36b3ece16 internal/rosa: release monitoring via Anitya
This is much more sustainable than manual package flagging.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 20:57:05 +09:00
75970a5650 internal/rosa: check name uniqueness
This should prevent adding packages with nonunique names.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 18:37:55 +09:00
572c99825d Revert "internal/rosa/zlib: 1.3.1 to 1.3.2"
The bump broke elfutils build.

This reverts commit 0eb2bfa12e.
2026-03-05 17:06:15 +09:00
ebdf9dcecc cmd/mbf: preset status command
This exposes the new OpenStatus cache method.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 16:59:47 +09:00
8ea2a56d5b internal/pkg: expose status file
This is useful for external tooling.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 16:58:52 +09:00
159a45c027 internal/rosa: export preset bounds
These are useful for external tooling.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-03-05 16:34:25 +09:00
112 changed files with 3983 additions and 732 deletions

View File

@@ -1,5 +1,5 @@
 <p align="center">
-	<a href="https://git.gensokyo.uk/security/hakurei">
+	<a href="https://git.gensokyo.uk/rosa/hakurei">
 	<picture>
 	<img src="https://basement.gensokyo.uk/images/yukari1.png" width="200px" alt="Yukari">
 	</picture>
@@ -8,16 +8,16 @@
 <p align="center">
 	<a href="https://pkg.go.dev/hakurei.app"><img src="https://pkg.go.dev/badge/hakurei.app.svg" alt="Go Reference" /></a>
-	<a href="https://git.gensokyo.uk/security/hakurei/actions"><img src="https://git.gensokyo.uk/security/hakurei/actions/workflows/test.yml/badge.svg?branch=staging&style=flat-square" alt="Gitea Workflow Status" /></a>
+	<a href="https://git.gensokyo.uk/rosa/hakurei/actions"><img src="https://git.gensokyo.uk/rosa/hakurei/actions/workflows/test.yml/badge.svg?branch=staging&style=flat-square" alt="Gitea Workflow Status" /></a>
 	<br/>
-	<a href="https://git.gensokyo.uk/security/hakurei/releases"><img src="https://img.shields.io/gitea/v/release/security/hakurei?gitea_url=https%3A%2F%2Fgit.gensokyo.uk&color=purple" alt="Release" /></a>
+	<a href="https://git.gensokyo.uk/rosa/hakurei/releases"><img src="https://img.shields.io/gitea/v/release/rosa/hakurei?gitea_url=https%3A%2F%2Fgit.gensokyo.uk&color=purple" alt="Release" /></a>
 	<a href="https://goreportcard.com/report/hakurei.app"><img src="https://goreportcard.com/badge/hakurei.app" alt="Go Report Card" /></a>
 	<a href="https://hakurei.app"><img src="https://img.shields.io/website?url=https%3A%2F%2Fhakurei.app" alt="Website" /></a>
 </p>
 Hakurei is a tool for running sandboxed desktop applications as dedicated
 subordinate users on the Linux kernel. It implements the application container
-of [planterette (WIP)](https://git.gensokyo.uk/security/planterette), a
+of [planterette (WIP)](https://git.gensokyo.uk/rosa/planterette), a
 self-contained Android-like package manager with modern security features.
 Interaction with hakurei happens entirely through structures described by
@@ -62,4 +62,4 @@ are very likely to be rejected.
 ## NixOS Module (deprecated)
 The NixOS module is in maintenance mode and will be removed once planterette is
 feature-complete. Full module documentation can be found [here](options.md).

View File

@@ -4,6 +4,7 @@ import (
 	"log"
 	"os"
 	"runtime"
+	"strings"
 	. "syscall"
 )
@@ -12,6 +13,22 @@ func main() {
 	log.SetFlags(0)
 	log.SetPrefix("earlyinit: ")
+	var (
+		option map[string]string
+		flags  []string
+	)
+	if len(os.Args) > 1 {
+		option = make(map[string]string)
+		for _, s := range os.Args[1:] {
+			key, value, ok := strings.Cut(s, "=")
+			if !ok {
+				flags = append(flags, s)
+				continue
+			}
+			option[key] = value
+		}
+	}
 	if err := Mount(
 		"devtmpfs",
 		"/dev/",
@@ -55,4 +72,56 @@ func main() {
 		}
 	}
+
+	// staying in rootfs, these are no longer used
+	must(os.Remove("/root"))
+	must(os.Remove("/init"))
+
+	must(os.Mkdir("/proc", 0))
+	mustSyscall("mount proc", Mount(
+		"proc",
+		"/proc",
+		"proc",
+		MS_NOSUID|MS_NOEXEC|MS_NODEV,
+		"hidepid=1",
+	))
+	must(os.Mkdir("/sys", 0))
+	mustSyscall("mount sysfs", Mount(
+		"sysfs",
+		"/sys",
+		"sysfs",
+		0,
+		"",
+	))
+
+	// after top level has been set up
+	mustSyscall("remount root", Mount(
+		"",
+		"/",
+		"",
+		MS_REMOUNT|MS_BIND|
+			MS_RDONLY|MS_NODEV|MS_NOSUID|MS_NOEXEC,
+		"",
+	))
+
+	must(os.WriteFile(
+		"/sys/module/firmware_class/parameters/path",
+		[]byte("/system/lib/firmware"),
+		0,
+	))
+}
+
+// mustSyscall calls [log.Fatalln] if err is non-nil.
+func mustSyscall(action string, err error) {
+	if err != nil {
+		log.Fatalln("cannot "+action+":", err)
+	}
+}
+
+// must calls [log.Fatal] with err if it is non-nil.
+func must(err error) {
+	if err != nil {
+		log.Fatal(err)
+	}
 }

View File

@@ -16,6 +16,7 @@ import (
 	"hakurei.app/command"
 	"hakurei.app/container/check"
 	"hakurei.app/container/fhs"
+	"hakurei.app/container/std"
 	"hakurei.app/hst"
 	"hakurei.app/internal/dbus"
 	"hakurei.app/internal/env"
@@ -89,6 +90,9 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 		flagHomeDir  string
 		flagUserName string
+
+		flagSchedPolicy   string
+		flagSchedPriority int
 		flagPrivateRuntime, flagPrivateTmpdir bool
 		flagWayland, flagX11, flagDBus, flagPipeWire, flagPulse bool
@@ -131,7 +135,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			log.Fatal(optionalErrorUnwrap(err))
 			return err
 		} else if progPath, err = check.NewAbs(p); err != nil {
-			log.Fatal(err.Error())
+			log.Fatal(err)
 			return err
 		}
 	}
@@ -150,7 +154,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			et |= hst.EPipeWire
 		}
-		config := &hst.Config{
+		config := hst.Config{
 			ID:       flagID,
 			Identity: flagIdentity,
 			Groups:   flagGroups,
@@ -177,6 +181,13 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			},
 		}
+
+		if err := config.SchedPolicy.UnmarshalText(
+			[]byte(flagSchedPolicy),
+		); err != nil {
+			log.Fatal(err)
+		}
+		config.SchedPriority = std.Int(flagSchedPriority)
+
 		// bind GPU stuff
 		if et&(hst.EX11|hst.EWayland) != 0 {
 			config.Container.Filesystem = append(config.Container.Filesystem, hst.FilesystemConfigJSON{FilesystemConfig: &hst.FSBind{
@@ -214,7 +225,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			homeDir = passwd.HomeDir
 		}
 		if a, err := check.NewAbs(homeDir); err != nil {
-			log.Fatal(err.Error())
+			log.Fatal(err)
 			return err
 		} else {
 			config.Container.Home = a
@@ -234,11 +245,11 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			config.SessionBus = dbus.NewConfig(flagID, true, flagDBusMpris)
 		} else {
 			if f, err := os.Open(flagDBusConfigSession); err != nil {
-				log.Fatal(err.Error())
+				log.Fatal(err)
 			} else {
 				decodeJSON(log.Fatal, "load session bus proxy config", f, &config.SessionBus)
 				if err = f.Close(); err != nil {
-					log.Fatal(err.Error())
+					log.Fatal(err)
 				}
 			}
 		}
@@ -246,11 +257,11 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 		// system bus proxy is optional
 		if flagDBusConfigSystem != "nil" {
 			if f, err := os.Open(flagDBusConfigSystem); err != nil {
-				log.Fatal(err.Error())
+				log.Fatal(err)
 			} else {
 				decodeJSON(log.Fatal, "load system bus proxy config", f, &config.SystemBus)
 				if err = f.Close(); err != nil {
-					log.Fatal(err.Error())
+					log.Fatal(err)
 				}
 			}
 		}
@@ -266,7 +277,7 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			}
 		}
-		outcome.Main(ctx, msg, config, -1)
+		outcome.Main(ctx, msg, &config, -1)
 		panic("unreachable")
 	}).
 		Flag(&flagDBusConfigSession, "dbus-config", command.StringFlag("builtin"),
@@ -287,6 +298,10 @@ func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErr
 			"Container home directory").
 		Flag(&flagUserName, "u", command.StringFlag("chronos"),
 			"Passwd user name within sandbox").
+		Flag(&flagSchedPolicy, "policy", command.StringFlag(""),
+			"Scheduling policy to set for the container").
+		Flag(&flagSchedPriority, "priority", command.IntFlag(0),
+			"Scheduling priority to set for the container").
 		Flag(&flagPrivateRuntime, "private-runtime", command.BoolFlag(false),
 			"Do not share XDG_RUNTIME_DIR between containers under the same identity").
 		Flag(&flagPrivateTmpdir, "private-tmpdir", command.BoolFlag(false),

View File

@@ -36,7 +36,7 @@ Commands:
 	},
 	{
 		"run", []string{"run", "-h"}, `
-Usage: hakurei run [-h | --help] [--dbus-config <value>] [--dbus-system <value>] [--mpris] [--dbus-log] [--id <value>] [-a <int>] [-g <value>] [-d <value>] [-u <value>] [--private-runtime] [--private-tmpdir] [--wayland] [-X] [--dbus] [--pipewire] [--pulse] COMMAND [OPTIONS]
+Usage: hakurei run [-h | --help] [--dbus-config <value>] [--dbus-system <value>] [--mpris] [--dbus-log] [--id <value>] [-a <int>] [-g <value>] [-d <value>] [-u <value>] [--policy <value>] [--priority <int>] [--private-runtime] [--private-tmpdir] [--wayland] [-X] [--dbus] [--pipewire] [--pulse] COMMAND [OPTIONS]
 
 Flags:
 	-X	Enable direct connection to X11
@@ -60,6 +60,10 @@ Flags:
 	Allow owning MPRIS D-Bus path, has no effect if custom config is available
 	-pipewire
 	Enable connection to PipeWire via SecurityContext
+	-policy string
+	Scheduling policy to set for the container
+	-priority int
+	Scheduling priority to set for the container
 	-private-runtime
 	Do not share XDG_RUNTIME_DIR between containers under the same identity
 	-private-tmpdir

View File

@@ -4,11 +4,16 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"io"
"log" "log"
"os" "os"
"os/signal" "os/signal"
"path/filepath" "path/filepath"
"runtime" "runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
"syscall" "syscall"
"time" "time"
"unique" "unique"
@@ -82,7 +87,7 @@ func main() {
} }
if flagIdle { if flagIdle {
pkg.SchedPolicy = container.SCHED_IDLE pkg.SetSchedIdle = true
} }
return return
@@ -128,6 +133,218 @@ func main() {
) )
} }
{
var (
flagStatus bool
flagReport string
)
c.NewCommand(
"info",
"Display out-of-band metadata of an artifact",
func(args []string) (err error) {
if len(args) == 0 {
return errors.New("info requires at least 1 argument")
}
var r *rosa.Report
if flagReport != "" {
if r, err = rosa.OpenReport(flagReport); err != nil {
return err
}
defer func() {
if closeErr := r.Close(); err == nil {
err = closeErr
}
}()
defer r.HandleAccess(&err)()
}
for i, name := range args {
if p, ok := rosa.ResolveName(name); !ok {
return fmt.Errorf("unknown artifact %q", name)
} else {
var suffix string
if version := rosa.Std.Version(p); version != rosa.Unversioned {
suffix += "-" + version
}
fmt.Println("name : " + name + suffix)
meta := rosa.GetMetadata(p)
fmt.Println("description : " + meta.Description)
if meta.Website != "" {
fmt.Println("website : " +
strings.TrimSuffix(meta.Website, "/"))
}
if len(meta.Dependencies) > 0 {
fmt.Print("depends on :")
for _, d := range meta.Dependencies {
s := rosa.GetMetadata(d).Name
if version := rosa.Std.Version(d); version != rosa.Unversioned {
s += "-" + version
}
fmt.Print(" " + s)
}
fmt.Println()
}
const statusPrefix = "status : "
if flagStatus {
if r == nil {
var f io.ReadSeekCloser
f, err = cache.OpenStatus(rosa.Std.Load(p))
if err != nil {
if errors.Is(err, os.ErrNotExist) {
fmt.Println(
statusPrefix + "not yet cured",
)
} else {
return
}
} else {
fmt.Print(statusPrefix)
_, err = io.Copy(os.Stdout, f)
if err = errors.Join(err, f.Close()); err != nil {
return
}
}
} else {
status, n := r.ArtifactOf(cache.Ident(rosa.Std.Load(p)))
if status == nil {
fmt.Println(
statusPrefix + "not in report",
)
} else {
fmt.Println("size :", n)
fmt.Print(statusPrefix)
if _, err = os.Stdout.Write(status); err != nil {
return
}
}
}
}
if i != len(args)-1 {
fmt.Println()
}
}
}
return nil
},
).
Flag(
&flagStatus,
"status", command.BoolFlag(false),
"Display cure status if available",
).
Flag(
&flagReport,
"report", command.StringFlag(""),
"Load cure status from this report file instead of cache",
)
}
c.NewCommand(
"report",
"Generate an artifact cure report for the current cache",
func(args []string) (err error) {
var w *os.File
switch len(args) {
case 0:
w = os.Stdout
case 1:
if w, err = os.OpenFile(
args[0],
os.O_CREATE|os.O_EXCL|syscall.O_WRONLY,
0400,
); err != nil {
return
}
defer func() {
closeErr := w.Close()
if err == nil {
err = closeErr
}
}()
default:
				return errors.New("report requires at most 1 argument")
}
if container.Isatty(int(w.Fd())) {
return errors.New("output appears to be a terminal")
}
return rosa.WriteReport(msg, w, cache)
},
)
{
var flagJobs int
c.NewCommand("updates", command.UsageInternal, func([]string) error {
var (
errsMu sync.Mutex
errs []error
n atomic.Uint64
)
w := make(chan rosa.PArtifact)
var wg sync.WaitGroup
for range max(flagJobs, 1) {
wg.Go(func() {
for p := range w {
meta := rosa.GetMetadata(p)
if meta.ID == 0 {
continue
}
v, err := meta.GetVersions(ctx)
if err != nil {
errsMu.Lock()
errs = append(errs, err)
errsMu.Unlock()
continue
}
if current, latest :=
rosa.Std.Version(p),
meta.GetLatest(v); current != latest {
n.Add(1)
log.Printf("%s %s < %s", meta.Name, current, latest)
continue
}
msg.Verbosef("%s is up to date", meta.Name)
}
})
}
done:
for i := range rosa.PresetEnd {
select {
case w <- rosa.PArtifact(i):
break
case <-ctx.Done():
break done
}
}
close(w)
wg.Wait()
if v := n.Load(); v > 0 {
errs = append(errs, errors.New(strconv.Itoa(int(v))+
" package(s) are out of date"))
}
return errors.Join(errs...)
}).
Flag(
&flagJobs,
"j", command.IntFlag(32),
"Maximum number of simultaneous connections",
)
}
{ {
var ( var (
flagGentoo string flagGentoo string
@@ -217,7 +434,8 @@ func main() {
{ {
var ( var (
flagDump string flagDump string
flagExport string
) )
c.NewCommand( c.NewCommand(
"cure", "cure",
@@ -230,10 +448,34 @@ func main() {
return fmt.Errorf("unknown artifact %q", args[0]) return fmt.Errorf("unknown artifact %q", args[0])
} else if flagDump == "" { } else if flagDump == "" {
pathname, _, err := cache.Cure(rosa.Std.Load(p)) pathname, _, err := cache.Cure(rosa.Std.Load(p))
if err == nil { if err != nil {
log.Println(pathname) return err
} }
return err log.Println(pathname)
if flagExport != "" {
msg.Verbosef("exporting %s to %s...", args[0], flagExport)
var f *os.File
if f, err = os.OpenFile(
flagExport,
os.O_WRONLY|os.O_CREATE|os.O_EXCL,
0400,
); err != nil {
return err
} else if _, err = pkg.Flatten(
os.DirFS(pathname.String()),
".",
f,
); err != nil {
_ = f.Close()
return err
} else if err = f.Close(); err != nil {
return err
}
}
return nil
} else { } else {
f, err := os.OpenFile( f, err := os.OpenFile(
flagDump, flagDump,
@@ -257,6 +499,11 @@ func main() {
&flagDump, &flagDump,
"dump", command.StringFlag(""), "dump", command.StringFlag(""),
"Write IR to specified pathname and terminate", "Write IR to specified pathname and terminate",
).
Flag(
&flagExport,
"export", command.StringFlag(""),
"Export cured artifact to specified pathname",
) )
} }
@@ -271,17 +518,19 @@ func main() {
"shell", "shell",
"Interactive shell in the specified Rosa OS environment", "Interactive shell in the specified Rosa OS environment",
func(args []string) error { func(args []string) error {
root := make([]pkg.Artifact, 0, 6+len(args)) presets := make([]rosa.PArtifact, len(args))
for _, arg := range args { for i, arg := range args {
p, ok := rosa.ResolveName(arg) p, ok := rosa.ResolveName(arg)
if !ok { if !ok {
return fmt.Errorf("unknown artifact %q", arg) return fmt.Errorf("unknown artifact %q", arg)
} }
root = append(root, rosa.Std.Load(p)) presets[i] = p
} }
root := make(rosa.Collect, 0, 6+len(args))
root = rosa.Std.AppendPresets(root, presets...)
if flagWithToolchain { if flagWithToolchain {
musl, compilerRT, runtimes, clang := rosa.Std.NewLLVM() musl, compilerRT, runtimes, clang := (rosa.Std - 1).NewLLVM()
root = append(root, musl, compilerRT, runtimes, clang) root = append(root, musl, compilerRT, runtimes, clang)
} else { } else {
root = append(root, rosa.Std.Load(rosa.Musl)) root = append(root, rosa.Std.Load(rosa.Musl))
@@ -291,6 +540,12 @@ func main() {
rosa.Std.Load(rosa.Toybox), rosa.Std.Load(rosa.Toybox),
) )
if _, _, err := cache.Cure(&root); err == nil {
return errors.New("unreachable")
} else if !errors.Is(err, rosa.Collected{}) {
return err
}
type cureRes struct { type cureRes struct {
pathname *check.Absolute pathname *check.Absolute
checksum unique.Handle[pkg.Checksum] checksum unique.Handle[pkg.Checksum]
@@ -401,6 +656,16 @@ func main() {
if cache != nil { if cache != nil {
cache.Close() cache.Close()
} }
log.Fatal(err) if w, ok := err.(interface{ Unwrap() []error }); !ok {
log.Fatal(err)
} else {
errs := w.Unwrap()
for i, e := range errs {
if i == len(errs)-1 {
log.Fatal(e)
}
log.Println(e)
}
}
}) })
} }


@@ -38,9 +38,13 @@ type (
Container struct { Container struct {
// Whether the container init should stay alive after its parent terminates. // Whether the container init should stay alive after its parent terminates.
AllowOrphan bool AllowOrphan bool
// Scheduling policy to set via sched_setscheduler(2). The zero value // Whether to set SchedPolicy and SchedPriority via sched_setscheduler(2).
// skips this call. Supported policies are [SCHED_BATCH], [SCHED_IDLE]. SetScheduler bool
SchedPolicy int // Scheduling policy to set via sched_setscheduler(2).
SchedPolicy std.SchedPolicy
// Scheduling priority to set via sched_setscheduler(2). The zero value
// implies the minimum value supported by the current SchedPolicy.
SchedPriority std.Int
// Cgroup fd, nil to disable. // Cgroup fd, nil to disable.
Cgroup *int Cgroup *int
// ExtraFiles passed through to initial process in the container, with // ExtraFiles passed through to initial process in the container, with
@@ -373,16 +377,38 @@ func (p *Container) Start() error {
// sched_setscheduler: thread-directed but acts on all processes // sched_setscheduler: thread-directed but acts on all processes
// created from the calling thread // created from the calling thread
if p.SchedPolicy > 0 { if p.SetScheduler {
p.msg.Verbosef("setting scheduling policy %d", p.SchedPolicy) if p.SchedPolicy < 0 || p.SchedPolicy > std.SCHED_LAST {
return &StartError{
Fatal: false,
Step: "set scheduling policy",
Err: EINVAL,
}
}
var param schedParam
if priority, err := p.SchedPolicy.GetPriorityMin(); err != nil {
return &StartError{
Fatal: true,
Step: "get minimum priority",
Err: err,
}
} else {
param.priority = max(priority, p.SchedPriority)
}
p.msg.Verbosef(
"setting scheduling policy %s priority %d",
p.SchedPolicy, param.priority,
)
if err := schedSetscheduler( if err := schedSetscheduler(
0, // calling thread 0, // calling thread
p.SchedPolicy, p.SchedPolicy,
&schedParam{0}, &param,
); err != nil { ); err != nil {
return &StartError{ return &StartError{
Fatal: true, Fatal: true,
Step: "enforce landlock ruleset", Step: "set scheduling policy",
Err: err, Err: err,
} }
} }


@@ -1,6 +1,12 @@
package std package std
import "iter" import (
"encoding"
"iter"
"strconv"
"sync"
"syscall"
)
// Syscalls returns an iterator over all wired syscalls. // Syscalls returns an iterator over all wired syscalls.
func Syscalls() iter.Seq2[string, ScmpSyscall] { func Syscalls() iter.Seq2[string, ScmpSyscall] {
@@ -26,3 +32,128 @@ func SyscallResolveName(name string) (num ScmpSyscall, ok bool) {
num, ok = syscallNumExtra[name] num, ok = syscallNumExtra[name]
return return
} }
// SchedPolicy denotes a scheduling policy defined in include/uapi/linux/sched.h.
type SchedPolicy int
// include/uapi/linux/sched.h
const (
SCHED_NORMAL SchedPolicy = iota
SCHED_FIFO
SCHED_RR
SCHED_BATCH
_SCHED_ISO // SCHED_ISO: reserved but not implemented yet
SCHED_IDLE
SCHED_DEADLINE
SCHED_EXT
SCHED_LAST SchedPolicy = iota - 1
)
var _ encoding.TextMarshaler = SCHED_LAST
var _ encoding.TextUnmarshaler = new(SchedPolicy)
// String returns a unique representation of policy, also used in encoding.
func (policy SchedPolicy) String() string {
switch policy {
case SCHED_NORMAL:
return ""
case SCHED_FIFO:
return "fifo"
case SCHED_RR:
return "rr"
case SCHED_BATCH:
return "batch"
case SCHED_IDLE:
return "idle"
case SCHED_DEADLINE:
return "deadline"
case SCHED_EXT:
return "ext"
default:
return "invalid policy " + strconv.Itoa(int(policy))
}
}
// MarshalText performs bounds checking and returns the result of String.
func (policy SchedPolicy) MarshalText() ([]byte, error) {
if policy == _SCHED_ISO || policy < 0 || policy > SCHED_LAST {
return nil, syscall.EINVAL
}
return []byte(policy.String()), nil
}
// InvalidSchedPolicyError is an invalid string representation of a [SchedPolicy].
type InvalidSchedPolicyError string
func (InvalidSchedPolicyError) Unwrap() error { return syscall.EINVAL }
func (e InvalidSchedPolicyError) Error() string {
return "invalid scheduling policy " + strconv.Quote(string(e))
}
// UnmarshalText is the inverse of MarshalText.
func (policy *SchedPolicy) UnmarshalText(text []byte) error {
switch string(text) {
case "fifo":
*policy = SCHED_FIFO
case "rr":
*policy = SCHED_RR
case "batch":
*policy = SCHED_BATCH
case "idle":
*policy = SCHED_IDLE
case "deadline":
*policy = SCHED_DEADLINE
case "ext":
*policy = SCHED_EXT
case "":
*policy = 0
return nil
default:
return InvalidSchedPolicyError(text)
}
return nil
}
// for sched_get_priority_max and sched_get_priority_min
var (
schedPriority [SCHED_LAST + 1][2]Int
schedPriorityErr [SCHED_LAST + 1][2]error
schedPriorityOnce [SCHED_LAST + 1][2]sync.Once
)
// GetPriorityMax returns the maximum priority value that can be used with the
// scheduling algorithm identified by policy.
func (policy SchedPolicy) GetPriorityMax() (Int, error) {
schedPriorityOnce[policy][0].Do(func() {
priority, _, errno := syscall.Syscall(
syscall.SYS_SCHED_GET_PRIORITY_MAX,
uintptr(policy),
0, 0,
)
schedPriority[policy][0] = Int(priority)
if errno != 0 {
schedPriorityErr[policy][0] = errno
}
})
return schedPriority[policy][0], schedPriorityErr[policy][0]
}
// GetPriorityMin returns the minimum priority value that can be used with the
// scheduling algorithm identified by policy.
func (policy SchedPolicy) GetPriorityMin() (Int, error) {
schedPriorityOnce[policy][1].Do(func() {
priority, _, errno := syscall.Syscall(
syscall.SYS_SCHED_GET_PRIORITY_MIN,
uintptr(policy),
0, 0,
)
schedPriority[policy][1] = Int(priority)
if errno != 0 {
schedPriorityErr[policy][1] = errno
}
})
return schedPriority[policy][1], schedPriorityErr[policy][1]
}


@@ -1,6 +1,11 @@
package std_test package std_test
import ( import (
"encoding/json"
"errors"
"math"
"reflect"
"syscall"
"testing" "testing"
"hakurei.app/container/std" "hakurei.app/container/std"
@@ -19,3 +24,90 @@ func TestSyscallResolveName(t *testing.T) {
}) })
} }
} }
func TestSchedPolicyJSON(t *testing.T) {
t.Parallel()
testCases := []struct {
policy std.SchedPolicy
want string
encodeErr error
decodeErr error
}{
{std.SCHED_NORMAL, `""`, nil, nil},
{std.SCHED_FIFO, `"fifo"`, nil, nil},
{std.SCHED_RR, `"rr"`, nil, nil},
{std.SCHED_BATCH, `"batch"`, nil, nil},
{4, `"invalid policy 4"`, syscall.EINVAL, std.InvalidSchedPolicyError("invalid policy 4")},
{std.SCHED_IDLE, `"idle"`, nil, nil},
{std.SCHED_DEADLINE, `"deadline"`, nil, nil},
{std.SCHED_EXT, `"ext"`, nil, nil},
{math.MaxInt, `"iso"`, syscall.EINVAL, std.InvalidSchedPolicyError("iso")},
}
for _, tc := range testCases {
name := tc.policy.String()
if tc.policy == std.SCHED_NORMAL {
name = "normal"
}
t.Run(name, func(t *testing.T) {
t.Parallel()
got, err := json.Marshal(tc.policy)
if !errors.Is(err, tc.encodeErr) {
t.Fatalf("Marshal: error = %v, want %v", err, tc.encodeErr)
}
if err == nil && string(got) != tc.want {
t.Fatalf("Marshal: %s, want %s", string(got), tc.want)
}
var v std.SchedPolicy
if err = json.Unmarshal([]byte(tc.want), &v); !reflect.DeepEqual(err, tc.decodeErr) {
t.Fatalf("Unmarshal: error = %v, want %v", err, tc.decodeErr)
}
if err == nil && v != tc.policy {
t.Fatalf("Unmarshal: %d, want %d", v, tc.policy)
}
})
}
}
func TestSchedPolicyMinMax(t *testing.T) {
t.Parallel()
testCases := []struct {
policy std.SchedPolicy
min, max std.Int
err error
}{
{std.SCHED_NORMAL, 0, 0, nil},
{std.SCHED_FIFO, 1, 99, nil},
{std.SCHED_RR, 1, 99, nil},
{std.SCHED_BATCH, 0, 0, nil},
{4, -1, -1, syscall.EINVAL},
{std.SCHED_IDLE, 0, 0, nil},
{std.SCHED_DEADLINE, 0, 0, nil},
{std.SCHED_EXT, 0, 0, nil},
}
for _, tc := range testCases {
name := tc.policy.String()
if tc.policy == std.SCHED_NORMAL {
name = "normal"
}
t.Run(name, func(t *testing.T) {
t.Parallel()
if priority, err := tc.policy.GetPriorityMax(); !reflect.DeepEqual(err, tc.err) {
t.Fatalf("GetPriorityMax: error = %v, want %v", err, tc.err)
} else if priority != tc.max {
t.Fatalf("GetPriorityMax: %d, want %d", priority, tc.max)
}
if priority, err := tc.policy.GetPriorityMin(); !reflect.DeepEqual(err, tc.err) {
t.Fatalf("GetPriorityMin: error = %v, want %v", err, tc.err)
} else if priority != tc.min {
t.Fatalf("GetPriorityMin: %d, want %d", priority, tc.min)
}
})
}
}


@@ -43,18 +43,6 @@ func Isatty(fd int) bool {
return r == 0 return r == 0
} }
// include/uapi/linux/sched.h
const (
SCHED_NORMAL = iota
SCHED_FIFO
SCHED_RR
SCHED_BATCH
_ // SCHED_ISO: reserved but not implemented yet
SCHED_IDLE
SCHED_DEADLINE
SCHED_EXT
)
// schedParam is equivalent to struct sched_param from include/linux/sched.h. // schedParam is equivalent to struct sched_param from include/linux/sched.h.
type schedParam struct { type schedParam struct {
// sched_priority // sched_priority
@@ -74,13 +62,13 @@ type schedParam struct {
// this if you do not have something similar in place! // this if you do not have something similar in place!
// //
// [very subtle to use correctly]: https://www.openwall.com/lists/musl/2016/03/01/4 // [very subtle to use correctly]: https://www.openwall.com/lists/musl/2016/03/01/4
func schedSetscheduler(tid, policy int, param *schedParam) error { func schedSetscheduler(tid int, policy std.SchedPolicy, param *schedParam) error {
if r, _, errno := Syscall( if _, _, errno := Syscall(
SYS_SCHED_SETSCHEDULER, SYS_SCHED_SETSCHEDULER,
uintptr(tid), uintptr(tid),
uintptr(policy), uintptr(policy),
uintptr(unsafe.Pointer(param)), uintptr(unsafe.Pointer(param)),
); r < 0 { ); errno != 0 {
return errno return errno
} }
return nil return nil

12
flake.lock generated

@@ -7,11 +7,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1765384171, "lastModified": 1772985280,
"narHash": "sha256-FuFtkJrW1Z7u+3lhzPRau69E0CNjADku1mLQQflUORo=", "narHash": "sha256-FdrNykOoY9VStevU4zjSUdvsL9SzJTcXt4omdEDZDLk=",
"owner": "nix-community", "owner": "nix-community",
"repo": "home-manager", "repo": "home-manager",
"rev": "44777152652bc9eacf8876976fa72cc77ca8b9d8", "rev": "8f736f007139d7f70752657dff6a401a585d6cbc",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -23,11 +23,11 @@
}, },
"nixpkgs": { "nixpkgs": {
"locked": { "locked": {
"lastModified": 1765311797, "lastModified": 1772822230,
"narHash": "sha256-mSD5Ob7a+T2RNjvPvOA1dkJHGVrNVl8ZOrAwBjKBDQo=", "narHash": "sha256-yf3iYLGbGVlIthlQIk5/4/EQDZNNEmuqKZkQssMljuw=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "09eb77e94fa25202af8f3e81ddc7353d9970ac1b", "rev": "71caefce12ba78d84fe618cf61644dce01cf3a96",
"type": "github" "type": "github"
}, },
"original": { "original": {


@@ -99,7 +99,7 @@
hakurei = pkgs.pkgsStatic.callPackage ./package.nix { hakurei = pkgs.pkgsStatic.callPackage ./package.nix {
inherit (pkgs) inherit (pkgs)
# passthru.buildInputs # passthru.buildInputs
go go_1_26
clang clang
# nativeBuildInputs # nativeBuildInputs
@@ -182,7 +182,7 @@
let let
# this is used for interactive vm testing during development, where tests might be broken # this is used for interactive vm testing during development, where tests might be broken
package = self.packages.${pkgs.stdenv.hostPlatform.system}.hakurei.override { package = self.packages.${pkgs.stdenv.hostPlatform.system}.hakurei.override {
buildGoModule = previousArgs: pkgs.pkgsStatic.buildGoModule (previousArgs // { doCheck = false; }); buildGo126Module = previousArgs: pkgs.pkgsStatic.buildGo126Module (previousArgs // { doCheck = false; });
}; };
in in
{ {

2
go.mod

@@ -1,3 +1,3 @@
module hakurei.app module hakurei.app
go 1.25 go 1.26


@@ -6,96 +6,137 @@ import (
"strings" "strings"
"hakurei.app/container/check" "hakurei.app/container/check"
"hakurei.app/container/std"
) )
// Config configures an application container, implemented in internal/app. // Config configures an application container.
type Config struct { type Config struct {
// Reverse-DNS style configured arbitrary identifier string. // Reverse-DNS style configured arbitrary identifier string.
// Passed to wayland security-context-v1 and used as part of defaults in dbus session proxy. //
// This value is passed as is to Wayland security-context-v1 and used as
// part of defaults in D-Bus session proxy. The zero value causes a default
// value to be derived from the container instance.
ID string `json:"id,omitempty"` ID string `json:"id,omitempty"`
// System services to make available in the container. // System services to make available in the container.
Enablements *Enablements `json:"enablements,omitempty"` Enablements *Enablements `json:"enablements,omitempty"`
// Session D-Bus proxy configuration. // Session D-Bus proxy configuration.
// If set to nil, session bus proxy assume built-in defaults. //
	// Has no effect if [EDBus] is not set in Enablements. The zero value
// assumes built-in defaults derived from ID.
SessionBus *BusConfig `json:"session_bus,omitempty"` SessionBus *BusConfig `json:"session_bus,omitempty"`
// System D-Bus proxy configuration. // System D-Bus proxy configuration.
// If set to nil, system bus proxy is disabled. //
	// Has no effect if [EDBus] is not set in Enablements. The zero value
// disables system bus proxy.
SystemBus *BusConfig `json:"system_bus,omitempty"` SystemBus *BusConfig `json:"system_bus,omitempty"`
// Direct access to wayland socket, no attempt is made to attach security-context-v1 // Direct access to Wayland socket, no attempt is made to attach
// and the bare socket is made available to the container. // security-context-v1 and the bare socket is made available to the
// container.
// //
// This option is unsupported and most likely enables full control over the Wayland // This option is unsupported and will most likely enable full control over
// session. Do not set this to true unless you are sure you know what you are doing. // the Wayland session from within the container. Do not set this to true
// unless you are sure you know what you are doing.
DirectWayland bool `json:"direct_wayland,omitempty"` DirectWayland bool `json:"direct_wayland,omitempty"`
// Direct access to the PipeWire socket established via SecurityContext::Create, no
// attempt is made to start the pipewire-pulse server. // Direct access to the PipeWire socket established via SecurityContext::Create,
// no attempt is made to start the pipewire-pulse server.
// //
// The SecurityContext machinery is fatally flawed, it blindly sets read and execute // The SecurityContext machinery is fatally flawed, it unconditionally sets
// bits on all objects for clients with the lowest achievable privilege level (by // read and execute bits on all objects for clients with the lowest achievable
// setting PW_KEY_ACCESS to "restricted"). This enables them to call any method // privilege level (by setting PW_KEY_ACCESS to "restricted" or by satisfying
// targeting any object, and since Registry::Destroy checks for the read and execute bit, // all conditions of [the /.flatpak-info hack]). This enables them to call
// allows the destruction of any object other than PW_ID_CORE as well. This behaviour // any method targeting any object, and since Registry::Destroy checks for
// is implemented separately in media-session and wireplumber, with the wireplumber // the read and execute bit, allows the destruction of any object other than
// implementation in Lua via an embedded Lua vm. In all known setups, wireplumber is // PW_ID_CORE as well.
// in use, and there is no known way to change its behaviour and set permissions
// differently without replacing the Lua script. Also, since PipeWire relies on these
// permissions to work, reducing them is not possible.
// //
// Currently, the only other sandboxed use case is flatpak, which is not aware of // This behaviour is implemented separately in media-session and wireplumber,
// PipeWire and blindly exposes the bare PulseAudio socket to the container (behaves // with the wireplumber implementation in Lua via an embedded Lua vm. In all
// like DirectPulse). This socket is backed by the pipewire-pulse compatibility daemon, // known setups, wireplumber is in use, and in that case, no option for
// which obtains client pid via the SO_PEERCRED option. The PipeWire daemon, pipewire-pulse // configuring this behaviour exists, without replacing the Lua script.
// daemon and the session manager daemon then separately performs the /.flatpak-info hack // Also, since PipeWire relies on these permissions to work, reducing them
// described in https://git.gensokyo.uk/security/hakurei/issues/21. Under such use case, // was never possible in the first place.
// since the client has no direct access to PipeWire, insecure parts of the protocol are
// obscured by pipewire-pulse simply not implementing them, and thus hiding the flaws
// described above.
// //
// Hakurei does not rely on the /.flatpak-info hack. Instead, a socket is sets up via // Currently, the only other sandboxed use case is flatpak, which is not
// SecurityContext. A pipewire-pulse server connected through it achieves the same // aware of PipeWire and blindly exposes the bare PulseAudio socket to the
// permissions as flatpak does via the /.flatpak-info hack and is maintained for the // container (behaves like DirectPulse). This socket is backed by the
// life of the container. // pipewire-pulse compatibility daemon, which obtains client pid via the
// SO_PEERCRED option. The PipeWire daemon, pipewire-pulse daemon and the
	// session manager daemon then separately perform [the /.flatpak-info hack].
// Under such use case, since the client has no direct access to PipeWire,
// insecure parts of the protocol are obscured by the absence of an
// equivalent API in PulseAudio, or pipewire-pulse simply not implementing
// them.
//
// Hakurei does not rely on [the /.flatpak-info hack]. Instead, a socket is
	// set up via SecurityContext. A pipewire-pulse server connected through it
// achieves the same permissions as flatpak does via [the /.flatpak-info hack]
// and is maintained for the life of the container.
//
// This option is unsupported and enables a denial-of-service attack as the
// sandboxed client is able to destroy any client object and thus
	// disconnect them from PipeWire, or destroy the SecurityContext object,
// preventing any further container creation.
// //
// This option is unsupported and enables a denial-of-service attack as the sandboxed
// client is able to destroy any client object and thus disconnecting them from PipeWire,
// or destroy the SecurityContext object preventing any further container creation.
// Do not set this to true, it is insecure under any configuration. // Do not set this to true, it is insecure under any configuration.
DirectPipeWire bool `json:"direct_pipewire,omitempty"`
// Direct access to PulseAudio socket, no attempt is made to establish pipewire-pulse
// server via a PipeWire socket with a SecurityContext attached and the bare socket
// is made available to the container.
// //
// This option is unsupported and enables arbitrary code execution as the PulseAudio // [the /.flatpak-info hack]: https://git.gensokyo.uk/rosa/hakurei/issues/21
// server. Do not set this to true, it is insecure under any configuration. DirectPipeWire bool `json:"direct_pipewire,omitempty"`
// Direct access to PulseAudio socket, no attempt is made to establish
// pipewire-pulse server via a PipeWire socket with a SecurityContext
// attached, and the bare socket is made available to the container.
//
// This option is unsupported and enables arbitrary code execution as the
// PulseAudio server.
//
// Do not set this to true, it is insecure under any configuration.
DirectPulse bool `json:"direct_pulse,omitempty"` DirectPulse bool `json:"direct_pulse,omitempty"`
// Extra acl updates to perform before setuid. // Extra acl updates to perform before setuid.
ExtraPerms []ExtraPermConfig `json:"extra_perms,omitempty"` ExtraPerms []ExtraPermConfig `json:"extra_perms,omitempty"`
// Numerical application id, passed to hsu, used to derive init user namespace credentials. // Numerical application id, passed to hsu, used to derive init user
// namespace credentials.
Identity int `json:"identity"` Identity int `json:"identity"`
// Init user namespace supplementary groups inherited by all container processes. // Init user namespace supplementary groups inherited by all container processes.
Groups []string `json:"groups"` Groups []string `json:"groups"`
// Scheduling policy to set for the container.
//
// The zero value retains the current scheduling policy.
SchedPolicy std.SchedPolicy `json:"sched_policy,omitempty"`
// Scheduling priority to set for the container.
//
// The zero value implies the minimum priority of the current SchedPolicy.
// Has no effect if SchedPolicy is zero.
SchedPriority std.Int `json:"sched_priority,omitempty"`
// High level configuration applied to the underlying [container]. // High level configuration applied to the underlying [container].
Container *ContainerConfig `json:"container"` Container *ContainerConfig `json:"container"`
} }
var ( var (
// ErrConfigNull is returned by [Config.Validate] for an invalid configuration that contains a null value for any // ErrConfigNull is returned by [Config.Validate] for an invalid configuration
// field that must not be null. // that contains a null value for any field that must not be null.
ErrConfigNull = errors.New("unexpected null in config") ErrConfigNull = errors.New("unexpected null in config")
// ErrIdentityBounds is returned by [Config.Validate] for an out of bounds [Config.Identity] value. // ErrIdentityBounds is returned by [Config.Validate] for an out of bounds
// [Config.Identity] value.
ErrIdentityBounds = errors.New("identity out of bounds") ErrIdentityBounds = errors.New("identity out of bounds")
// ErrEnviron is returned by [Config.Validate] if an environment variable name contains '=' or NUL. // ErrSchedPolicyBounds is returned by [Config.Validate] for an out of bounds
// [Config.SchedPolicy] value.
ErrSchedPolicyBounds = errors.New("scheduling policy out of bounds")
// ErrEnviron is returned by [Config.Validate] if an environment variable
// name contains '=' or NUL.
ErrEnviron = errors.New("invalid environment variable name") ErrEnviron = errors.New("invalid environment variable name")
// ErrInsecure is returned by [Config.Validate] if the configuration is considered insecure. // ErrInsecure is returned by [Config.Validate] if the configuration is
// considered insecure.
ErrInsecure = errors.New("configuration is insecure") ErrInsecure = errors.New("configuration is insecure")
) )
@@ -112,6 +153,13 @@ func (config *Config) Validate() error {
Msg: "identity " + strconv.Itoa(config.Identity) + " out of range"} Msg: "identity " + strconv.Itoa(config.Identity) + " out of range"}
} }
if config.SchedPolicy < 0 || config.SchedPolicy > std.SCHED_LAST {
return &AppError{Step: "validate configuration", Err: ErrSchedPolicyBounds,
Msg: "scheduling policy " +
strconv.Itoa(int(config.SchedPolicy)) +
" out of range"}
}
if err := config.SessionBus.CheckInterfaces("session"); err != nil { if err := config.SessionBus.CheckInterfaces("session"); err != nil {
return err return err
} }


@@ -22,6 +22,10 @@ func TestConfigValidate(t *testing.T) {
Msg: "identity -1 out of range"}}, Msg: "identity -1 out of range"}},
{"identity upper", &hst.Config{Identity: 10000}, &hst.AppError{Step: "validate configuration", Err: hst.ErrIdentityBounds, {"identity upper", &hst.Config{Identity: 10000}, &hst.AppError{Step: "validate configuration", Err: hst.ErrIdentityBounds,
Msg: "identity 10000 out of range"}}, Msg: "identity 10000 out of range"}},
{"sched lower", &hst.Config{SchedPolicy: -1}, &hst.AppError{Step: "validate configuration", Err: hst.ErrSchedPolicyBounds,
Msg: "scheduling policy -1 out of range"}},
{"sched upper", &hst.Config{SchedPolicy: 0xcafe}, &hst.AppError{Step: "validate configuration", Err: hst.ErrSchedPolicyBounds,
Msg: "scheduling policy 51966 out of range"}},
{"dbus session", &hst.Config{SessionBus: &hst.BusConfig{See: []string{""}}}, {"dbus session", &hst.Config{SessionBus: &hst.BusConfig{See: []string{""}}},
&hst.BadInterfaceError{Interface: "", Segment: "session"}}, &hst.BadInterfaceError{Interface: "", Segment: "session"}},
{"dbus system", &hst.Config{SystemBus: &hst.BusConfig{See: []string{""}}}, {"dbus system", &hst.Config{SystemBus: &hst.BusConfig{See: []string{""}}},


@@ -16,18 +16,20 @@ const PrivateTmp = "/.hakurei"
var AbsPrivateTmp = check.MustAbs(PrivateTmp) var AbsPrivateTmp = check.MustAbs(PrivateTmp)
const ( const (
// WaitDelayDefault is used when WaitDelay has its zero value. // WaitDelayDefault is used when WaitDelay has the zero value.
WaitDelayDefault = 5 * time.Second WaitDelayDefault = 5 * time.Second
// WaitDelayMax is used if WaitDelay exceeds its value. // WaitDelayMax is used when WaitDelay exceeds its value.
WaitDelayMax = 30 * time.Second WaitDelayMax = 30 * time.Second
) )
const ( const (
// ExitFailure is returned if the container fails to start. // ExitFailure is returned if the container fails to start.
ExitFailure = iota + 1 ExitFailure = iota + 1
// ExitCancel is returned if the container is terminated by a shim-directed signal which cancels its context. // ExitCancel is returned if the container is terminated by a shim-directed
// signal which cancels its context.
ExitCancel ExitCancel
// ExitOrphan is returned when the shim is orphaned before priv side delivers a signal. // ExitOrphan is returned when the shim is orphaned before priv side process
// delivers a signal.
ExitOrphan ExitOrphan
// ExitRequest is returned when the priv side process requests shim exit. // ExitRequest is returned when the priv side process requests shim exit.
@@ -38,10 +40,12 @@ const (
type Flags uintptr type Flags uintptr
const ( const (
// FMultiarch unblocks syscalls required for multiarch to work on applicable targets. // FMultiarch unblocks system calls required for multiarch to work on
// multiarch-enabled targets (amd64, arm64).
FMultiarch Flags = 1 << iota FMultiarch Flags = 1 << iota
// FSeccompCompat changes emitted seccomp filter programs to be identical to that of Flatpak. // FSeccompCompat changes emitted seccomp filter programs to be identical to
// that of Flatpak in enabled rulesets.
FSeccompCompat FSeccompCompat
// FDevel unblocks ptrace and friends. // FDevel unblocks ptrace and friends.
FDevel FDevel
@@ -54,12 +58,15 @@ const (
	// FTty unblocks dangerous terminal I/O (faking input).
	FTty
	// FMapRealUID maps the target user uid to the privileged user uid in the
	// container user namespace.
	//
	// Some programs fail to connect to a dbus session running as a different
	// uid; this option works around that by mapping the priv-side caller uid
	// in the container.
	FMapRealUID
	// FDevice mounts /dev/ from the init mount namespace as-is in the
	// container mount namespace.
	FDevice
	// FShareRuntime shares XDG_RUNTIME_DIR between containers under the same identity.
@@ -112,30 +119,37 @@ func (flags Flags) String() string {
	}
}
// ContainerConfig describes the container configuration to be applied to an
// underlying [container]. It is validated by [Config.Validate].
type ContainerConfig struct {
	// Container UTS namespace hostname.
	Hostname string `json:"hostname,omitempty"`
	// Duration in nanoseconds to wait after interrupting the initial process.
	//
	// Defaults to [WaitDelayDefault] if zero, or [WaitDelayMax] if greater than
	// [WaitDelayMax]. Values less than zero are equivalent to zero, bypassing
	// [WaitDelayDefault].
	WaitDelay time.Duration `json:"wait_delay,omitempty"`
	// Initial process environment variables.
	Env map[string]string `json:"env"`
	// Container mount points.
	//
	// If the first element targets /, it is inserted early and excluded from
	// path hiding. Otherwise, an anonymous instance of tmpfs is set up on /.
	Filesystem []FilesystemConfigJSON `json:"filesystem"`
	// String used as the username of the emulated user, validated against the
	// default NAME_REGEX from adduser.
	//
	// Defaults to the passwd name of the target uid, or chronos.
	Username string `json:"username,omitempty"`
	// Pathname of shell in the container filesystem to use for the emulated user.
	Shell *check.Absolute `json:"shell"`
	// Directory in the container filesystem to enter and use as the home
	// directory of the emulated user.
	Home *check.Absolute `json:"home"`
	// Pathname to executable file in the container filesystem.
@@ -148,6 +162,7 @@ type ContainerConfig struct {
}

// ContainerConfigF is [ContainerConfig] stripped of its methods.
//
// The [ContainerConfig.Flags] field does not survive a [json] round trip.
type ContainerConfigF ContainerConfig

View File

@@ -5,8 +5,26 @@ import (
	"strings"
)

// BadInterfaceError is returned when Interface fails an undocumented check in
// xdg-dbus-proxy, which would have caused a silent failure.
//
// xdg-dbus-proxy fails without output when this condition is not met:
//
//	char *dot = strrchr (filter->interface, '.');
//	if (dot != NULL)
//	  {
//	    *dot = 0;
//	    if (strcmp (dot + 1, "*") != 0)
//	      filter->member = g_strdup (dot + 1);
//	  }
//
// trim ".*" since they are removed before searching for '.':
//
//	if (g_str_has_suffix (name, ".*"))
//	  {
//	    name[strlen (name) - 2] = 0;
//	    wildcard = TRUE;
//	  }
type BadInterfaceError struct {
	// Interface is the offending interface string.
	Interface string
@@ -19,7 +37,8 @@ func (e *BadInterfaceError) Error() string {
	if e == nil {
		return "<nil>"
	}
	return "bad interface string " + strconv.Quote(e.Interface) +
		" in " + e.Segment + " bus configuration"
}

// BusConfig configures the xdg-dbus-proxy process.
@@ -76,31 +95,14 @@ func (c *BusConfig) Interfaces(yield func(string) bool) {
	}
}
// CheckInterfaces checks for invalid interface strings based on an undocumented
// check in xdg-dbus-proxy, returning [BadInterfaceError] if one is encountered.
func (c *BusConfig) CheckInterfaces(segment string) error {
	if c == nil {
		return nil
	}
	for iface := range c.Interfaces {
		if strings.IndexByte(strings.TrimSuffix(iface, ".*"), '.') == -1 {
			return &BadInterfaceError{iface, segment}
		}

View File

@@ -11,15 +11,17 @@ import (
type Enablement byte

const (
	// EWayland exposes a Wayland pathname socket via security-context-v1.
	EWayland Enablement = 1 << iota
	// EX11 adds the target user via X11 ChangeHosts and exposes the X11
	// pathname socket.
	EX11
	// EDBus enables the per-container xdg-dbus-proxy daemon.
	EDBus
	// EPipeWire exposes a pipewire pathname socket via SecurityContext.
	EPipeWire
	// EPulse copies the PulseAudio cookie to [hst.PrivateTmp] and exposes the
	// PulseAudio socket.
	EPulse
	// EM is a noop.

View File

@@ -24,7 +24,8 @@ type FilesystemConfig interface {
	fmt.Stringer
}

// The Ops interface enables [FilesystemConfig] to queue container ops without
// depending on the container package.
type Ops interface {
	// Tmpfs appends an op that mounts tmpfs on a container path.
	Tmpfs(target *check.Absolute, size int, perm os.FileMode) Ops
@@ -41,12 +42,15 @@ type Ops interface {
	// Link appends an op that creates a symlink in the container filesystem.
	Link(target *check.Absolute, linkName string, dereference bool) Ops
	// Root appends an op that expands a directory into a toplevel bind mount
	// mirror on container root.
	Root(host *check.Absolute, flags int) Ops
	// Etc appends an op that expands host /etc into a toplevel symlink mirror
	// with /etc semantics.
	Etc(host *check.Absolute, prefix string) Ops
	// Daemon appends an op that starts a daemon in the container and blocks
	// until target appears.
	Daemon(target, path *check.Absolute, args ...string) Ops
}
@@ -61,7 +65,8 @@ type ApplyState struct {
// ErrFSNull is returned by [json] on encountering a null [FilesystemConfig] value.
var ErrFSNull = errors.New("unexpected null in mount point")

// FSTypeError is returned when [ContainerConfig.Filesystem] contains an entry
// with an invalid type.
type FSTypeError string

func (f FSTypeError) Error() string { return fmt.Sprintf("invalid filesystem type %q", string(f)) }

View File

@@ -18,7 +18,9 @@ type FSLink struct {
	Target *check.Absolute `json:"dst"`
	// Arbitrary linkname value stored in the symlink.
	Linkname string `json:"linkname"`
	// Whether to treat Linkname as an absolute pathname and dereference before
	// creating the link.
	Dereference bool `json:"dereference,omitempty"`
}

View File

@@ -19,9 +19,11 @@ type FSOverlay struct {
	// Any filesystem; does not need to be on a writable filesystem, must not be nil.
	Lower []*check.Absolute `json:"lower"`
	// The upperdir is normally on a writable filesystem; leave as nil to mount
	// Lower readonly.
	Upper *check.Absolute `json:"upper,omitempty"`
	// The workdir needs to be an empty directory on the same filesystem as
	// Upper, and must not be nil if Upper is populated.
	Work *check.Absolute `json:"work,omitempty"`
}

View File

@@ -44,11 +44,13 @@ func (e *AppError) Message() string {
type Paths struct {
	// Temporary directory returned by [os.TempDir], usually equivalent to [fhs.AbsTmp].
	TempDir *check.Absolute `json:"temp_dir"`
	// Shared directory specific to the hsu userid, usually
	// (`/tmp/hakurei.%d`, [Info.User]).
	SharePath *check.Absolute `json:"share_path"`
	// Checked XDG_RUNTIME_DIR value, usually (`/run/user/%d`, uid).
	RuntimePath *check.Absolute `json:"runtime_path"`
	// Shared directory specific to the hsu userid located in RuntimePath,
	// usually (`/run/user/%d/hakurei`, uid).
	RunDirPath *check.Absolute `json:"run_dir_path"`
}
@@ -74,10 +76,23 @@ func Template() *Config {
		SessionBus: &BusConfig{
			See: nil,
			Talk: []string{
				"org.freedesktop.Notifications",
				"org.freedesktop.FileManager1",
				"org.freedesktop.ScreenSaver",
				"org.freedesktop.secrets",
				"org.kde.kwalletd5",
				"org.kde.kwalletd6",
				"org.gnome.SessionManager",
			},
			Own: []string{
				"org.chromium.Chromium.*",
				"org.mpris.MediaPlayer2.org.chromium.Chromium.*",
				"org.mpris.MediaPlayer2.chromium.*",
			},
			Call: map[string]string{"org.freedesktop.portal.*": "*"},
			Broadcast: map[string]string{"org.freedesktop.portal.*": "@/org/freedesktop/portal/*"},
			Log: false,
@@ -112,7 +127,12 @@ func Template() *Config {
			"GOOGLE_DEFAULT_CLIENT_SECRET": "OTJgUOQcT7lO7GsGZq2G4IlT",
		},
		Filesystem: []FilesystemConfigJSON{
			{&FSBind{
				Target: fhs.AbsRoot,
				Source: fhs.AbsVarLib.Append("hakurei/base/org.debian"),
				Write: true,
				Special: true,
			}},
			{&FSBind{Target: fhs.AbsEtc, Source: fhs.AbsEtc, Special: true}},
			{&FSEphemeral{Target: fhs.AbsTmp, Write: true, Perm: 0755}},
			{&FSOverlay{
@@ -121,11 +141,27 @@ func Template() *Config {
				Upper: fhs.AbsVarLib.Append("hakurei/nix/u0/org.chromium.Chromium/rw-store/upper"),
				Work: fhs.AbsVarLib.Append("hakurei/nix/u0/org.chromium.Chromium/rw-store/work"),
			}},
			{&FSLink{
				Target: fhs.AbsRun.Append("current-system"),
				Linkname: "/run/current-system",
				Dereference: true,
			}},
			{&FSLink{
				Target: fhs.AbsRun.Append("opengl-driver"),
				Linkname: "/run/opengl-driver",
				Dereference: true,
			}},
			{&FSBind{
				Source: fhs.AbsVarLib.Append("hakurei/u0/org.chromium.Chromium"),
				Target: check.MustAbs("/data/data/org.chromium.Chromium"),
				Write: true,
				Ensure: true,
			}},
			{&FSBind{
				Source: fhs.AbsDev.Append("dri"),
				Device: true,
				Optional: true,
			}},
		},
		Username: "chronos",

View File

@@ -12,10 +12,12 @@ import (
// An ID is a unique identifier held by a running hakurei container.
type ID [16]byte

// ErrIdentifierLength is returned when encountering a [hex] representation of
// [ID] with unexpected length.
var ErrIdentifierLength = errors.New("identifier string has unexpected length")

// IdentifierDecodeError is returned by [ID.UnmarshalText] to provide relevant
// error descriptions.
type IdentifierDecodeError struct{ Err error }

func (e IdentifierDecodeError) Unwrap() error { return e.Err }
@@ -23,7 +25,10 @@ func (e IdentifierDecodeError) Error() string {
	var invalidByteError hex.InvalidByteError
	switch {
	case errors.As(e.Err, &invalidByteError):
		return fmt.Sprintf(
			"got invalid byte %#U in identifier",
			rune(invalidByteError),
		)
	case errors.Is(e.Err, hex.ErrLength):
		return "odd length identifier hex string"
@@ -41,7 +46,9 @@ func (a *ID) CreationTime() time.Time {
}

// NewInstanceID creates a new unique [ID].
func NewInstanceID(id *ID) error {
	return newInstanceID(id, uint64(time.Now().UnixNano()))
}

// newInstanceID creates a new unique [ID] with the specified timestamp.
func newInstanceID(id *ID, p uint64) error {

View File

@@ -38,6 +38,7 @@ func (h *Hsu) ensureDispatcher() {
}

// ID returns the current user hsurc identifier.
//
// [ErrHsuAccess] is returned if the current user is not in hsurc.
func (h *Hsu) ID() (int, error) {
	h.ensureDispatcher()

View File

@@ -1,4 +1,5 @@
// Package outcome implements the outcome of the privileged and container sides
// of a hakurei container.
package outcome

import (
@@ -27,8 +28,9 @@ func Info() *hst.Info {
	return &hi
}

// envAllocSize is the initial size of the env map pre-allocated when the
// configured env map is nil. It should be large enough to fit all insertions by
// outcomeOp.toContainer.
const envAllocSize = 1 << 6

func newInt(v int) *stringPair[int] { return &stringPair[int]{v, strconv.Itoa(v)} }
@@ -43,7 +45,8 @@ func (s *stringPair[T]) unwrap() T { return s.v }
func (s *stringPair[T]) String() string { return s.s }

// outcomeState is copied to the shim process and available while applying outcomeOp.
// This is transmitted from the priv side to the shim, so exported fields should
// be kept to a minimum.
type outcomeState struct {
	// Params only used by the shim process. Populated by populateEarly.
	Shim *shimParams
@@ -89,14 +92,25 @@ func (s *outcomeState) valid() bool {
		s.Paths != nil
}

// newOutcomeState returns the address of a new outcomeState with its exported
// fields populated via syscallDispatcher.
func newOutcomeState(k syscallDispatcher, msg message.Msg, id *hst.ID, config *hst.Config, hsu *Hsu) *outcomeState {
	s := outcomeState{
		Shim: &shimParams{
			PrivPID: k.getpid(),
			Verbose: msg.IsVerbose(),

			SchedPolicy: config.SchedPolicy,
			SchedPriority: config.SchedPriority,
		},
		ID: id,
		Identity: config.Identity,
		UserID: hsu.MustID(msg),
		Paths: env.CopyPathsFunc(k.fatalf, k.tempdir, func(key string) string {
			v, _ := k.lookupEnv(key)
			return v
		}),
		Container: config.Container,
	}
@@ -121,6 +135,7 @@ func newOutcomeState(k syscallDispatcher, msg message.Msg, id *hst.ID, config *h
}

// populateLocal populates unexported fields from transmitted exported fields.
//
// These fields are cheaper to recompute per-process.
func (s *outcomeState) populateLocal(k syscallDispatcher, msg message.Msg) error {
	if !s.valid() || k == nil || msg == nil {
@@ -136,7 +151,10 @@ func (s *outcomeState) populateLocal(k syscallDispatcher, msg message.Msg) error
	s.id = &stringPair[hst.ID]{*s.ID, s.ID.String()}
	s.Copy(&s.sc, s.UserID)
	msg.Verbosef(
		"process share directory at %q, runtime directory at %q",
		s.sc.SharePath, s.sc.RunDirPath,
	)

	s.identity = newInt(s.Identity)
	s.mapuid, s.mapgid = newInt(s.Mapuid), newInt(s.Mapgid)
@@ -146,17 +164,25 @@ func (s *outcomeState) populateLocal(k syscallDispatcher, msg message.Msg) error
}

// instancePath returns a path formatted for outcomeStateSys.instance.
//
// This method must only be called from outcomeOp.toContainer if
// outcomeOp.toSystem has already called outcomeStateSys.instance.
func (s *outcomeState) instancePath() *check.Absolute {
	return s.sc.SharePath.Append(s.id.String())
}

// runtimePath returns a path formatted for outcomeStateSys.runtime.
//
// This method must only be called from outcomeOp.toContainer if
// outcomeOp.toSystem has already called outcomeStateSys.runtime.
func (s *outcomeState) runtimePath() *check.Absolute {
	return s.sc.RunDirPath.Append(s.id.String())
}

// outcomeStateSys wraps outcomeState and [system.I]. Used on the priv side only.
//
// Implementations of outcomeOp must not access fields other than sys unless
// explicitly stated.
type outcomeStateSys struct {
	// Whether XDG_RUNTIME_DIR is used post hsu.
	useRuntimeDir bool
@@ -219,6 +245,7 @@ func (state *outcomeStateSys) ensureRuntimeDir() {
}

// instance returns the pathname to a process-specific directory within TMPDIR.
//
// This directory must only hold entries bound to [system.Process].
func (state *outcomeStateSys) instance() *check.Absolute {
	if state.sharePath != nil {
@@ -230,6 +257,7 @@ func (state *outcomeStateSys) instance() *check.Absolute {
}

// runtime returns the pathname to a process-specific directory within XDG_RUNTIME_DIR.
//
// This directory must only hold entries bound to [system.Process].
func (state *outcomeStateSys) runtime() *check.Absolute {
	if state.runtimeSharePath != nil {
@@ -242,22 +270,29 @@ func (state *outcomeStateSys) runtime() *check.Absolute {
	return state.runtimeSharePath
}

// outcomeStateParams wraps outcomeState and [container.Params].
//
// Used on the shim side only.
type outcomeStateParams struct {
	// Overrides the embedded [container.Params] in [container.Container].
	//
	// The Env field must not be used.
	params *container.Params
	// Collapsed into the Env slice in [container.Params] by the final outcomeOp.
	env map[string]string
	// Filesystems with the optional root sliced off if present.
	//
	// Populated by spParamsOp. Safe for use by spFilesystemOp.
	filesystem []hst.FilesystemConfigJSON
	// Inner XDG_RUNTIME_DIR default formatting of `/run/user/%d` via mapped uid.
	//
	// Populated by spRuntimeOp.
	runtimeDir *check.Absolute
	// Path to pipewire-pulse server.
	//
	// Populated by spPipeWireOp if DirectPipeWire is false.
	pipewirePulsePath *check.Absolute
@@ -265,25 +300,32 @@ type outcomeStateParams struct {
	*outcomeState
}

// errNotEnabled is returned by outcomeOp.toSystem and used internally to
// exclude an outcomeOp from transmission.
var errNotEnabled = errors.New("op not enabled in the configuration")

// An outcomeOp inflicts an outcome on [system.I] and contains enough
// information to inflict it on [container.Params] in a separate process.
//
// An implementation of outcomeOp must store cross-process states in exported
// fields only.
type outcomeOp interface {
	// toSystem inflicts the current outcome on [system.I] in the priv side process.
	toSystem(state *outcomeStateSys) error

	// toContainer inflicts the current outcome on [container.Params] in the
	// shim process.
	//
	// Implementations must not write to the Env field of [container.Params]
	// as it will be overwritten by the flattened env map.
	toContainer(state *outcomeStateParams) error
}

// toSystem calls the outcomeOp.toSystem method on all outcomeOp implementations
// and populates shimParams.Ops.
//
// This function assumes the caller has already called the Validate method on
// [hst.Config] and checked that it returns nil.
func (state *outcomeStateSys) toSystem() error {
	if state.Shim == nil || state.Shim.Ops != nil {
		return newWithMessage("invalid ops state reached")

View File

@@ -30,7 +30,9 @@ const (
)

// NewStore returns the address of a new instance of [store.Store].
func NewStore(sc *hst.Paths) *store.Store {
	return store.New(sc.SharePath.Append("state"))
}

// main carries out outcome and terminates. main does not return.
func (k *outcome) main(msg message.Msg, identifierFd int) {
@@ -116,7 +118,11 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
	processStatePrev, processStateCur = processStateCur, processState
	if !processTime.IsZero() && processStatePrev != processLifecycle {
		msg.Verbosef(
			"state %d took %.2f ms",
			processStatePrev,
			float64(time.Since(processTime).Nanoseconds())/1e6,
		)
	}
	processTime = time.Now()
@@ -141,7 +147,10 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
		case processCommit:
			if isBeforeRevert {
				perrorFatal(
					newWithMessage("invalid transition to commit state"),
					"commit", processLifecycle,
				)
				continue
			}
@@ -238,15 +247,26 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
		case <-func() chan struct{} {
			w := make(chan struct{})
			// This ties processLifecycle to ctx with the additional
			// compensated timeout duration to allow transition to the next
			// state on a locked up shim.
			go func() {
				<-ctx.Done()
				time.Sleep(k.state.Shim.WaitDelay + shimWaitTimeout)
				close(w)
			}()
			return w
		}():
			// This is only reachable when wait did not return within
			// shimWaitTimeout, after its WaitDelay has elapsed. This is
			// different from the container failing to terminate within its
			// timeout period, as that is enforced by the shim. This path is
			// instead reached when there is a lockup in shim preventing it
			// from completing.
			msg.GetLogger().Printf(
				"process %d did not terminate",
				shimCmd.Process.Pid,
			)
		}

		msg.Resume()
@@ -271,8 +291,8 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
ec := system.Process ec := system.Process
if entries, _, err := handle.Entries(); err != nil { if entries, _, err := handle.Entries(); err != nil {
// it is impossible to continue from this point, // it is impossible to continue from this point, per-process
// per-process state will be reverted to limit damage // state will be reverted to limit damage
perror(err, "read store segment entries") perror(err, "read store segment entries")
} else { } else {
// accumulate enablements of remaining instances // accumulate enablements of remaining instances
@@ -295,7 +315,10 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
if n == 0 { if n == 0 {
ec |= system.User ec |= system.User
} else { } else {
msg.Verbosef("found %d instances, cleaning up without user-scoped operations", n) msg.Verbosef(
"found %d instances, cleaning up without user-scoped operations",
n,
)
} }
ec |= rt ^ (hst.EWayland | hst.EX11 | hst.EDBus | hst.EPulse) ec |= rt ^ (hst.EWayland | hst.EX11 | hst.EDBus | hst.EPulse)
if msg.IsVerbose() { if msg.IsVerbose() {
@@ -335,7 +358,9 @@ func (k *outcome) main(msg message.Msg, identifierFd int) {
// start starts the shim via cmd/hsu. // start starts the shim via cmd/hsu.
// //
// If successful, a [time.Time] value for [hst.State] is stored in the value pointed to by startTime. // If successful, a [time.Time] value for [hst.State] is stored in the value
// pointed to by startTime.
//
// The resulting [exec.Cmd] and write end of the shim setup pipe is returned. // The resulting [exec.Cmd] and write end of the shim setup pipe is returned.
func (k *outcome) start(ctx context.Context, msg message.Msg, func (k *outcome) start(ctx context.Context, msg message.Msg,
hsuPath *check.Absolute, hsuPath *check.Absolute,

View File

@@ -37,9 +37,12 @@ const (
 	shimMsgBadPID = C.HAKUREI_SHIM_BAD_PID
 )
 
-// setupContSignal sets up the SIGCONT signal handler for the cross-uid shim exit hack.
-// The signal handler is implemented in C, signals can be processed by reading from the returned reader.
-// The returned function must be called after all signal processing concludes.
+// setupContSignal sets up the SIGCONT signal handler for the cross-uid shim
+// exit hack.
+//
+// The signal handler is implemented in C, signals can be processed by reading
+// from the returned reader. The returned function must be called after all
+// signal processing concludes.
 func setupContSignal(pid int) (io.ReadCloser, func(), error) {
 	if r, w, err := os.Pipe(); err != nil {
 		return nil, nil, err
@@ -51,22 +54,30 @@ func setupContSignal(pid int) (io.ReadCloser, func(), error) {
 	}
 }
 
-// shimEnv is the name of the environment variable storing decimal representation of
-// setup pipe fd for [container.Receive].
+// shimEnv is the name of the environment variable storing decimal representation
+// of setup pipe fd for [container.Receive].
 const shimEnv = "HAKUREI_SHIM"
 
 // shimParams is embedded in outcomeState and transmitted from priv side to shim.
 type shimParams struct {
-	// Priv side pid, checked against ppid in signal handler for the syscall.SIGCONT hack.
+	// Priv side pid, checked against ppid in signal handler for the
+	// syscall.SIGCONT hack.
 	PrivPID int
 
-	// Duration to wait for after the initial process receives os.Interrupt before the container is killed.
+	// Duration to wait for after the initial process receives os.Interrupt
+	// before the container is killed.
+	//
 	// Limits are enforced on the priv side.
 	WaitDelay time.Duration
 
 	// Verbosity pass through from [message.Msg].
 	Verbose bool
 
+	// Copied from [hst.Config].
+	SchedPolicy std.SchedPolicy
+	// Copied from [hst.Config].
+	SchedPriority std.Int
+
 	// Outcome setup ops, contains setup state. Populated by outcome.finalise.
 	Ops []outcomeOp
 }
@@ -77,7 +88,9 @@ func (p *shimParams) valid() bool { return p != nil && p.PrivPID > 0 }
 // shimName is the prefix used by log.std in the shim process.
 const shimName = "shim"
 
-// Shim is called by the main function of the shim process and runs as the unconstrained target user.
+// Shim is called by the main function of the shim process and runs as the
+// unconstrained target user.
+//
 // Shim does not return.
 func Shim(msg message.Msg) {
 	if msg == nil {
@@ -131,7 +144,8 @@ func (sp *shimPrivate) destroy() {
 }
 
 const (
-	// shimPipeWireTimeout is the duration pipewire-pulse is allowed to run before its socket becomes available.
+	// shimPipeWireTimeout is the duration pipewire-pulse is allowed to run
+	// before its socket becomes available.
 	shimPipeWireTimeout = 5 * time.Second
 )
@@ -262,6 +276,9 @@ func shimEntrypoint(k syscallDispatcher) {
 	cancelContainer.Store(&stop)
 	sp := shimPrivate{k: k, id: state.id}
 	z := container.New(ctx, msg)
+	z.SetScheduler = state.Shim.SchedPolicy > 0
+	z.SchedPolicy = state.Shim.SchedPolicy
+	z.SchedPriority = state.Shim.SchedPriority
 	z.Params = *stateParams.params
 	z.Stdin, z.Stdout, z.Stderr = os.Stdin, os.Stdout, os.Stderr

View File

@@ -27,7 +27,9 @@ const varRunNscd = fhs.Var + "run/nscd"
 func init() { gob.Register(new(spParamsOp)) }
 
-// spParamsOp initialises unordered fields of [container.Params] and the optional root filesystem.
+// spParamsOp initialises unordered fields of [container.Params] and the
+// optional root filesystem.
+//
 // This outcomeOp is hardcoded to always run first.
 type spParamsOp struct {
 	// Value of $TERM, stored during toSystem.
@@ -67,8 +69,8 @@ func (s *spParamsOp) toContainer(state *outcomeStateParams) error {
 		state.params.Args = state.Container.Args
 	}
 
-	// the container is canceled when shim is requested to exit or receives an interrupt or termination signal;
-	// this behaviour is implemented in the shim
+	// The container is cancelled when shim is requested to exit or receives an
+	// interrupt or termination signal. This behaviour is implemented in the shim.
 	state.params.ForwardCancel = state.Shim.WaitDelay > 0
 
 	if state.Container.Flags&hst.FMultiarch != 0 {
@@ -115,7 +117,8 @@ func (s *spParamsOp) toContainer(state *outcomeStateParams) error {
 	} else {
 		state.params.Bind(fhs.AbsDev, fhs.AbsDev, std.BindWritable|std.BindDevice)
 	}
-	// /dev is mounted readonly later on, this prevents /dev/shm from going readonly with it
+	// /dev is mounted readonly later on, this prevents /dev/shm from going
+	// readonly with it
 	state.params.Tmpfs(fhs.AbsDevShm, 0, 01777)
 
 	return nil
@@ -123,7 +126,9 @@ func (s *spParamsOp) toContainer(state *outcomeStateParams) error {
 func init() { gob.Register(new(spFilesystemOp)) }
 
-// spFilesystemOp applies configured filesystems to [container.Params], excluding the optional root filesystem.
+// spFilesystemOp applies configured filesystems to [container.Params],
+// excluding the optional root filesystem.
+//
 // This outcomeOp is hardcoded to always run last.
 type spFilesystemOp struct {
 	// Matched paths to cover. Stored during toSystem.
@@ -297,8 +302,8 @@ func (s *spFilesystemOp) toContainer(state *outcomeStateParams) error {
 	return nil
 }
 
-// resolveRoot handles the root filesystem special case for [hst.FilesystemConfig] and additionally resolves autoroot
-// as it requires special handling during path hiding.
+// resolveRoot handles the root filesystem special case for [hst.FilesystemConfig]
+// and additionally resolves autoroot as it requires special handling during path hiding.
 func resolveRoot(c *hst.ContainerConfig) (rootfs hst.FilesystemConfig, filesystem []hst.FilesystemConfigJSON, autoroot *hst.FSBind) {
 	// root filesystem special case
 	filesystem = c.Filesystem
@@ -316,7 +321,8 @@ func resolveRoot(c *hst.ContainerConfig) (rootfs hst.FilesystemConfig, filesyste
 		return
 	}
 
-// evalSymlinks calls syscallDispatcher.evalSymlinks but discards errors unwrapping to [fs.ErrNotExist].
+// evalSymlinks calls syscallDispatcher.evalSymlinks but discards errors
+// unwrapping to [fs.ErrNotExist].
 func evalSymlinks(msg message.Msg, k syscallDispatcher, v *string) error {
 	if p, err := k.evalSymlinks(*v); err != nil {
 		if !errors.Is(err, fs.ErrNotExist) {

View File

@@ -12,6 +12,7 @@ import (
 func init() { gob.Register(new(spDBusOp)) }
 
 // spDBusOp maintains an xdg-dbus-proxy instance for the container.
+//
 // Runs after spRuntimeOp.
 type spDBusOp struct {
 	// Whether to bind the system bus socket. Populated during toSystem.

View File

@@ -13,9 +13,12 @@ const pipewirePulseName = "pipewire-pulse"
 func init() { gob.Register(new(spPipeWireOp)) }
 
 // spPipeWireOp exports the PipeWire server to the container via SecurityContext.
+//
 // Runs after spRuntimeOp.
 type spPipeWireOp struct {
-	// Path to pipewire-pulse server. Populated during toSystem if DirectPipeWire is false.
+	// Path to pipewire-pulse server.
+	//
+	// Populated during toSystem if DirectPipeWire is false.
 	CompatServerPath *check.Absolute
 }

View File

@@ -20,6 +20,7 @@ const pulseCookieSizeMax = 1 << 8
 func init() { gob.Register(new(spPulseOp)) }
 
 // spPulseOp exports the PulseAudio server to the container.
+//
 // Runs after spRuntimeOp.
 type spPulseOp struct {
 	// PulseAudio cookie data, populated during toSystem if a cookie is present.
@@ -37,24 +38,40 @@ func (s *spPulseOp) toSystem(state *outcomeStateSys) error {
 	if _, err := state.k.stat(pulseRuntimeDir.String()); err != nil {
 		if !errors.Is(err, fs.ErrNotExist) {
-			return &hst.AppError{Step: fmt.Sprintf("access PulseAudio directory %q", pulseRuntimeDir), Err: err}
+			return &hst.AppError{Step: fmt.Sprintf(
+				"access PulseAudio directory %q",
+				pulseRuntimeDir,
+			), Err: err}
 		}
-		return newWithMessageError(fmt.Sprintf("PulseAudio directory %q not found", pulseRuntimeDir), err)
+		return newWithMessageError(fmt.Sprintf(
+			"PulseAudio directory %q not found",
+			pulseRuntimeDir,
+		), err)
 	}
 
 	if fi, err := state.k.stat(pulseSocket.String()); err != nil {
 		if !errors.Is(err, fs.ErrNotExist) {
-			return &hst.AppError{Step: fmt.Sprintf("access PulseAudio socket %q", pulseSocket), Err: err}
+			return &hst.AppError{Step: fmt.Sprintf(
+				"access PulseAudio socket %q",
+				pulseSocket,
+			), Err: err}
 		}
-		return newWithMessageError(fmt.Sprintf("PulseAudio directory %q found but socket does not exist", pulseRuntimeDir), err)
+		return newWithMessageError(fmt.Sprintf(
+			"PulseAudio directory %q found but socket does not exist",
+			pulseRuntimeDir,
+		), err)
 	} else {
 		if m := fi.Mode(); m&0o006 != 0o006 {
-			return newWithMessage(fmt.Sprintf("unexpected permissions on %q: %s", pulseSocket, m))
+			return newWithMessage(fmt.Sprintf(
+				"unexpected permissions on %q: %s",
+				pulseSocket, m,
+			))
 		}
 	}
 
-	// pulse socket is world writable and its parent directory DAC permissions prevents access;
-	// hard link to target-executable share directory to grant access
+	// PulseAudio socket is world writable and its parent directory DAC
+	// permissions prevents access. Hard link to target-executable share
+	// directory to grant access
 	state.sys.Link(pulseSocket, state.runtime().Append("pulse"))
 
 	// load up to pulseCookieSizeMax bytes of pulse cookie for transmission to shim
@@ -62,7 +79,13 @@ func (s *spPulseOp) toSystem(state *outcomeStateSys) error {
 		return err
 	} else if a != nil {
 		s.Cookie = new([pulseCookieSizeMax]byte)
-		if s.CookieSize, err = loadFile(state.msg, state.k, "PulseAudio cookie", a.String(), s.Cookie[:]); err != nil {
+		if s.CookieSize, err = loadFile(
+			state.msg,
+			state.k,
+			"PulseAudio cookie",
+			a.String(),
+			s.Cookie[:],
+		); err != nil {
 			return err
 		}
 	} else {
@@ -101,8 +124,9 @@ func (s *spPulseOp) commonPaths(state *outcomeState) (pulseRuntimeDir, pulseSock
 	return
 }
 
-// discoverPulseCookie attempts to discover the pathname of the PulseAudio cookie of the current user.
-// If both returned pathname and error are nil, the cookie is likely unavailable and can be silently skipped.
+// discoverPulseCookie attempts to discover the pathname of the PulseAudio
+// cookie of the current user. If both returned pathname and error are nil, the
+// cookie is likely unavailable and can be silently skipped.
 func discoverPulseCookie(k syscallDispatcher) (*check.Absolute, error) {
 	const paLocateStep = "locate PulseAudio cookie"
@@ -186,7 +210,10 @@ func loadFile(
 			&os.PathError{Op: "stat", Path: pathname, Err: syscall.ENOMEM},
 		)
 	} else if s < int64(n) {
-		msg.Verbosef("%s at %q is %d bytes shorter than expected", description, pathname, int64(n)-s)
+		msg.Verbosef(
+			"%s at %q is %d bytes shorter than expected",
+			description, pathname, int64(n)-s,
+		)
 	} else {
 		msg.Verbosef("loading %d bytes from %q", n, pathname)
 	}

View File

@@ -67,7 +67,9 @@ const (
 // spRuntimeOp sets up XDG_RUNTIME_DIR inside the container.
 type spRuntimeOp struct {
-	// SessionType determines the value of envXDGSessionType. Populated during toSystem.
+	// SessionType determines the value of envXDGSessionType.
+	//
+	// Populated during toSystem.
 	SessionType uintptr
 }

View File

@@ -12,9 +12,12 @@ import (
 func init() { gob.Register(new(spWaylandOp)) }
 
 // spWaylandOp exports the Wayland display server to the container.
+//
 // Runs after spRuntimeOp.
 type spWaylandOp struct {
-	// Path to host wayland socket. Populated during toSystem if DirectWayland is true.
+	// Path to host wayland socket.
+	//
+	// Populated during toSystem if DirectWayland is true.
 	SocketPath *check.Absolute
 }

View File

@@ -50,7 +50,10 @@ func (s *spX11Op) toSystem(state *outcomeStateSys) error {
 	if socketPath != nil {
 		if _, err := state.k.stat(socketPath.String()); err != nil {
 			if !errors.Is(err, fs.ErrNotExist) {
-				return &hst.AppError{Step: fmt.Sprintf("access X11 socket %q", socketPath), Err: err}
+				return &hst.AppError{Step: fmt.Sprintf(
+					"access X11 socket %q",
+					socketPath,
+				), Err: err}
 			}
 		} else {
 			state.sys.UpdatePermType(hst.EX11, socketPath, acl.Read, acl.Write, acl.Execute)

View File

@@ -39,8 +39,8 @@ type ExecPath struct {
 	W bool
 }
 
-// SchedPolicy is the [container] scheduling policy.
-var SchedPolicy int
+// SetSchedIdle is whether to set [std.SCHED_IDLE] scheduling priority.
+var SetSchedIdle bool
 
 // PromoteLayers returns artifacts with identical-by-content layers promoted to
 // the highest priority instance, as if mounted via [ExecPath].
@@ -413,7 +413,8 @@ func (a *execArtifact) cure(f *FContext, hostNet bool) (err error) {
 	z.ParentPerm = 0700
 	z.HostNet = hostNet
 	z.Hostname = "cure"
-	z.SchedPolicy = SchedPolicy
+	z.SetScheduler = SetSchedIdle
+	z.SchedPolicy = std.SCHED_IDLE
 	if z.HostNet {
 		z.Hostname = "cure-net"
 	}
@@ -440,28 +441,23 @@ func (a *execArtifact) cure(f *FContext, hostNet bool) (err error) {
 			}
 		}()
 
-		bw := f.cache.getWriter(status)
+		brStdout, brStderr := f.cache.getReader(stdout), f.cache.getReader(stderr)
 		stdoutDone, stderrDone := make(chan struct{}), make(chan struct{})
 		go scanVerbose(
 			msg, cancel, stdoutDone,
 			"("+a.name+":1)",
-			io.TeeReader(stdout, bw),
+			io.TeeReader(brStdout, status),
 		)
 		go scanVerbose(
 			msg, cancel, stderrDone,
 			"("+a.name+":2)",
-			io.TeeReader(stderr, bw),
+			io.TeeReader(brStderr, status),
 		)
 		defer func() {
 			<-stdoutDone
 			<-stderrDone
-
-			flushErr := bw.Flush()
-			if err == nil {
-				err = flushErr
-			}
-			f.cache.putWriter(bw)
+			f.cache.putReader(brStdout)
+			f.cache.putReader(brStderr)
 		}()
 	} else {
 		z.Stdout, z.Stderr = status, status

View File

@@ -1790,6 +1790,18 @@ func (pending *pendingArtifactDep) cure(c *Cache) {
 	pending.errsMu.Unlock()
 }
 
+// OpenStatus attempts to open the status file associated to an [Artifact]. If
+// err is nil, the caller must close the resulting reader.
+func (c *Cache) OpenStatus(a Artifact) (r io.ReadSeekCloser, err error) {
+	c.identMu.RLock()
+	r, err = os.Open(c.base.Append(
+		dirStatus,
+		Encode(c.Ident(a).Value())).String(),
+	)
+	c.identMu.RUnlock()
+	return
+}
+
 // Close cancels all pending cures and waits for them to clean up.
 func (c *Cache) Close() {
 	c.closeOnce.Do(func() {

View File

@@ -71,6 +71,8 @@ func init() {
 		Name:        "attr",
 		Description: "Commands for Manipulating Filesystem Extended Attributes",
 		Website:     "https://savannah.nongnu.org/projects/attr/",
+
+		ID: 137,
 	}
 }
@@ -98,5 +100,11 @@ func init() {
 		Name:        "acl",
 		Description: "Commands for Manipulating POSIX Access Control Lists",
 		Website:     "https://savannah.nongnu.org/projects/acl/",
+		Dependencies: P{
+			Attr,
+		},
+
+		ID: 16,
 	}
 }

View File

@@ -1,6 +1,12 @@
 package rosa
 
 import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"strconv"
 	"sync"
 
 	"hakurei.app/internal/pkg"
@@ -10,8 +16,16 @@ import (
 type PArtifact int
 
 const (
+	LLVMCompilerRT PArtifact = iota
+	LLVMRuntimes
+	LLVMClang
+	// EarlyInit is the Rosa OS init program.
+	EarlyInit
+	// ImageSystem is the Rosa OS /system image.
+	ImageSystem
 	// ImageInitramfs is the Rosa OS initramfs archive.
-	ImageInitramfs PArtifact = iota
+	ImageInitramfs
 	// Kernel is the generic Rosa OS Linux kernel.
 	Kernel
@@ -19,6 +33,8 @@ const (
 	KernelHeaders
 	// KernelSource is a writable kernel source tree installed to [AbsUsrSrc].
 	KernelSource
+	// Firmware is firmware blobs for use with the Linux kernel.
+	Firmware
 
 	ACL
 	ArgpStandalone
@@ -52,7 +68,6 @@ const (
 	Gzip
 	Hakurei
 	HakureiDist
-	IniConfig
 	Kmod
 	LibXau
 	Libcap
@@ -77,10 +92,11 @@ const (
 	NSS
 	NSSCACert
 	Ncurses
+	Nettle
 	Ninja
 	OpenSSL
 	PCRE2
-	Packaging
+	Parallel
 	Patch
 	Perl
 	PerlLocaleGettext
@@ -94,12 +110,25 @@ const (
 	PerlUnicodeGCString
 	PerlYAMLTiny
 	PkgConfig
-	Pluggy
 	Procps
-	PyTest
-	Pygments
 	Python
+	PythonCfgv
+	PythonDiscovery
+	PythonDistlib
+	PythonFilelock
+	PythonIdentify
+	PythonIniConfig
+	PythonNodeenv
+	PythonPackaging
+	PythonPlatformdirs
+	PythonPluggy
+	PythonPreCommit
+	PythonPyTest
+	PythonPyYAML
+	PythonPygments
+	PythonVirtualenv
 	QEMU
+	Rdfind
 	Rsync
 	Sed
 	Setuptools
@@ -120,8 +149,8 @@ const (
 	Zlib
 	Zstd
 
-	// _presetUnexportedStart is the first unexported preset.
-	_presetUnexportedStart
+	// PresetUnexportedStart is the first unexported preset.
+	PresetUnexportedStart
 
 	buildcatrust = iota - 1
 	utilMacros
@@ -137,10 +166,40 @@ const (
 	// part of the [Std] toolchain.
 	Stage0
 
-	// _presetEnd is the total number of presets and does not denote a preset.
-	_presetEnd
+	// PresetEnd is the total number of presets and does not denote a preset.
+	PresetEnd
 )
 
+// P represents multiple [PArtifact] and is stable through JSON.
+type P []PArtifact
+
+// MarshalJSON represents [PArtifact] by their [Metadata.Name].
+func (s P) MarshalJSON() ([]byte, error) {
+	names := make([]string, len(s))
+	for i, p := range s {
+		names[i] = GetMetadata(p).Name
+	}
+	return json.Marshal(names)
+}
+
+// UnmarshalJSON resolves the value created by MarshalJSON back to [P].
+func (s *P) UnmarshalJSON(data []byte) error {
+	var names []string
+	if err := json.Unmarshal(data, &names); err != nil {
+		return err
+	}
+	*s = make(P, len(names))
+	for i, name := range names {
+		if p, ok := ResolveName(name); !ok {
+			return fmt.Errorf("unknown artifact %q", name)
+		} else {
+			(*s)[i] = p
+		}
+	}
+	return nil
+}
+
 // Metadata is stage-agnostic information of a [PArtifact] not directly
 // representable in the resulting [pkg.Artifact].
 type Metadata struct {
@@ -152,19 +211,91 @@ type Metadata struct {
 	Description string `json:"description"`
 	// Project home page.
 	Website string `json:"website,omitempty"`
+	// Runtime dependencies.
+	Dependencies P `json:"dependencies"`
+
+	// Project identifier on [Anitya].
+	//
+	// [Anitya]: https://release-monitoring.org/
+	ID int `json:"-"`
+	// Optional custom version checking behaviour.
+	latest func(v *Versions) string
+}
+
+// GetLatest returns the latest version described by v.
+func (meta *Metadata) GetLatest(v *Versions) string {
+	if meta.latest != nil {
+		return meta.latest(v)
+	}
+	return v.Latest
 }
 
 // Unversioned denotes an unversioned [PArtifact].
 const Unversioned = "\x00"
 
+// UnpopulatedIDError is returned by [Metadata.GetLatest] for an instance of
+// [Metadata] where ID is not populated.
+type UnpopulatedIDError struct{}
+
+func (UnpopulatedIDError) Unwrap() error { return errors.ErrUnsupported }
+func (UnpopulatedIDError) Error() string { return "Anitya ID is not populated" }
+
+// Versions are package versions returned by Anitya.
+type Versions struct {
+	// The latest version for the project, as determined by the version sorting algorithm.
+	Latest string `json:"latest_version"`
+	// List of all versions that aren't flagged as pre-release.
+	Stable []string `json:"stable_versions"`
+	// List of all versions stored, sorted from newest to oldest.
+	All []string `json:"versions"`
+}
+
+// getStable returns the first Stable version, or Latest if that is unavailable.
+func (v *Versions) getStable() string {
+	if len(v.Stable) == 0 {
+		return v.Latest
+	}
+	return v.Stable[0]
+}
+
+// GetVersions returns versions fetched from Anitya.
+func (meta *Metadata) GetVersions(ctx context.Context) (*Versions, error) {
+	if meta.ID == 0 {
+		return nil, UnpopulatedIDError{}
+	}
+
+	var resp *http.Response
+	if req, err := http.NewRequestWithContext(
+		ctx,
+		http.MethodGet,
+		"https://release-monitoring.org/api/v2/versions/?project_id="+
+			strconv.Itoa(meta.ID),
+		nil,
+	); err != nil {
+		return nil, err
+	} else {
+		req.Header.Set("User-Agent", "Rosa/1.1")
+		if resp, err = http.DefaultClient.Do(req); err != nil {
+			return nil, err
+		}
+	}
+
+	var v Versions
+	err := json.NewDecoder(resp.Body).Decode(&v)
+	return &v, errors.Join(err, resp.Body.Close())
+}
+
 var (
 	// artifactsM is an array of [PArtifact] metadata.
-	artifactsM [_presetEnd]Metadata
+	artifactsM [PresetEnd]Metadata
 	// artifacts stores the result of Metadata.f.
-	artifacts [_toolchainEnd][len(artifactsM)]pkg.Artifact
-	// versions stores the version of [PArtifact].
-	versions [_toolchainEnd][len(artifactsM)]string
+	artifacts [_toolchainEnd][len(artifactsM)]struct {
+		a pkg.Artifact
+		v string
+	}
 	// artifactsOnce is for lazy initialisation of artifacts.
 	artifactsOnce [_toolchainEnd][len(artifactsM)]sync.Once
 )
@@ -172,25 +303,28 @@ var (
 // GetMetadata returns [Metadata] of a [PArtifact].
 func GetMetadata(p PArtifact) *Metadata { return &artifactsM[p] }
 
+// construct constructs a [pkg.Artifact] corresponding to a [PArtifact] once.
+func (t Toolchain) construct(p PArtifact) {
+	artifactsOnce[t][p].Do(func() {
+		artifacts[t][p].a, artifacts[t][p].v = artifactsM[p].f(t)
+	})
+}
+
 // Load returns the resulting [pkg.Artifact] of [PArtifact].
 func (t Toolchain) Load(p PArtifact) pkg.Artifact {
-	artifactsOnce[t][p].Do(func() {
-		artifacts[t][p], versions[t][p] = artifactsM[p].f(t)
-	})
-	return artifacts[t][p]
+	t.construct(p)
+	return artifacts[t][p].a
 }
 
 // Version returns the version string of [PArtifact].
 func (t Toolchain) Version(p PArtifact) string {
-	artifactsOnce[t][p].Do(func() {
-		artifacts[t][p], versions[t][p] = artifactsM[p].f(t)
-	})
-	return versions[t][p]
+	t.construct(p)
+	return artifacts[t][p].v
 }
 
 // ResolveName returns a [PArtifact] by name.
 func ResolveName(name string) (p PArtifact, ok bool) {
-	for i := range _presetUnexportedStart {
+	for i := range PresetUnexportedStart {
 		if name == artifactsM[i].Name {
 			return i, true
 		}

View File

@@ -1,19 +1,20 @@
-package rosa
+package rosa_test
 
-import "testing"
+import (
+	"testing"
 
-// PresetEnd is the total PArtifact count exported for testing.
-const PresetEnd = _presetEnd
+	"hakurei.app/internal/rosa"
+)
 
 func TestLoad(t *testing.T) {
 	t.Parallel()
-	for i := range _presetEnd {
-		p := PArtifact(i)
-		t.Run(GetMetadata(p).Name, func(t *testing.T) {
+	for i := range rosa.PresetEnd {
+		p := rosa.PArtifact(i)
+		t.Run(rosa.GetMetadata(p).Name, func(t *testing.T) {
 			t.Parallel()
-			Std.Load(p)
+			rosa.Std.Load(p)
 		})
 	}
 }
@@ -21,13 +22,13 @@ func TestLoad(t *testing.T) {
 func TestResolveName(t *testing.T) {
 	t.Parallel()
-	for i := range _presetUnexportedStart {
+	for i := range rosa.PresetUnexportedStart {
 		p := i
-		name := GetMetadata(p).Name
+		name := rosa.GetMetadata(p).Name
 		t.Run(name, func(t *testing.T) {
 			t.Parallel()
-			if got, ok := ResolveName(name); !ok {
+			if got, ok := rosa.ResolveName(name); !ok {
 				t.Fatal("ResolveName: ok = false")
 			} else if got != p {
 				t.Fatalf("ResolveName: %d, want %d", got, p)
@@ -38,15 +39,28 @@ func TestResolveName(t *testing.T) {
 func TestResolveNameUnexported(t *testing.T) {
 	t.Parallel()
-	for i := _presetUnexportedStart; i < _presetEnd; i++ {
+	for i := rosa.PresetUnexportedStart; i < rosa.PresetEnd; i++ {
 		p := i
-		name := GetMetadata(p).Name
+		name := rosa.GetMetadata(p).Name
 		t.Run(name, func(t *testing.T) {
 			t.Parallel()
-			if got, ok := ResolveName(name); ok {
+			if got, ok := rosa.ResolveName(name); ok {
 				t.Fatalf("ResolveName: resolved unexported preset %d", got)
 			}
 		})
 	}
 }
+
+func TestUnique(t *testing.T) {
+	t.Parallel()
+	names := make(map[string]struct{})
+	for i := range rosa.PresetEnd {
+		name := rosa.GetMetadata(rosa.PArtifact(i)).Name
+		if _, ok := names[name]; ok {
+			t.Fatalf("name %s is not unique", name)
+		}
+		names[name] = struct{}{}
+	}
+}

View File

@@ -32,5 +32,7 @@ func init() {
 		Name:        "bzip2",
 		Description: "a freely available, patent free, high-quality data compressor",
 		Website:     "https://sourceware.org/bzip2/",
+
+		ID: 237,
 	}
 }

View File

@@ -111,6 +111,8 @@ func init() {
 		Name: "cmake",
 		Description: "cross-platform, open-source build system",
 		Website: "https://cmake.org/",
+
+		ID: 306,
 	}
 }
@@ -126,6 +128,9 @@ type CMakeHelper struct {
 	Cache [][2]string
 	// Runs after install.
 	Script string
+
+	// Whether to generate Makefile instead.
+	Make bool
 }
 
 var _ Helper = new(CMakeHelper)
@@ -139,7 +144,10 @@ func (attr *CMakeHelper) name(name, version string) string {
 }
 
 // extra returns a hardcoded slice of [CMake] and [Ninja].
-func (*CMakeHelper) extra(int) []PArtifact {
+func (attr *CMakeHelper) extra(int) []PArtifact {
+	if attr != nil && attr.Make {
+		return []PArtifact{CMake, Make}
+	}
 	return []PArtifact{CMake, Ninja}
 }
@@ -171,11 +179,19 @@ func (attr *CMakeHelper) script(name string) string {
 		panic("CACHE must be non-empty")
 	}
 
+	generate := "Ninja"
+	jobs := ""
+	if attr.Make {
+		generate = "'Unix Makefiles'"
+		jobs += ` "--parallel=$(nproc)"`
+	}
+
 	return `
-cmake -G Ninja \
+cmake -G ` + generate + ` \
 	-DCMAKE_C_COMPILER_TARGET="${ROSA_TRIPLE}" \
 	-DCMAKE_CXX_COMPILER_TARGET="${ROSA_TRIPLE}" \
 	-DCMAKE_ASM_COMPILER_TARGET="${ROSA_TRIPLE}" \
+	-DCMAKE_INSTALL_LIBDIR=lib \
 	` + strings.Join(slices.Collect(func(yield func(string) bool) {
 		for _, v := range attr.Cache {
 			if !yield("-D" + v[0] + "=" + v[1]) {
@@ -183,9 +199,9 @@ cmake -G Ninja \
 			}
 		}
 	}), " \\\n\t") + ` \
-	-DCMAKE_INSTALL_PREFIX=/work/system \
+	-DCMAKE_INSTALL_PREFIX=/system \
 	'/usr/src/` + name + `/` + path.Join(attr.Append...) + `'
-cmake --build .
-cmake --install .
+cmake --build .` + jobs + `
+cmake --install . --prefix=/work/system
 ` + attr.Script
 }
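The Make switch above selects the generator and conditionally adds a parallelism flag to the build step, since Ninja parallelizes by default while the Makefiles generator does not. A self-contained sketch of that selection logic (function and parameter names here are illustrative, not the actual helper API):

```go
package main

import "fmt"

// buildCommands mirrors the generator selection added to
// CMakeHelper.script: Ninja needs no job flag, while the
// Makefiles generator gets an explicit --parallel on the build step.
func buildCommands(useMake bool) (configure, build string) {
	generate := "Ninja"
	jobs := ""
	if useMake {
		generate = "'Unix Makefiles'"
		jobs = ` "--parallel=$(nproc)"`
	}
	return "cmake -G " + generate, "cmake --build ." + jobs
}

func main() {
	c, b := buildCommands(true)
	fmt.Println(c)
	fmt.Println(b)
}
```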

View File

@@ -4,24 +4,48 @@ import "hakurei.app/internal/pkg"
 func (t Toolchain) newCurl() (pkg.Artifact, string) {
 	const (
-		version = "8.18.0"
-		checksum = "YpOolP_sx1DIrCEJ3elgVAu0wTLDS-EZMZFvOP0eha7FaLueZUlEpuMwDzJNyi7i"
+		version = "8.19.0"
+		checksum = "YHuVLVVp8q_Y7-JWpID5ReNjq2Zk6t7ArHB6ngQXilp_R5l3cubdxu3UKo-xDByv"
 	)
 	return t.NewPackage("curl", version, pkg.NewHTTPGetTar(
 		nil, "https://curl.se/download/curl-"+version+".tar.bz2",
 		mustDecode(checksum),
 		pkg.TarBzip2,
-	), nil, &MakeHelper{
+	), &PackageAttr{
+		Patches: [][2]string{
+			{"test459-misplaced-line-break", `diff --git a/tests/data/test459 b/tests/data/test459
+index 7a2e1db7b3..cc716aa65a 100644
+--- a/tests/data/test459
++++ b/tests/data/test459
+@@ -54,8 +54,8 @@ Content-Type: application/x-www-form-urlencoded
+ arg
+ </protocol>
+ <stderr mode="text">
+-Warning: %LOGDIR/config:1 Option 'data' uses argument with unquoted whitespace.%SP
+-Warning: This may cause side-effects. Consider double quotes.
++Warning: %LOGDIR/config:1 Option 'data' uses argument with unquoted%SP
++Warning: whitespace. This may cause side-effects. Consider double quotes.
+ </stderr>
+ </verify>
+ </testcase>
+`},
+		},
+	}, &MakeHelper{
 		Configure: [][2]string{
 			{"with-openssl"},
 			{"with-ca-bundle", "/system/etc/ssl/certs/ca-bundle.crt"},
+			{"disable-smb"},
 		},
 		Check: []string{
-			"TFLAGS=-j256",
-			"check",
+			`TFLAGS="-j$(expr "$(nproc)" '*' 2)"`,
+			"test-nonflaky",
 		},
 	},
+	Perl,
+	Python,
+	PkgConfig,
+	Diffutils,
 	Libpsl,
 	OpenSSL,
@@ -34,5 +58,12 @@ func init() {
 		Name: "curl",
 		Description: "command line tool and library for transferring data with URLs",
 		Website: "https://curl.se/",
+
+		Dependencies: P{
+			Libpsl,
+			OpenSSL,
+		},
+
+		ID: 381,
 	}
 }
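The new Check flags above replace a hardcoded `-j256` with a job count scaled to the machine, and run the `test-nonflaky` target instead of `check`. A sketch of the TFLAGS computation, with a fixed CPU count standing in so the example is deterministic (the real recipe substitutes `$(nproc)` from coreutils):

```shell
# Scale test parallelism to twice the CPU count. The quoted '*' keeps
# the shell from expanding it as a glob; expr does the arithmetic.
cpus=8 # deterministic stand-in for "$(nproc)"
TFLAGS="-j$(expr "$cpus" '*' 2)"
echo "$TFLAGS"
```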

View File

@@ -37,5 +37,7 @@ func init() {
 		Name: "dtc",
 		Description: "The Device Tree Compiler",
 		Website: "https://git.kernel.org/pub/scm/utils/dtc/dtc.git/",
+
+		ID: 16911,
 	}
 }

View File

@@ -45,5 +45,15 @@ func init() {
 		Name: "elfutils",
 		Description: "utilities and libraries to handle ELF files and DWARF data",
 		Website: "https://sourceware.org/elfutils/",
+
+		Dependencies: P{
+			Zlib,
+			Bzip2,
+			Zstd,
+			MuslFts,
+			MuslObstack,
+		},
+
+		ID: 5679,
 	}
 }

View File

@@ -36,9 +36,6 @@ index f135ad9..85c784c 100644
 		// makes assumptions about /etc/passwd
 		SkipCheck: true,
 	},
-	M4,
-	Perl,
-	Autoconf,
 	Automake,
 	Libtool,
 	PkgConfig,
@@ -55,5 +52,7 @@ func init() {
 		Name: "fakeroot",
 		Description: "tool for simulating superuser privileges",
 		Website: "https://salsa.debian.org/clint/fakeroot",
+
+		ID: 12048,
 	}
 }

View File

@@ -25,5 +25,7 @@ func init() {
 		Name: "flex",
 		Description: "scanner generator for lexing in C and C++",
 		Website: "https://github.com/westes/flex/",
+
+		ID: 819,
 	}
 }

View File

@@ -24,11 +24,7 @@ func (t Toolchain) newFuse() (pkg.Artifact, string) {
 		// this project uses pytest
 		SkipTest: true,
 	},
-	IniConfig,
-	Packaging,
-	Pluggy,
-	Pygments,
-	PyTest,
+	PythonPyTest,
 	KernelHeaders,
 ), version
@@ -40,5 +36,7 @@ func init() {
 		Name: "fuse",
 		Description: "the reference implementation of the Linux FUSE interface",
 		Website: "https://github.com/libfuse/libfuse/",
+
+		ID: 861,
 	}
 }
} }

View File

@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
 func (t Toolchain) newGit() (pkg.Artifact, string) {
 	const (
-		version = "2.52.0"
-		checksum = "uH3J1HAN_c6PfGNJd2OBwW4zo36n71wmkdvityYnrh8Ak0D1IifiAvEWz9Vi9DmS"
+		version = "2.53.0"
+		checksum = "rlqSTeNgSeVKJA7nvzGqddFH8q3eFEPB4qRZft-4zth8wTHnbTbm7J90kp_obHGm"
 	)
 	return t.NewPackage("git", version, pkg.NewHTTPGetTar(
 		nil, "https://www.kernel.org/pub/software/scm/git/"+
@@ -52,16 +52,18 @@ disable_test t2200-add-update
 		`GIT_PROVE_OPTS="--jobs 32 --failures"`,
 		"prove",
 	},
+		Install: `make \
+	"-j$(nproc)" \
+	DESTDIR=/work \
+	NO_INSTALL_HARDLINKS=1 \
+	install`,
 	},
-	Perl,
 	Diffutils,
-	M4,
 	Autoconf,
 	Gettext,
 	Zlib,
 	Curl,
-	OpenSSL,
 	Libexpat,
 ), version
 }
@@ -72,6 +74,14 @@ func init() {
 		Name: "git",
 		Description: "distributed version control system",
 		Website: "https://www.git-scm.com/",
+
+		Dependencies: P{
+			Zlib,
+			Curl,
+			Libexpat,
+		},
+
+		ID: 5350,
 	}
 }
@@ -80,14 +90,10 @@ func (t Toolchain) NewViaGit(
 	name, url, rev string,
 	checksum pkg.Checksum,
 ) pkg.Artifact {
-	return t.New(name+"-"+rev, 0, []pkg.Artifact{
-		t.Load(NSSCACert),
-		t.Load(OpenSSL),
-		t.Load(Libpsl),
-		t.Load(Curl),
-		t.Load(Libexpat),
-		t.Load(Git),
-	}, &checksum, nil, `
+	return t.New(name+"-"+rev, 0, t.AppendPresets(nil,
+		NSSCACert,
+		Git,
+	), &checksum, nil, `
 git \
 	-c advice.detachedHead=false \
 	clone \

View File

@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
 func (t Toolchain) newM4() (pkg.Artifact, string) {
 	const (
-		version = "1.4.20"
-		checksum = "RT0_L3m4Co86bVBY3lCFAEs040yI1WdeNmRylFpah8IZovTm6O4wI7qiHJN3qsW9"
+		version = "1.4.21"
+		checksum = "pPa6YOo722Jw80l1OsH1tnUaklnPFjFT-bxGw5iAVrZTm1P8FQaWao_NXop46-pm"
 	)
 	return t.NewPackage("m4", version, pkg.NewHTTPGetTar(
 		nil, "https://ftpmirror.gnu.org/gnu/m4/m4-"+version+".tar.bz2",
@@ -18,6 +18,8 @@ chmod +w tests/test-c32ispunct.sh && echo '#!/bin/sh' > tests/test-c32ispunct.sh
 `,
 	}, (*MakeHelper)(nil),
 	Diffutils,
+	KernelHeaders,
 ), version
 }
 
 func init() {
@@ -27,6 +29,8 @@ func init() {
 		Name: "m4",
 		Description: "a macro processor with GNU extensions",
 		Website: "https://www.gnu.org/software/m4/",
+
+		ID: 1871,
 	}
 }
@@ -52,6 +56,8 @@ func init() {
 		Name: "bison",
 		Description: "a general-purpose parser generator",
 		Website: "https://www.gnu.org/software/bison/",
+
+		ID: 193,
 	}
 }
@@ -75,6 +81,8 @@ func init() {
 		Name: "sed",
 		Description: "a non-interactive command-line text editor",
 		Website: "https://www.gnu.org/software/sed/",
+
+		ID: 4789,
 	}
 }
@@ -108,6 +116,13 @@ func init() {
 		Name: "autoconf",
 		Description: "M4 macros to produce self-contained configure script",
 		Website: "https://www.gnu.org/software/autoconf/",
+
+		Dependencies: P{
+			M4,
+			Perl,
+		},
+
+		ID: 141,
 	}
 }
@@ -133,8 +148,6 @@ test_disable '#!/bin/sh' t/distname.sh
 test_disable '#!/bin/sh' t/pr9.sh
 `,
 	}, (*MakeHelper)(nil),
-	M4,
-	Perl,
 	Grep,
 	Gzip,
 	Autoconf,
@@ -148,6 +161,12 @@ func init() {
 		Name: "automake",
 		Description: "a tool for automatically generating Makefile.in files",
 		Website: "https://www.gnu.org/software/automake/",
+
+		Dependencies: P{
+			Autoconf,
+		},
+
+		ID: 144,
 	}
 }
@@ -177,6 +196,8 @@ func init() {
 		Name: "libtool",
 		Description: "a generic library support script",
 		Website: "https://www.gnu.org/software/libtool/",
+
+		ID: 1741,
 	}
 }
@@ -201,6 +222,8 @@ func init() {
 		Name: "gzip",
 		Description: "a popular data compression program",
 		Website: "https://www.gnu.org/software/gzip/",
+
+		ID: 1290,
 	}
 }
@@ -245,6 +268,8 @@ func init() {
 		Name: "gettext",
 		Description: "tools for producing multi-lingual messages",
 		Website: "https://www.gnu.org/software/gettext/",
+
+		ID: 898,
 	}
 }
@@ -276,6 +301,8 @@ func init() {
 		Name: "diffutils",
 		Description: "several programs related to finding differences between files",
 		Website: "https://www.gnu.org/software/diffutils/",
+
+		ID: 436,
 	}
 }
@@ -306,6 +333,8 @@ func init() {
 		Name: "patch",
 		Description: "a program to apply diff output to files",
 		Website: "https://savannah.gnu.org/projects/patch/",
+
+		ID: 2597,
 	}
 }
@@ -334,13 +363,15 @@ func init() {
 		Name: "bash",
 		Description: "the Bourne Again SHell",
 		Website: "https://www.gnu.org/software/bash/",
+
+		ID: 166,
 	}
 }
 
 func (t Toolchain) newCoreutils() (pkg.Artifact, string) {
 	const (
-		version = "9.9"
-		checksum = "B1_TaXj1j5aiVIcazLWu8Ix03wDV54uo2_iBry4qHG6Y-9bjDpUPlkNLmU_3Nvw6"
+		version = "9.10"
+		checksum = "o-B9wssRnZySzJUI1ZJAgw-bZtj1RC67R9po2AcM2OjjS8FQIl16IRHpC6IwO30i"
 	)
 	return t.NewPackage("coreutils", version, pkg.NewHTTPGetTar(
 		nil, "https://ftpmirror.gnu.org/gnu/coreutils/coreutils-"+version+".tar.gz",
@@ -353,12 +384,105 @@ test_disable() { chmod +w "$2" && echo "$1" > "$2"; }
 test_disable '#!/bin/sh' gnulib-tests/test-c32ispunct.sh
 test_disable '#!/bin/sh' tests/split/line-bytes.sh
-test_disable '#!/bin/sh' tests/dd/no-allocate.sh
+test_disable '#!/bin/sh' tests/ls/hyperlink.sh
+test_disable '#!/bin/sh' tests/env/env.sh
 test_disable 'int main(){return 0;}' gnulib-tests/test-chown.c
 test_disable 'int main(){return 0;}' gnulib-tests/test-fchownat.c
 test_disable 'int main(){return 0;}' gnulib-tests/test-lchown.c
 `,
+		Patches: [][2]string{
+			{"tests-fix-job-control", `From 21d287324aa43aa3a31f39619ade0deac7fd6013 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?P=C3=A1draig=20Brady?= <P@draigBrady.com>
+Date: Tue, 24 Feb 2026 15:44:41 +0000
+Subject: [PATCH] tests: fix job control triggering test termination
+
+This avoids the test harness being terminated like:
+
+make[1]: *** [Makefile:24419: check-recursive] Hangup
+make[3]: *** [Makefile:24668: check-TESTS] Hangup
+make: *** [Makefile:24922: check] Hangup
+make[2]: *** [Makefile:24920: check-am] Hangup
+make[4]: *** [Makefile:24685: tests/misc/usage_vs_refs.log] Error 129
+...
+
+This happened sometimes when the tests were being run non interactively.
+For example when run like:
+
+setsid make TESTS="tests/timeout/timeout.sh \
+tests/tail/overlay-headers.sh" SUBDIRS=. -j2 check
+
+Note the race window can be made bigger by adding a sleep
+after tail is stopped in overlay-headers.sh
+
+The race can trigger the kernel to induce its job control
+mechanism to prevent stuck processes.
+I.e. where it sends SIGHUP + SIGCONT to a process group
+when it determines that group may become orphaned,
+and there are stopped processes in that group.
+
+* tests/tail/overlay-headers.sh: Use setsid(1) to keep the stopped
+tail process in a separate process group, thus avoiding any kernel
+job control protection mechanism.
+* tests/timeout/timeout.sh: Use setsid(1) to avoid the kernel
+checking the main process group when sleep(1) is reparented.
+Fixes https://bugs.gnu.org/80477
+---
+ tests/tail/overlay-headers.sh | 8 +++++++-
+ tests/timeout/timeout.sh | 11 ++++++++---
+ 2 files changed, 15 insertions(+), 4 deletions(-)
+
+diff --git a/tests/tail/overlay-headers.sh b/tests/tail/overlay-headers.sh
+index be9b6a7df..1e6da0a3f 100755
+--- a/tests/tail/overlay-headers.sh
++++ b/tests/tail/overlay-headers.sh
+@@ -20,6 +20,8 @@
+ . "${srcdir=.}/tests/init.sh"; path_prepend_ ./src
+ print_ver_ tail sleep
+ 
++setsid true || skip_ 'setsid required to control groups'
++
+ # Function to count number of lines from tail
+ # while ignoring transient errors due to resource limits
+ countlines_ ()
+@@ -54,7 +56,11 @@ echo start > file2 || framework_failure_
+ env sleep 60 & sleep=$!
+ 
+ # Note don't use timeout(1) here as it currently
+-# does not propagate SIGCONT
++# does not propagate SIGCONT.
++# Note use setsid here to ensure we're in a separate process group
++# as we're going to STOP this tail process, and this can trigger
++# the kernel to send SIGHUP to a group if other tests have
++# processes that are reparented. (See tests/timeout/timeout.sh).
+ tail $fastpoll --pid=$sleep -f file1 file2 > out & pid=$!
+ 
+ # Ensure tail is running
+diff --git a/tests/timeout/timeout.sh b/tests/timeout/timeout.sh
+index 9a395416b..fbb043312 100755
+--- a/tests/timeout/timeout.sh
++++ b/tests/timeout/timeout.sh
+@@ -56,9 +56,14 @@ returns_ 124 timeout --foreground -s0 -k1 .1 sleep 10 && fail=1
+ ) || fail=1
+ 
+ # Don't be confused when starting off with a child (Bug#9098).
+-out=$(sleep .1 & exec timeout .5 sh -c 'sleep 2; echo foo')
+-status=$?
+-test "$out" = "" && test $status = 124 || fail=1
++# Use setsid to avoid sleep being in the test's process group, as
++# upon reparenting it can trigger an orphaned process group SIGHUP
++# (if there were stopped processes in other tests).
++if setsid true; then
++  out=$(setsid sleep .1 & exec timeout .5 sh -c 'sleep 2; echo foo')
++  status=$?
++  test "$out" = "" && test $status = 124 || fail=1
++fi
+ 
+ # Verify --verbose output
+ cat > exp <<\EOF
+-- 
+2.53.0
+`},
+		},
 		Flag: TEarly,
 	}, &MakeHelper{
 		Configure: [][2]string{
@@ -378,13 +502,15 @@ func init() {
 		Name: "coreutils",
 		Description: "the basic file, shell and text manipulation utilities",
 		Website: "https://www.gnu.org/software/coreutils/",
+
+		ID: 343,
 	}
 }
 
 func (t Toolchain) newTexinfo() (pkg.Artifact, string) {
 	const (
-		version = "7.2"
-		checksum = "9EelM5b7QGMAY5DKrAm_El8lofBGuFWlaBPSBhh7l_VQE8054MBmC0KBvGrABqjv"
+		version = "7.3"
+		checksum = "RRmC8Xwdof7JuZJeWGAQ_GeASIHAuJFQMbNONXBz5InooKIQGmqmWRjGNGEr5n4-"
 	)
 	return t.NewPackage("texinfo", version, pkg.NewHTTPGetTar(
 		nil, "https://ftpmirror.gnu.org/gnu/texinfo/texinfo-"+version+".tar.gz",
@@ -404,6 +530,13 @@ func init() {
 		Name: "texinfo",
 		Description: "the GNU square-wheel-reinvension of man pages",
 		Website: "https://www.gnu.org/software/texinfo/",
+
+		Dependencies: P{
+			Perl,
+			Gawk,
+		},
+
+		ID: 4958,
 	}
 }
@@ -427,13 +560,15 @@ func init() {
 		Name: "gperf",
 		Description: "a perfect hash function generator",
 		Website: "https://www.gnu.org/software/gperf/",
+
+		ID: 1237,
 	}
 }
 
 func (t Toolchain) newGawk() (pkg.Artifact, string) {
 	const (
-		version = "5.3.2"
-		checksum = "uIs0d14h_d2DgMGYwrPtegGNyt_bxzG3D6Fe-MmExx_pVoVkQaHzrtmiXVr6NHKk"
+		version = "5.4.0"
+		checksum = "m0RkIolC-PI7EY5q8pcx5Y-0twlIW0Yp3wXXmV-QaHorSdf8BhZ7kW9F8iWomz0C"
 	)
 	return t.NewPackage("gawk", version, pkg.NewHTTPGetTar(
 		nil, "https://ftpmirror.gnu.org/gnu/gawk/gawk-"+version+".tar.gz",
@@ -453,6 +588,8 @@ func init() {
 		Name: "gawk",
 		Description: "an implementation of awk with GNU extensions",
 		Website: "https://www.gnu.org/software/gawk/",
+
+		ID: 868,
 	}
 }
@@ -484,6 +621,8 @@ func init() {
 		Name: "grep",
 		Description: "searches input for lines containing a match to a pattern",
 		Website: "https://www.gnu.org/software/grep/",
+
+		ID: 1251,
 	}
 }
@@ -514,6 +653,8 @@ func init() {
 		Name: "findutils",
 		Description: "the basic directory searching utilities",
 		Website: "https://www.gnu.org/software/findutils/",
+
+		ID: 812,
 	}
 }
@@ -531,7 +672,6 @@ func (t Toolchain) newBC() (pkg.Artifact, string) {
 		Writable: true,
 		Chmod: true,
 	}, (*MakeHelper)(nil),
-	Perl,
 	Texinfo,
 ), version
 }
@@ -542,13 +682,15 @@ func init() {
 		Name: "bc",
 		Description: "an arbitrary precision numeric processing language",
 		Website: "https://www.gnu.org/software/bc/",
+
+		ID: 170,
 	}
 }
 
 func (t Toolchain) newLibiconv() (pkg.Artifact, string) {
 	const (
-		version = "1.18"
-		checksum = "iV5q3VxP5VPdJ-X7O5OQI4fGm8VjeYb5viLd1L3eAHg26bbHb2_Qn63XPF3ucVZr"
+		version = "1.19"
+		checksum = "UibB6E23y4MksNqYmCCrA3zTFO6vJugD1DEDqqWYFZNuBsUWMVMcncb_5pPAr88x"
 	)
 	return t.NewPackage("libiconv", version, pkg.NewHTTPGetTar(
 		nil, "https://ftpmirror.gnu.org/gnu/libiconv/libiconv-"+version+".tar.gz",
@@ -563,6 +705,8 @@ func init() {
 		Name: "libiconv",
 		Description: "iconv implementation independent of glibc",
 		Website: "https://www.gnu.org/software/libiconv/",
+
+		ID: 10656,
 	}
 }
@@ -603,13 +747,44 @@ func init() {
 		Name: "tar",
 		Description: "provides the ability to create tar archives",
 		Website: "https://www.gnu.org/software/tar/",
+
+		ID: 4939,
+	}
+}
+
+func (t Toolchain) newParallel() (pkg.Artifact, string) {
+	const (
+		version = "20260222"
+		checksum = "4wxjMi3G2zMxr9hvLcIn6D7_12A3e5UNObeTPhzn7mDAYwsZApmmkxfGPyllQQ7E"
+	)
+	return t.NewPackage("parallel", version, pkg.NewHTTPGetTar(
+		nil, "https://ftpmirror.gnu.org/gnu/parallel/parallel-"+version+".tar.bz2",
+		mustDecode(checksum),
+		pkg.TarBzip2,
+	), nil, (*MakeHelper)(nil),
+		Perl,
+	), version
+}
+
+func init() {
+	artifactsM[Parallel] = Metadata{
+		f: Toolchain.newParallel,
+		Name: "parallel",
+		Description: "a shell tool for executing jobs in parallel using one or more computers",
+		Website: "https://www.gnu.org/software/parallel/",
+
+		Dependencies: P{
+			Perl,
+		},
+
+		ID: 5448,
 	}
 }
 
 func (t Toolchain) newBinutils() (pkg.Artifact, string) {
 	const (
-		version = "2.45"
-		checksum = "hlLtqqHDmzAT2OQVHaKEd_io2DGFvJkaeS-igBuK8bRRir7LUKGHgHYNkDVKaHTT"
+		version = "2.46.0"
+		checksum = "4kK1_EXQipxSqqyvwD4LbiMLFKCUApjq6PeG4XJP4dzxYGqDeqXfh8zLuTyOuOVR"
 	)
 	return t.NewPackage("binutils", version, pkg.NewHTTPGetTar(
 		nil, "https://ftpmirror.gnu.org/gnu/binutils/binutils-"+version+".tar.bz2",
@@ -626,6 +801,8 @@ func init() {
 		Name: "binutils",
 		Description: "a collection of binary tools",
 		Website: "https://www.gnu.org/software/binutils/",
+
+		ID: 7981,
 	}
 }
@@ -650,6 +827,8 @@ func init() {
 		Name: "gmp",
 		Description: "a free library for arbitrary precision arithmetic",
 		Website: "https://gmplib.org/",
+
+		ID: 1186,
 	}
 }
@@ -674,6 +853,12 @@ func init() {
 		Name: "mpfr",
 		Description: "a C library for multiple-precision floating-point computations",
 		Website: "https://www.mpfr.org/",
+
+		Dependencies: P{
+			GMP,
+		},
+
+		ID: 2019,
 	}
 }
@@ -688,7 +873,6 @@ func (t Toolchain) newMPC() (pkg.Artifact, string) {
 		mustDecode(checksum),
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
-	GMP,
 	MPFR,
 ), version
 }
@@ -699,6 +883,12 @@ func init() {
 		Name: "mpc",
 		Description: "a C library for the arithmetic of complex numbers",
 		Website: "https://www.multiprecision.org/",
+
+		Dependencies: P{
+			MPFR,
+		},
+
+		ID: 1667,
 	}
 }
@@ -895,10 +1085,7 @@ ln -s system/lib /work/
 	},
 	Binutils,
-	GMP,
-	MPFR,
 	MPC,
 	Zlib,
 	Libucontext,
 	KernelHeaders,
@@ -911,5 +1098,15 @@ func init() {
 		Name: "gcc",
 		Description: "The GNU Compiler Collection",
 		Website: "https://www.gnu.org/software/gcc/",
+
+		Dependencies: P{
+			Binutils,
+			MPC,
+			Zlib,
+			Libucontext,
+		},
+
+		ID: 6502,
 	}
 }

View File

@@ -74,22 +74,8 @@ func (t Toolchain) newGoLatest() (pkg.Artifact, string) {
 		bootstrapExtra = append(bootstrapExtra, t.newGoBootstrap())
 	case "arm64":
-		bootstrapEnv = append(bootstrapEnv,
-			"GOROOT_BOOTSTRAP=/system",
-		)
-		bootstrapExtra = append(bootstrapExtra,
-			t.Load(Binutils),
-			t.Load(GMP),
-			t.Load(MPFR),
-			t.Load(MPC),
-			t.Load(Zlib),
-			t.Load(Libucontext),
-			t.Load(gcc),
-		)
+		bootstrapEnv = append(bootstrapEnv, "GOROOT_BOOTSTRAP=/system")
+		bootstrapExtra = t.AppendPresets(bootstrapExtra, gcc)
 		finalEnv = append(finalEnv, "CGO_ENABLED=0")
 	default:
@@ -154,8 +140,8 @@ rm \
 	)
 	const (
-		version = "1.26.0"
-		checksum = "uHLcrgBc0NMcyTMDLRNAZIcOx0RyQlyekSl9xbWSwj3esEFWJysYLfLa3S8p39Nh"
+		version = "1.26.1"
+		checksum = "DdC5Ea-aCYPUHNObQh_09uWU0vn4e-8Ben850Vq-5OoamDRrXhuYI4YQ_BOFgaT0"
 	)
 	return t.newGo(
 		version,
@@ -177,5 +163,7 @@ func init() {
 		Name: "go",
 		Description: "the Go programming language toolchain",
 		Website: "https://go.dev/",
+
+		ID: 1227,
 	}
 }

View File

@@ -9,8 +9,8 @@ import (
 func (t Toolchain) newGLib() (pkg.Artifact, string) {
 	const (
-		version = "2.86.4"
-		checksum = "AfTjBrrxtXXPL6dFa1LfTe40PyPSth62CoIkM5m_VJTUngGLOFHw6I4XE7RGQE8G"
+		version = "2.87.5"
+		checksum = "L5jurSfyCTlcSTfx-1RBHbNZPL0HnNQakmFXidgAV1JFu0lbytowCCBAALTp-WGc"
 	)
 	return t.NewPackage("glib", version, pkg.NewHTTPGet(
 		nil, "https://download.gnome.org/sources/glib/"+
@@ -40,7 +40,7 @@ func (t Toolchain) newGLib() (pkg.Artifact, string) {
 		},
 	},
 	XZ,
-	Packaging,
+	PythonPackaging,
 	Bash,
 	PCRE2,
@@ -54,6 +54,14 @@ func init() {
 		Name: "glib",
 		Description: "the GNU library of miscellaneous stuff",
-		Website: "https://gitlab.gnome.org/GNOME/glib/",
+		Website: "https://developer.gnome.org/glib/",
+
+		Dependencies: P{
+			PCRE2,
+			Libffi,
+			Zlib,
+		},
+
+		ID: 10024,
 	}
 }

View File

@@ -2,44 +2,45 @@ package rosa
 import "hakurei.app/internal/pkg"
 
-func (t Toolchain) newHakurei(suffix, script string) pkg.Artifact {
-	return t.New("hakurei"+suffix+"-"+hakureiVersion, 0, []pkg.Artifact{
-		t.Load(Go),
-		t.Load(Gzip),
-		t.Load(PkgConfig),
-		t.Load(KernelHeaders),
-		t.Load(Libseccomp),
-		t.Load(ACL),
-		t.Load(Attr),
-		t.Load(Fuse),
-		t.Load(Xproto),
-		t.Load(LibXau),
-		t.Load(XCBProto),
-		t.Load(XCB),
-		t.Load(Libffi),
-		t.Load(Libexpat),
-		t.Load(Libxml2),
-		t.Load(Wayland),
-		t.Load(WaylandProtocols),
-	}, nil, []string{
-		"CGO_ENABLED=1",
-		"GOCACHE=/tmp/gocache",
-		"CC=clang -O3 -Werror",
-	}, `
+func (t Toolchain) newHakurei(
+	suffix, script string,
+	withHostname bool,
+) pkg.Artifact {
+	hostname := `
 echo '# Building test helper (hostname).'
 go build -v -o /bin/hostname /usr/src/hostname/main.go
 echo
+`
+	if !withHostname {
+		hostname = ""
+	}
 
-chmod -R +w /usr/src/hakurei
+	return t.New("hakurei"+suffix+"-"+hakureiVersion, 0, t.AppendPresets(nil,
+		Go,
+		PkgConfig,
+		// dist tarball
+		Gzip,
+		// statically linked
+		Libseccomp,
+		ACL,
+		Fuse,
+		XCB,
+		Wayland,
+		WaylandProtocols,
+		KernelHeaders,
+	), nil, []string{
+		"CGO_ENABLED=1",
+		"GOCACHE=/tmp/gocache",
+		"CC=clang -O3 -Werror",
+	}, hostname+`
 cd /usr/src/hakurei
 HAKUREI_VERSION='v`+hakureiVersion+`'
 `+script, pkg.Path(AbsUsrSrc.Append("hakurei"), true, t.NewPatchedSource(
-		"hakurei", hakureiVersion, hakureiSource, true, hakureiPatches...,
+		"hakurei", hakureiVersion, hakureiSource, false, hakureiPatches...,
 	)), pkg.Path(AbsUsrSrc.Append("hostname", "main.go"), false, pkg.NewFile(
 		"hostname.go",
 		[]byte(`
@@ -69,10 +70,11 @@ go build -trimpath -v -o /work/system/libexec/hakurei -ldflags="-s -w
 	-buildid=
 	-linkmode external
 	-extldflags=-static
-	-X hakurei.app/internal/info.buildVersion="$HAKUREI_VERSION"
+	-X hakurei.app/internal/info.buildVersion=${HAKUREI_VERSION}
 	-X hakurei.app/internal/info.hakureiPath=/system/bin/hakurei
 	-X hakurei.app/internal/info.hsuPath=/system/bin/hsu
-	-X main.hakureiPath=/system/bin/hakurei" ./...
+	-X main.hakureiPath=/system/bin/hakurei
+" ./...
 echo
 
 echo '# Testing hakurei.'
@@ -84,19 +86,21 @@ mkdir -p /work/system/bin/
 hakurei \
 	sharefs \
 	../../bin/)
-`), hakureiVersion
+`, true), hakureiVersion
 		},
 		Name: "hakurei",
 		Description: "low-level userspace tooling for Rosa OS",
 		Website: "https://hakurei.app/",
+
+		ID: 388834,
 	}
 	artifactsM[HakureiDist] = Metadata{
 		f: func(t Toolchain) (pkg.Artifact, string) {
 			return t.newHakurei("-dist", `
 export HAKUREI_VERSION
 DESTDIR=/work /usr/src/hakurei/dist/release.sh
-`), hakureiVersion
+`, true), hakureiVersion
 		},
 		Name: "hakurei-dist",

View File

@@ -4,48 +4,15 @@ package rosa
 import "hakurei.app/internal/pkg"
 
-const hakureiVersion = "0.3.5"
+const hakureiVersion = "0.3.6"
 
 // hakureiSource is the source code of a hakurei release.
 var hakureiSource = pkg.NewHTTPGetTar(
 	nil, "https://git.gensokyo.uk/security/hakurei/archive/"+
 		"v"+hakureiVersion+".tar.gz",
-	mustDecode("6Tn38NLezRD2d3aGdFg5qFfqn8_KvC6HwMKwJMPvaHmVw8xRgxn8B0PObswl2mOk"),
+	mustDecode("Yul9J2yV0x453lQP9KUnG_wEJo_DbKMNM7xHJGt4rITCSeX9VRK2J4kzAxcv_0-b"),
 	pkg.TarGzip,
 )
 
 // hakureiPatches are patches applied against a hakurei release.
-var hakureiPatches = [][2]string{
+var hakureiPatches [][2]string
-	{"createTemp-error-injection", `diff --git a/container/dispatcher_test.go b/container/dispatcher_test.go
-index 5de37fc..fe0c4db 100644
---- a/container/dispatcher_test.go
-+++ b/container/dispatcher_test.go
-@@ -238,8 +238,11 @@ func sliceAddr[S any](s []S) *[]S { return &s }
- func newCheckedFile(t *testing.T, name, wantData string, closeErr error) osFile {
- 	f := &checkedOsFile{t: t, name: name, want: wantData, closeErr: closeErr}
- 
--	// check happens in Close, and cleanup is not guaranteed to run, so relying on it for sloppy implementations will cause sporadic test results
--	f.cleanup = runtime.AddCleanup(f, func(name string) { f.t.Fatalf("checkedOsFile %s became unreachable without a call to Close", name) }, f.name)
-+	// check happens in Close, and cleanup is not guaranteed to run, so relying
-+	// on it for sloppy implementations will cause sporadic test results
-+	f.cleanup = runtime.AddCleanup(f, func(name string) {
-+		panic("checkedOsFile " + name + " became unreachable without a call to Close")
-+	}, name)
- 	return f
- }
-diff --git a/container/initplace_test.go b/container/initplace_test.go
-index afeddbe..1c2f20b 100644
---- a/container/initplace_test.go
-+++ b/container/initplace_test.go
-@@ -21,7 +21,7 @@ func TestTmpfileOp(t *testing.T) {
- 		Path: samplePath,
- 		Data: sampleData,
- 	}, nil, nil, []stub.Call{
--		call("createTemp", stub.ExpectArgs{"/", "tmp.*"}, newCheckedFile(t, "tmp.32768", sampleDataString, nil), stub.UniqueError(5)),
-+		call("createTemp", stub.ExpectArgs{"/", "tmp.*"}, (*checkedOsFile)(nil), stub.UniqueError(5)),
- 	}, stub.UniqueError(5)},
- 	{"Write", &Params{ParentPerm: 0700}, &TmpfileOp{
}


@@ -1,13 +1,62 @@
package rosa package rosa
import "hakurei.app/internal/pkg" import (
"hakurei.app/container/fhs"
"hakurei.app/internal/pkg"
)
func init() {
artifactsM[EarlyInit] = Metadata{
Name: "earlyinit",
Description: "Rosa OS initramfs init program",
f: func(t Toolchain) (pkg.Artifact, string) {
return t.newHakurei("-early-init", `
mkdir -p /work/system/libexec/hakurei/
echo '# Building earlyinit.'
go build -trimpath -v -o /work/system/libexec/hakurei -ldflags="-s -w
-buildid=
-linkmode external
-extldflags=-static
-X hakurei.app/internal/info.buildVersion=${HAKUREI_VERSION}
" ./cmd/earlyinit
echo
`, false), Unversioned
},
}
}
func (t Toolchain) newImageSystem() (pkg.Artifact, string) {
return t.New("system.img", TNoToolchain, t.AppendPresets(nil,
SquashfsTools,
), nil, nil, `
mksquashfs /mnt/system /work/system.img
`, pkg.Path(fhs.AbsRoot.Append("mnt"), false, t.AppendPresets(nil,
Musl,
Mksh,
Toybox,
Kmod,
Kernel,
Firmware,
)...)), Unversioned
}
func init() {
artifactsM[ImageSystem] = Metadata{
Name: "system-image",
Description: "Rosa OS system image",
f: Toolchain.newImageSystem,
}
}
func (t Toolchain) newImageInitramfs() (pkg.Artifact, string) { func (t Toolchain) newImageInitramfs() (pkg.Artifact, string) {
return t.New("initramfs", TNoToolchain, []pkg.Artifact{ return t.New("initramfs", TNoToolchain, t.AppendPresets(nil,
t.Load(Zstd), Zstd,
t.Load(Hakurei), EarlyInit,
t.Load(GenInitCPIO), GenInitCPIO,
}, nil, nil, ` ), nil, nil, `
gen_init_cpio -t 4294967295 -c /usr/src/initramfs | zstd > /work/initramfs.zst gen_init_cpio -t 4294967295 -c /usr/src/initramfs | zstd > /work/initramfs.zst
`, pkg.Path(AbsUsrSrc.Append("initramfs"), false, pkg.NewFile("initramfs", []byte(` `, pkg.Path(AbsUsrSrc.Append("initramfs"), false, pkg.NewFile("initramfs", []byte(`
dir /dev 0755 0 0 dir /dev 0755 0 0
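gen_init_cpio reads a plain-text spec like the `dir` line above; a minimal sketch of the format (the entries below are illustrative, not from the tree):

```
# <type> <name> <mode> <uid> <gid> [type-specific fields]
dir  /dev 0755 0 0
nod  /dev/console 0600 0 0 c 5 1
file /init /path/to/earlyinit 0755 0 0
```

Each line describes one archive entry: `dir` creates a directory, `nod` a device node (character/block, major, minor), and `file` copies a host file into the archive under the given name and mode.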

File diff suppressed because it is too large


@@ -1,16 +1,16 @@
# #
# Automatically generated file; DO NOT EDIT. # Automatically generated file; DO NOT EDIT.
# Linux/x86 6.12.73 Kernel Configuration # Linux/x86 6.12.76 Kernel Configuration
# #
CONFIG_CC_VERSION_TEXT="clang version 21.1.8" CONFIG_CC_VERSION_TEXT="clang version 22.1.1"
CONFIG_GCC_VERSION=0 CONFIG_GCC_VERSION=0
CONFIG_CC_IS_CLANG=y CONFIG_CC_IS_CLANG=y
CONFIG_CLANG_VERSION=210108 CONFIG_CLANG_VERSION=220101
CONFIG_AS_IS_LLVM=y CONFIG_AS_IS_LLVM=y
CONFIG_AS_VERSION=210108 CONFIG_AS_VERSION=220101
CONFIG_LD_VERSION=0 CONFIG_LD_VERSION=0
CONFIG_LD_IS_LLD=y CONFIG_LD_IS_LLD=y
CONFIG_LLD_VERSION=210108 CONFIG_LLD_VERSION=220101
CONFIG_RUSTC_VERSION=0 CONFIG_RUSTC_VERSION=0
CONFIG_RUSTC_LLVM_VERSION=0 CONFIG_RUSTC_LLVM_VERSION=0
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
@@ -2402,7 +2402,7 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
# #
# Firmware loader # Firmware loader
# #
CONFIG_FW_LOADER=m CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_DEBUG=y CONFIG_FW_LOADER_DEBUG=y
CONFIG_FW_LOADER_PAGED_BUF=y CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_FW_LOADER_SYSFS=y CONFIG_FW_LOADER_SYSFS=y
@@ -2749,7 +2749,7 @@ CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_FD=m CONFIG_BLK_DEV_FD=m
# CONFIG_BLK_DEV_FD_RAWCMD is not set # CONFIG_BLK_DEV_FD_RAWCMD is not set
CONFIG_CDROM=m CONFIG_CDROM=m
CONFIG_BLK_DEV_PCIESSD_MTIP32XX=m CONFIG_BLK_DEV_PCIESSD_MTIP32XX=y
CONFIG_ZRAM=m CONFIG_ZRAM=m
# CONFIG_ZRAM_BACKEND_LZ4 is not set # CONFIG_ZRAM_BACKEND_LZ4 is not set
# CONFIG_ZRAM_BACKEND_LZ4HC is not set # CONFIG_ZRAM_BACKEND_LZ4HC is not set
@@ -2775,9 +2775,9 @@ CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8 CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set # CONFIG_CDROM_PKTCDVD_WCACHE is not set
CONFIG_ATA_OVER_ETH=m CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=m CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=m # CONFIG_XEN_BLKDEV_BACKEND is not set
CONFIG_VIRTIO_BLK=m CONFIG_VIRTIO_BLK=y
CONFIG_BLK_DEV_RBD=m CONFIG_BLK_DEV_RBD=m
CONFIG_BLK_DEV_UBLK=m CONFIG_BLK_DEV_UBLK=m
CONFIG_BLKDEV_UBLK_LEGACY_OPCODES=y CONFIG_BLKDEV_UBLK_LEGACY_OPCODES=y
@@ -2788,13 +2788,12 @@ CONFIG_BLK_DEV_RNBD_SERVER=m
# #
# NVME Support # NVME Support
# #
CONFIG_NVME_KEYRING=m CONFIG_NVME_KEYRING=y
CONFIG_NVME_AUTH=m CONFIG_NVME_AUTH=y
CONFIG_NVME_CORE=m CONFIG_NVME_CORE=y
CONFIG_BLK_DEV_NVME=m CONFIG_BLK_DEV_NVME=y
CONFIG_NVME_MULTIPATH=y CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_VERBOSE_ERRORS is not set # CONFIG_NVME_VERBOSE_ERRORS is not set
CONFIG_NVME_HWMON=y
CONFIG_NVME_FABRICS=m CONFIG_NVME_FABRICS=m
CONFIG_NVME_RDMA=m CONFIG_NVME_RDMA=m
CONFIG_NVME_FC=m CONFIG_NVME_FC=m
@@ -2911,10 +2910,10 @@ CONFIG_KEBA_CP500=m
# #
# SCSI device support # SCSI device support
# #
CONFIG_SCSI_MOD=m CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m CONFIG_RAID_ATTRS=m
CONFIG_SCSI_COMMON=m CONFIG_SCSI_COMMON=y
CONFIG_SCSI=m CONFIG_SCSI=y
CONFIG_SCSI_DMA=y CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y CONFIG_SCSI_PROC_FS=y
@@ -2922,7 +2921,7 @@ CONFIG_SCSI_PROC_FS=y
# #
# SCSI support type (disk, tape, CD-ROM) # SCSI support type (disk, tape, CD-ROM)
# #
CONFIG_BLK_DEV_SD=m CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m CONFIG_CHR_DEV_SG=m
@@ -3042,7 +3041,7 @@ CONFIG_SCSI_DEBUG=m
CONFIG_SCSI_PMCRAID=m CONFIG_SCSI_PMCRAID=m
CONFIG_SCSI_PM8001=m CONFIG_SCSI_PM8001=m
CONFIG_SCSI_BFA_FC=m CONFIG_SCSI_BFA_FC=m
CONFIG_SCSI_VIRTIO=m CONFIG_SCSI_VIRTIO=y
CONFIG_SCSI_CHELSIO_FCOE=m CONFIG_SCSI_CHELSIO_FCOE=m
CONFIG_SCSI_LOWLEVEL_PCMCIA=y CONFIG_SCSI_LOWLEVEL_PCMCIA=y
CONFIG_PCMCIA_AHA152X=m CONFIG_PCMCIA_AHA152X=m
@@ -3052,7 +3051,7 @@ CONFIG_PCMCIA_SYM53C500=m
# CONFIG_SCSI_DH is not set # CONFIG_SCSI_DH is not set
# end of SCSI device support # end of SCSI device support
CONFIG_ATA=m CONFIG_ATA=y
CONFIG_SATA_HOST=y CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y CONFIG_ATA_VERBOSE_ERROR=y
@@ -3064,39 +3063,39 @@ CONFIG_SATA_PMP=y
# #
# Controllers with non-SFF native interface # Controllers with non-SFF native interface
# #
CONFIG_SATA_AHCI=m CONFIG_SATA_AHCI=y
CONFIG_SATA_MOBILE_LPM_POLICY=3 CONFIG_SATA_MOBILE_LPM_POLICY=3
CONFIG_SATA_AHCI_PLATFORM=m CONFIG_SATA_AHCI_PLATFORM=y
CONFIG_AHCI_DWC=m CONFIG_AHCI_DWC=y
CONFIG_AHCI_CEVA=m CONFIG_AHCI_CEVA=y
CONFIG_SATA_INIC162X=m CONFIG_SATA_INIC162X=m
CONFIG_SATA_ACARD_AHCI=m CONFIG_SATA_ACARD_AHCI=y
CONFIG_SATA_SIL24=m CONFIG_SATA_SIL24=y
CONFIG_ATA_SFF=y CONFIG_ATA_SFF=y
# #
# SFF controllers with custom DMA interface # SFF controllers with custom DMA interface
# #
CONFIG_PDC_ADMA=m CONFIG_PDC_ADMA=y
CONFIG_SATA_QSTOR=m CONFIG_SATA_QSTOR=y
CONFIG_SATA_SX4=m CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y CONFIG_ATA_BMDMA=y
# #
# SATA SFF controllers with BMDMA # SATA SFF controllers with BMDMA
# #
CONFIG_ATA_PIIX=m CONFIG_ATA_PIIX=y
CONFIG_SATA_DWC=m CONFIG_SATA_DWC=y
# CONFIG_SATA_DWC_OLD_DMA is not set # CONFIG_SATA_DWC_OLD_DMA is not set
CONFIG_SATA_MV=m CONFIG_SATA_MV=y
CONFIG_SATA_NV=m CONFIG_SATA_NV=y
CONFIG_SATA_PROMISE=m CONFIG_SATA_PROMISE=y
CONFIG_SATA_SIL=m CONFIG_SATA_SIL=y
CONFIG_SATA_SIS=m CONFIG_SATA_SIS=y
CONFIG_SATA_SVW=m CONFIG_SATA_SVW=y
CONFIG_SATA_ULI=m CONFIG_SATA_ULI=y
CONFIG_SATA_VIA=m CONFIG_SATA_VIA=y
CONFIG_SATA_VITESSE=m CONFIG_SATA_VITESSE=y
# #
# PATA SFF controllers with BMDMA # PATA SFF controllers with BMDMA
@@ -3130,7 +3129,7 @@ CONFIG_PATA_RDC=m
CONFIG_PATA_SCH=m CONFIG_PATA_SCH=m
CONFIG_PATA_SERVERWORKS=m CONFIG_PATA_SERVERWORKS=m
CONFIG_PATA_SIL680=m CONFIG_PATA_SIL680=m
CONFIG_PATA_SIS=m CONFIG_PATA_SIS=y
CONFIG_PATA_TOSHIBA=m CONFIG_PATA_TOSHIBA=m
CONFIG_PATA_TRIFLEX=m CONFIG_PATA_TRIFLEX=m
CONFIG_PATA_VIA=m CONFIG_PATA_VIA=m
@@ -3172,8 +3171,8 @@ CONFIG_PATA_PARPORT_ON26=m
# #
# Generic fallback / legacy drivers # Generic fallback / legacy drivers
# #
CONFIG_PATA_ACPI=m CONFIG_PATA_ACPI=y
CONFIG_ATA_GENERIC=m CONFIG_ATA_GENERIC=y
CONFIG_PATA_LEGACY=m CONFIG_PATA_LEGACY=m
CONFIG_MD=y CONFIG_MD=y
CONFIG_BLK_DEV_MD=m CONFIG_BLK_DEV_MD=m
@@ -9621,11 +9620,11 @@ CONFIG_EFI_SECRET=m
CONFIG_SEV_GUEST=m CONFIG_SEV_GUEST=m
CONFIG_TDX_GUEST_DRIVER=m CONFIG_TDX_GUEST_DRIVER=m
CONFIG_VIRTIO_ANCHOR=y CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=m CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=m CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=m CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=m CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_ADMIN_LEGACY=y CONFIG_VIRTIO_PCI_ADMIN_LEGACY=y
CONFIG_VIRTIO_PCI_LEGACY=y CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_VDPA=m CONFIG_VIRTIO_VDPA=m
@@ -12308,7 +12307,6 @@ CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_NMI_CHECK_CPU is not set # CONFIG_NMI_CHECK_CPU is not set
# CONFIG_DEBUG_IRQFLAGS is not set # CONFIG_DEBUG_IRQFLAGS is not set
CONFIG_STACKTRACE=y CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set # CONFIG_DEBUG_KOBJECT is not set
# #
@@ -12345,7 +12343,7 @@ CONFIG_HAVE_RETHOOK=y
CONFIG_RETHOOK=y CONFIG_RETHOOK=y
CONFIG_HAVE_FUNCTION_TRACER=y CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y CONFIG_HAVE_FUNCTION_GRAPH_FREGS=y
CONFIG_HAVE_DYNAMIC_FTRACE=y CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y


@@ -1,16 +1,16 @@
# #
# Automatically generated file; DO NOT EDIT. # Automatically generated file; DO NOT EDIT.
# Linux/arm64 6.12.73 Kernel Configuration # Linux/arm64 6.12.76 Kernel Configuration
# #
CONFIG_CC_VERSION_TEXT="clang version 21.1.8" CONFIG_CC_VERSION_TEXT="clang version 22.1.1"
CONFIG_GCC_VERSION=0 CONFIG_GCC_VERSION=0
CONFIG_CC_IS_CLANG=y CONFIG_CC_IS_CLANG=y
CONFIG_CLANG_VERSION=210108 CONFIG_CLANG_VERSION=220101
CONFIG_AS_IS_LLVM=y CONFIG_AS_IS_LLVM=y
CONFIG_AS_VERSION=210108 CONFIG_AS_VERSION=220101
CONFIG_LD_VERSION=0 CONFIG_LD_VERSION=0
CONFIG_LD_IS_LLD=y CONFIG_LD_IS_LLD=y
CONFIG_LLD_VERSION=210108 CONFIG_LLD_VERSION=220101
CONFIG_RUSTC_VERSION=0 CONFIG_RUSTC_VERSION=0
CONFIG_RUSTC_LLVM_VERSION=0 CONFIG_RUSTC_LLVM_VERSION=0
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
@@ -2384,7 +2384,7 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
# #
# Firmware loader # Firmware loader
# #
CONFIG_FW_LOADER=m CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_DEBUG=y CONFIG_FW_LOADER_DEBUG=y
CONFIG_FW_LOADER_PAGED_BUF=y CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_FW_LOADER_SYSFS=y CONFIG_FW_LOADER_SYSFS=y
@@ -2849,8 +2849,8 @@ CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8 CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set # CONFIG_CDROM_PKTCDVD_WCACHE is not set
CONFIG_ATA_OVER_ETH=m CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=m CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=m # CONFIG_XEN_BLKDEV_BACKEND is not set
CONFIG_VIRTIO_BLK=m CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m CONFIG_BLK_DEV_RBD=m
CONFIG_BLK_DEV_UBLK=m CONFIG_BLK_DEV_UBLK=m
@@ -2862,13 +2862,12 @@ CONFIG_BLK_DEV_RNBD_SERVER=m
# #
# NVME Support # NVME Support
# #
CONFIG_NVME_KEYRING=m CONFIG_NVME_KEYRING=y
CONFIG_NVME_AUTH=m CONFIG_NVME_AUTH=y
CONFIG_NVME_CORE=m CONFIG_NVME_CORE=y
CONFIG_BLK_DEV_NVME=m CONFIG_BLK_DEV_NVME=y
CONFIG_NVME_MULTIPATH=y CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_VERBOSE_ERRORS is not set # CONFIG_NVME_VERBOSE_ERRORS is not set
CONFIG_NVME_HWMON=y
CONFIG_NVME_FABRICS=m CONFIG_NVME_FABRICS=m
CONFIG_NVME_RDMA=m CONFIG_NVME_RDMA=m
CONFIG_NVME_FC=m CONFIG_NVME_FC=m
@@ -2977,10 +2976,10 @@ CONFIG_KEBA_CP500=m
# #
# SCSI device support # SCSI device support
# #
CONFIG_SCSI_MOD=m CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m CONFIG_RAID_ATTRS=m
CONFIG_SCSI_COMMON=m CONFIG_SCSI_COMMON=y
CONFIG_SCSI=m CONFIG_SCSI=y
CONFIG_SCSI_DMA=y CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y CONFIG_SCSI_PROC_FS=y
@@ -2988,7 +2987,7 @@ CONFIG_SCSI_PROC_FS=y
# #
# SCSI support type (disk, tape, CD-ROM) # SCSI support type (disk, tape, CD-ROM)
# #
CONFIG_BLK_DEV_SD=m CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m CONFIG_CHR_DEV_SG=m
@@ -3108,7 +3107,7 @@ CONFIG_SCSI_DEBUG=m
CONFIG_SCSI_PMCRAID=m CONFIG_SCSI_PMCRAID=m
CONFIG_SCSI_PM8001=m CONFIG_SCSI_PM8001=m
CONFIG_SCSI_BFA_FC=m CONFIG_SCSI_BFA_FC=m
CONFIG_SCSI_VIRTIO=m CONFIG_SCSI_VIRTIO=y
CONFIG_SCSI_CHELSIO_FCOE=m CONFIG_SCSI_CHELSIO_FCOE=m
CONFIG_SCSI_LOWLEVEL_PCMCIA=y CONFIG_SCSI_LOWLEVEL_PCMCIA=y
CONFIG_PCMCIA_AHA152X=m CONFIG_PCMCIA_AHA152X=m
@@ -3118,7 +3117,7 @@ CONFIG_PCMCIA_SYM53C500=m
# CONFIG_SCSI_DH is not set # CONFIG_SCSI_DH is not set
# end of SCSI device support # end of SCSI device support
CONFIG_ATA=m CONFIG_ATA=y
CONFIG_SATA_HOST=y CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y CONFIG_ATA_VERBOSE_ERROR=y
@@ -3130,23 +3129,23 @@ CONFIG_SATA_PMP=y
# #
# Controllers with non-SFF native interface # Controllers with non-SFF native interface
# #
CONFIG_SATA_AHCI=m CONFIG_SATA_AHCI=y
CONFIG_SATA_MOBILE_LPM_POLICY=3 CONFIG_SATA_MOBILE_LPM_POLICY=3
CONFIG_SATA_AHCI_PLATFORM=m CONFIG_SATA_AHCI_PLATFORM=y
CONFIG_AHCI_BRCM=m CONFIG_AHCI_BRCM=y
CONFIG_AHCI_DWC=m CONFIG_AHCI_DWC=y
CONFIG_AHCI_IMX=m CONFIG_AHCI_IMX=m
CONFIG_AHCI_CEVA=m CONFIG_AHCI_CEVA=y
CONFIG_AHCI_MTK=m CONFIG_AHCI_MTK=y
CONFIG_AHCI_MVEBU=m CONFIG_AHCI_MVEBU=y
CONFIG_AHCI_SUNXI=m CONFIG_AHCI_SUNXI=y
CONFIG_AHCI_TEGRA=m CONFIG_AHCI_TEGRA=y
CONFIG_AHCI_XGENE=m CONFIG_AHCI_XGENE=m
CONFIG_AHCI_QORIQ=m CONFIG_AHCI_QORIQ=y
CONFIG_SATA_AHCI_SEATTLE=m CONFIG_SATA_AHCI_SEATTLE=y
CONFIG_SATA_INIC162X=m CONFIG_SATA_INIC162X=m
CONFIG_SATA_ACARD_AHCI=m CONFIG_SATA_ACARD_AHCI=y
CONFIG_SATA_SIL24=m CONFIG_SATA_SIL24=y
CONFIG_ATA_SFF=y CONFIG_ATA_SFF=y
# #
@@ -3160,19 +3159,19 @@ CONFIG_ATA_BMDMA=y
# #
# SATA SFF controllers with BMDMA # SATA SFF controllers with BMDMA
# #
CONFIG_ATA_PIIX=m CONFIG_ATA_PIIX=y
CONFIG_SATA_DWC=m CONFIG_SATA_DWC=y
# CONFIG_SATA_DWC_OLD_DMA is not set # CONFIG_SATA_DWC_OLD_DMA is not set
CONFIG_SATA_MV=m CONFIG_SATA_MV=y
CONFIG_SATA_NV=m CONFIG_SATA_NV=y
CONFIG_SATA_PROMISE=m CONFIG_SATA_PROMISE=y
CONFIG_SATA_RCAR=m CONFIG_SATA_RCAR=y
CONFIG_SATA_SIL=m CONFIG_SATA_SIL=y
CONFIG_SATA_SIS=m CONFIG_SATA_SIS=y
CONFIG_SATA_SVW=m CONFIG_SATA_SVW=y
CONFIG_SATA_ULI=m CONFIG_SATA_ULI=y
CONFIG_SATA_VIA=m CONFIG_SATA_VIA=y
CONFIG_SATA_VITESSE=m CONFIG_SATA_VITESSE=y
# #
# PATA SFF controllers with BMDMA # PATA SFF controllers with BMDMA
@@ -3207,7 +3206,7 @@ CONFIG_PATA_RDC=m
CONFIG_PATA_SCH=m CONFIG_PATA_SCH=m
CONFIG_PATA_SERVERWORKS=m CONFIG_PATA_SERVERWORKS=m
CONFIG_PATA_SIL680=m CONFIG_PATA_SIL680=m
CONFIG_PATA_SIS=m CONFIG_PATA_SIS=y
CONFIG_PATA_TOSHIBA=m CONFIG_PATA_TOSHIBA=m
CONFIG_PATA_TRIFLEX=m CONFIG_PATA_TRIFLEX=m
CONFIG_PATA_VIA=m CONFIG_PATA_VIA=m
@@ -3249,8 +3248,8 @@ CONFIG_PATA_PARPORT_ON26=m
# #
# Generic fallback / legacy drivers # Generic fallback / legacy drivers
# #
CONFIG_PATA_ACPI=m CONFIG_PATA_ACPI=y
CONFIG_ATA_GENERIC=m CONFIG_ATA_GENERIC=y
CONFIG_PATA_LEGACY=m CONFIG_PATA_LEGACY=m
CONFIG_MD=y CONFIG_MD=y
CONFIG_BLK_DEV_MD=m CONFIG_BLK_DEV_MD=m
@@ -4984,7 +4983,7 @@ CONFIG_SERIAL_TEGRA_TCU=m
CONFIG_SERIAL_MAX3100=m CONFIG_SERIAL_MAX3100=m
CONFIG_SERIAL_MAX310X=m CONFIG_SERIAL_MAX310X=m
CONFIG_SERIAL_IMX=m CONFIG_SERIAL_IMX=m
CONFIG_SERIAL_IMX_CONSOLE=m # CONFIG_SERIAL_IMX_CONSOLE is not set
CONFIG_SERIAL_IMX_EARLYCON=y CONFIG_SERIAL_IMX_EARLYCON=y
CONFIG_SERIAL_UARTLITE=m CONFIG_SERIAL_UARTLITE=m
CONFIG_SERIAL_UARTLITE_NR_UARTS=1 CONFIG_SERIAL_UARTLITE_NR_UARTS=1
@@ -5772,6 +5771,7 @@ CONFIG_GPIO_MADERA=m
CONFIG_GPIO_MAX77650=m CONFIG_GPIO_MAX77650=m
CONFIG_GPIO_PMIC_EIC_SPRD=m CONFIG_GPIO_PMIC_EIC_SPRD=m
CONFIG_GPIO_SL28CPLD=m CONFIG_GPIO_SL28CPLD=m
CONFIG_GPIO_TN48M_CPLD=m
CONFIG_GPIO_TPS65086=m CONFIG_GPIO_TPS65086=m
CONFIG_GPIO_TPS65218=m CONFIG_GPIO_TPS65218=m
CONFIG_GPIO_TPS65219=m CONFIG_GPIO_TPS65219=m
@@ -6471,6 +6471,7 @@ CONFIG_MFD_MAX5970=m
# CONFIG_MFD_CS47L85 is not set # CONFIG_MFD_CS47L85 is not set
# CONFIG_MFD_CS47L90 is not set # CONFIG_MFD_CS47L90 is not set
# CONFIG_MFD_CS47L92 is not set # CONFIG_MFD_CS47L92 is not set
CONFIG_MFD_TN48M_CPLD=m
# CONFIG_MFD_DA9052_SPI is not set # CONFIG_MFD_DA9052_SPI is not set
CONFIG_MFD_DA9062=m CONFIG_MFD_DA9062=m
CONFIG_MFD_DA9063=m CONFIG_MFD_DA9063=m
@@ -10434,11 +10435,11 @@ CONFIG_VMGENID=m
CONFIG_NITRO_ENCLAVES=m CONFIG_NITRO_ENCLAVES=m
CONFIG_ARM_PKVM_GUEST=y CONFIG_ARM_PKVM_GUEST=y
CONFIG_VIRTIO_ANCHOR=y CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=m CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=m CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=m CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=m CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_VDPA=m CONFIG_VIRTIO_VDPA=m
CONFIG_VIRTIO_PMEM=m CONFIG_VIRTIO_PMEM=m
@@ -12532,6 +12533,7 @@ CONFIG_RESET_SUNXI=y
CONFIG_RESET_TI_SCI=m CONFIG_RESET_TI_SCI=m
CONFIG_RESET_TI_SYSCON=m CONFIG_RESET_TI_SYSCON=m
CONFIG_RESET_TI_TPS380X=m CONFIG_RESET_TI_TPS380X=m
CONFIG_RESET_TN48M_CPLD=m
CONFIG_RESET_UNIPHIER=m CONFIG_RESET_UNIPHIER=m
CONFIG_RESET_UNIPHIER_GLUE=m CONFIG_RESET_UNIPHIER_GLUE=m
CONFIG_RESET_ZYNQMP=y CONFIG_RESET_ZYNQMP=y
@@ -14022,7 +14024,6 @@ CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_DEBUG_IRQFLAGS is not set # CONFIG_DEBUG_IRQFLAGS is not set
CONFIG_STACKTRACE=y CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set # CONFIG_DEBUG_KOBJECT is not set
# #
@@ -14057,7 +14058,7 @@ CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y CONFIG_HAVE_FUNCTION_GRAPH_FREGS=y
CONFIG_HAVE_DYNAMIC_FTRACE=y CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y


@@ -14,6 +14,7 @@ func (t Toolchain) newKmod() (pkg.Artifact, string) {
pkg.TarGzip, pkg.TarGzip,
), nil, &MesonHelper{ ), nil, &MesonHelper{
Setup: [][2]string{ Setup: [][2]string{
{"Dmoduledir", "/system/lib/modules"},
{"Dsysconfdir", "/system/etc"}, {"Dsysconfdir", "/system/etc"},
{"Dbashcompletiondir", "no"}, {"Dbashcompletiondir", "no"},
{"Dfishcompletiondir", "no"}, {"Dfishcompletiondir", "no"},
@@ -37,5 +38,13 @@ func init() {
Name: "kmod", Name: "kmod",
Description: "a set of tools to handle common tasks with Linux kernel modules", Description: "a set of tools to handle common tasks with Linux kernel modules",
Website: "https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git", Website: "https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git",
Dependencies: P{
Zlib,
Zstd,
OpenSSL,
},
ID: 1517,
} }
} }


@@ -48,5 +48,7 @@ func init() {
Name: "libcap", Name: "libcap",
Description: "a library for getting and setting POSIX.1e draft 15 capabilities", Description: "a library for getting and setting POSIX.1e draft 15 capabilities",
Website: "https://sites.google.com/site/fullycapable/", Website: "https://sites.google.com/site/fullycapable/",
ID: 1569,
} }
} }


@@ -8,8 +8,8 @@ import (
func (t Toolchain) newLibexpat() (pkg.Artifact, string) { func (t Toolchain) newLibexpat() (pkg.Artifact, string) {
const ( const (
version = "2.7.3" version = "2.7.4"
checksum = "GmkoD23nRi9cMT0cgG1XRMrZWD82UcOMzkkvP1gkwSFWCBgeSXMuoLpa8-v8kxW-" checksum = "W6NI2FESBjrTqRPcvs15fK5c3nwF6f9RT8U-XHKQKblXVzJB3nt_ez5B5jO0ZVDG"
) )
return t.NewPackage("libexpat", version, pkg.NewHTTPGetTar( return t.NewPackage("libexpat", version, pkg.NewHTTPGetTar(
nil, "https://github.com/libexpat/libexpat/releases/download/"+ nil, "https://github.com/libexpat/libexpat/releases/download/"+
@@ -28,5 +28,7 @@ func init() {
Name: "libexpat", Name: "libexpat",
Description: "a stream-oriented XML parser library", Description: "a stream-oriented XML parser library",
Website: "https://libexpat.github.io/", Website: "https://libexpat.github.io/",
ID: 770,
} }
} }


@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
func (t Toolchain) newLibffi() (pkg.Artifact, string) { func (t Toolchain) newLibffi() (pkg.Artifact, string) {
const ( const (
version = "3.4.5" version = "3.5.2"
checksum = "apIJzypF4rDudeRoI_n3K7N-zCeBLTbQlHRn9NSAZqdLAWA80mR0gXPTpHsL7oMl" checksum = "2_Q-ZNBBbVhltfL5zEr0wljxPegUimTK4VeMSiwJEGksls3n4gj3lV0Ly3vviSFH"
) )
return t.NewPackage("libffi", version, pkg.NewHTTPGetTar( return t.NewPackage("libffi", version, pkg.NewHTTPGetTar(
nil, "https://github.com/libffi/libffi/releases/download/"+ nil, "https://github.com/libffi/libffi/releases/download/"+
@@ -23,5 +23,7 @@ func init() {
Name: "libffi", Name: "libffi",
Description: "a portable, high level programming interface to various calling conventions", Description: "a portable, high level programming interface to various calling conventions",
Website: "https://sourceware.org/libffi/", Website: "https://sourceware.org/libffi/",
ID: 1611,
} }
} }


@@ -30,5 +30,11 @@ func init() {
Name: "libgd", Name: "libgd",
Description: "an open source code library for the dynamic creation of images", Description: "an open source code library for the dynamic creation of images",
Website: "https://libgd.github.io/", Website: "https://libgd.github.io/",
Dependencies: P{
Zlib,
},
ID: 880,
} }
} }


@@ -30,5 +30,7 @@ func init() {
Name: "libpsl", Name: "libpsl",
Description: "provides functions to work with the Mozilla Public Suffix List", Description: "provides functions to work with the Mozilla Public Suffix List",
Website: "https://rockdaboot.github.io/libpsl/", Website: "https://rockdaboot.github.io/libpsl/",
ID: 7305,
} }
} }


@@ -31,5 +31,7 @@ func init() {
Name: "libseccomp", Name: "libseccomp",
Description: "an interface to the Linux Kernel's syscall filtering mechanism", Description: "an interface to the Linux Kernel's syscall filtering mechanism",
Website: "https://github.com/seccomp/libseccomp/", Website: "https://github.com/seccomp/libseccomp/",
ID: 13823,
} }
} }


@@ -34,5 +34,7 @@ func init() {
Name: "libucontext", Name: "libucontext",
Description: "ucontext implementation featuring glibc-compatible ABI", Description: "ucontext implementation featuring glibc-compatible ABI",
Website: "https://github.com/kaniini/libucontext/", Website: "https://github.com/kaniini/libucontext/",
ID: 17085,
} }
} }


@@ -8,8 +8,8 @@ import (
func (t Toolchain) newLibxml2() (pkg.Artifact, string) { func (t Toolchain) newLibxml2() (pkg.Artifact, string) {
const ( const (
version = "2.15.1" version = "2.15.2"
checksum = "pYzAR3cNrEHezhEMirgiq7jbboLzwMj5GD7SQp0jhSIMdgoU4G9oU9Gxun3zzUIU" checksum = "xba8VCofMsbWmQypA2__M9_RXNq9HDEuccjib6-tOni6OPngplRoAsYdY3NdYf8o"
) )
return t.NewPackage("libxml2", version, pkg.NewHTTPGet( return t.NewPackage("libxml2", version, pkg.NewHTTPGet(
nil, "https://download.gnome.org/sources/libxml2/"+ nil, "https://download.gnome.org/sources/libxml2/"+
@@ -30,5 +30,7 @@ func init() {
Name: "libxml2", Name: "libxml2",
Description: "an XML toolkit implemented in C", Description: "an XML toolkit implemented in C",
Website: "https://gitlab.gnome.org/GNOME/libxml2/", Website: "https://gitlab.gnome.org/GNOME/libxml2/",
ID: 1783,
} }
} }


@@ -36,5 +36,11 @@ func init() {
Name: "libxslt", Name: "libxslt",
Description: "an XSLT processor based on libxml2", Description: "an XSLT processor based on libxml2",
Website: "https://gitlab.gnome.org/GNOME/libxslt/", Website: "https://gitlab.gnome.org/GNOME/libxslt/",
Dependencies: P{
Libxml2,
},
ID: 13301,
} }
} }


@@ -75,10 +75,7 @@ func llvmFlagName(flag int) string {
// newLLVMVariant returns a [pkg.Artifact] containing a LLVM variant. // newLLVMVariant returns a [pkg.Artifact] containing a LLVM variant.
func (t Toolchain) newLLVMVariant(variant string, attr *llvmAttr) pkg.Artifact { func (t Toolchain) newLLVMVariant(variant string, attr *llvmAttr) pkg.Artifact {
const (
version = "21.1.8"
checksum = "8SUpqDkcgwOPsqHVtmf9kXfFeVmjVxl4LMn-qSE1AI_Xoeju-9HaoPNGtidyxyka"
)
if attr == nil { if attr == nil {
panic("LLVM attr must be non-nil") panic("LLVM attr must be non-nil")
} }
@@ -122,6 +119,8 @@ func (t Toolchain) newLLVMVariant(variant string, attr *llvmAttr) pkg.Artifact {
[2]string{"LLVM_INSTALL_BINUTILS_SYMLINKS", "ON"}, [2]string{"LLVM_INSTALL_BINUTILS_SYMLINKS", "ON"},
[2]string{"LLVM_INSTALL_CCTOOLS_SYMLINKS", "ON"}, [2]string{"LLVM_INSTALL_CCTOOLS_SYMLINKS", "ON"},
[2]string{"LLVM_LIT_ARGS", "'--verbose'"},
) )
} }
@@ -161,10 +160,10 @@ ln -s ld.lld /work/system/bin/ld
) )
} }
return t.NewPackage("llvm", version, pkg.NewHTTPGetTar( return t.NewPackage("llvm", llvmVersion, pkg.NewHTTPGetTar(
nil, "https://github.com/llvm/llvm-project/archive/refs/tags/"+ nil, "https://github.com/llvm/llvm-project/archive/refs/tags/"+
"llvmorg-"+version+".tar.gz", "llvmorg-"+llvmVersion+".tar.gz",
mustDecode(checksum), mustDecode(llvmChecksum),
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
Patches: attr.patches, Patches: attr.patches,
@@ -184,7 +183,6 @@ ln -s ld.lld /work/system/bin/ld
Append: cmakeAppend, Append: cmakeAppend,
Script: script + attr.script, Script: script + attr.script,
}, },
Libffi,
Python, Python,
Perl, Perl,
Diffutils, Diffutils,
@@ -245,10 +243,10 @@ func (t Toolchain) newLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
muslHeaders, muslHeaders,
}, },
script: ` script: `
mkdir -p "/work/system/lib/clang/21/lib/" mkdir -p "/work/system/lib/clang/` + llvmVersionMajor + `/lib/"
ln -s \ ln -s \
"../../../${ROSA_TRIPLE}" \ "../../../${ROSA_TRIPLE}" \
"/work/system/lib/clang/21/lib/" "/work/system/lib/clang/` + llvmVersionMajor + `/lib/"
ln -s \ ln -s \
"clang_rt.crtbegin-` + linuxArch() + `.o" \ "clang_rt.crtbegin-` + linuxArch() + `.o" \
@@ -261,7 +259,7 @@ ln -s \
musl, _ = t.newMusl(false, stage0ExclConcat(t, []string{ musl, _ = t.newMusl(false, stage0ExclConcat(t, []string{
"CC=clang", "CC=clang",
"LIBCC=/system/lib/clang/21/lib/" + "LIBCC=/system/lib/clang/" + llvmVersionMajor + "/lib/" +
triplet() + "/libclang_rt.builtins.a", triplet() + "/libclang_rt.builtins.a",
"AR=ar", "AR=ar",
"RANLIB=ranlib", "RANLIB=ranlib",
@@ -312,12 +310,12 @@ ln -s clang++ /work/system/bin/c++
ninja check-all ninja check-all
`, `,
patches: [][2]string{ patches: slices.Concat([][2]string{
{"add-rosa-vendor", `diff --git a/llvm/include/llvm/TargetParser/Triple.h b/llvm/include/llvm/TargetParser/Triple.h {"add-rosa-vendor", `diff --git a/llvm/include/llvm/TargetParser/Triple.h b/llvm/include/llvm/TargetParser/Triple.h
index 657f4230379e..12c305756184 100644 index 9c83abeeb3b1..5acfe5836a23 100644
--- a/llvm/include/llvm/TargetParser/Triple.h --- a/llvm/include/llvm/TargetParser/Triple.h
+++ b/llvm/include/llvm/TargetParser/Triple.h +++ b/llvm/include/llvm/TargetParser/Triple.h
@@ -185,6 +185,7 @@ public: @@ -190,6 +190,7 @@ public:
Apple, Apple,
PC, PC,
@@ -326,25 +324,25 @@ index 657f4230379e..12c305756184 100644
Freescale, Freescale,
IBM, IBM,
diff --git a/llvm/lib/TargetParser/Triple.cpp b/llvm/lib/TargetParser/Triple.cpp diff --git a/llvm/lib/TargetParser/Triple.cpp b/llvm/lib/TargetParser/Triple.cpp
index 0584c941d2e6..e4d6ef963cc7 100644 index a4f9dd42c0fe..cb5a12387034 100644
--- a/llvm/lib/TargetParser/Triple.cpp --- a/llvm/lib/TargetParser/Triple.cpp
+++ b/llvm/lib/TargetParser/Triple.cpp +++ b/llvm/lib/TargetParser/Triple.cpp
@@ -269,6 +269,7 @@ StringRef Triple::getVendorTypeName(VendorType Kind) { @@ -279,6 +279,7 @@ StringRef Triple::getVendorTypeName(VendorType Kind) {
case NVIDIA: return "nvidia"; case NVIDIA: return "nvidia";
case OpenEmbedded: return "oe"; case OpenEmbedded: return "oe";
case PC: return "pc"; case PC: return "pc";
+ case Rosa: return "rosa"; + case Rosa: return "rosa";
case SCEI: return "scei"; case SCEI: return "scei";
case SUSE: return "suse"; case SUSE: return "suse";
} case Meta:
@@ -669,6 +670,7 @@ static Triple::VendorType parseVendor(StringRef VendorName) { @@ -689,6 +690,7 @@ static Triple::VendorType parseVendor(StringRef VendorName) {
.Case("suse", Triple::SUSE) return StringSwitch<Triple::VendorType>(VendorName)
.Case("oe", Triple::OpenEmbedded) .Case("apple", Triple::Apple)
.Case("intel", Triple::Intel) .Case("pc", Triple::PC)
+ .Case("rosa", Triple::Rosa) + .Case("rosa", Triple::Rosa)
.Default(Triple::UnknownVendor); .Case("scei", Triple::SCEI)
} .Case("sie", Triple::SCEI)
.Case("fsl", Triple::Freescale)
`}, `},
{"xfail-broken-tests", `diff --git a/clang/test/Modules/timestamps.c b/clang/test/Modules/timestamps.c {"xfail-broken-tests", `diff --git a/clang/test/Modules/timestamps.c b/clang/test/Modules/timestamps.c
@@ -484,11 +482,47 @@ index 64324a3f8b01..15ce70b68217 100644
"/System/Library/Frameworks"}; "/System/Library/Frameworks"};
`}, `},
}, }, clangPatches),
}) })
return return
} }
func init() {
artifactsM[LLVMCompilerRT] = Metadata{
f: func(t Toolchain) (pkg.Artifact, string) {
_, compilerRT, _, _ := t.newLLVM()
return compilerRT, llvmVersion
},
Name: "llvm-compiler-rt",
Description: "LLVM runtime: compiler-rt",
Website: "https://llvm.org/",
}
artifactsM[LLVMRuntimes] = Metadata{
f: func(t Toolchain) (pkg.Artifact, string) {
_, _, runtimes, _ := t.newLLVM()
return runtimes, llvmVersion
},
Name: "llvm-runtimes",
Description: "LLVM runtimes: libunwind, libcxx, libcxxabi",
Website: "https://llvm.org/",
}
artifactsM[LLVMClang] = Metadata{
f: func(t Toolchain) (pkg.Artifact, string) {
_, _, _, clang := t.newLLVM()
return clang, llvmVersion
},
Name: "clang",
Description: `an "LLVM native" C/C++/Objective-C compiler`,
Website: "https://llvm.org/",
ID: 1830,
}
}
var ( var (
// llvm stores the result of Toolchain.newLLVM. // llvm stores the result of Toolchain.newLLVM.


@@ -0,0 +1,4 @@
package rosa
// clangPatches are patches applied to the LLVM source tree for building clang.
var clangPatches [][2]string


@@ -0,0 +1,12 @@
package rosa
// clangPatches are patches applied to the LLVM source tree for building clang.
var clangPatches [][2]string
// one version behind, latest fails 5 tests with 2 flaky on arm64
const (
llvmVersionMajor = "21"
llvmVersion = llvmVersionMajor + ".1.8"
llvmChecksum = "8SUpqDkcgwOPsqHVtmf9kXfFeVmjVxl4LMn-qSE1AI_Xoeju-9HaoPNGtidyxyka"
)


@@ -0,0 +1,11 @@
//go:build !arm64
package rosa
// latest version of LLVM, conditional to temporarily avoid broken new releases
const (
llvmVersionMajor = "22"
llvmVersion = llvmVersionMajor + ".1.1"
llvmChecksum = "bQvV6D8AZvQykg7-uQb_saTbVavnSo1ykNJ3g57F5iE-evU3HuOYtcRnVIXTK76e"
)


@@ -33,6 +33,8 @@ func init() {
Name: "make",
Description: "a tool which controls the generation of executables and other non-source files",
Website: "https://www.gnu.org/software/make/",
+ID: 1877,
}
}


@@ -13,6 +13,7 @@ func (t Toolchain) newMeson() (pkg.Artifact, string) {
checksum = "w895BXF_icncnXatT_OLCFe2PYEtg4KrKooMgUYdN-nQVvbFX3PvYWHGEpogsHtd"
)
return t.New("meson-"+version, 0, []pkg.Artifact{
+t.Load(Zlib),
t.Load(Python),
t.Load(Setuptools),
}, nil, nil, `
@@ -36,6 +37,15 @@ func init() {
Name: "meson",
Description: "an open source build system",
Website: "https://mesonbuild.com/",
+Dependencies: P{
+Python,
+PkgConfig,
+CMake,
+Ninja,
+},
+ID: 6472,
}
}
@@ -63,14 +73,7 @@ func (*MesonHelper) name(name, version string) string {
// extra returns hardcoded meson runtime dependencies.
func (*MesonHelper) extra(int) []PArtifact {
-return []PArtifact{
-Python,
-Meson,
-Ninja,
-PkgConfig,
-CMake,
-}
+return []PArtifact{Meson}
}
// wantsChmod returns false.


@@ -40,5 +40,7 @@ func init() {
Name: "mksh",
Description: "MirBSD Korn Shell",
Website: "https://www.mirbsd.org/mksh",
+ID: 5590,
}
}


@@ -19,9 +19,6 @@ func (t Toolchain) newMuslFts() (pkg.Artifact, string) {
}, &MakeHelper{
Generate: "./bootstrap.sh",
},
-M4,
-Perl,
-Autoconf,
Automake,
Libtool,
PkgConfig,
@@ -34,5 +31,7 @@ func init() {
Name: "musl-fts",
Description: "implementation of fts(3) functions which are missing in musl libc",
Website: "https://github.com/void-linux/musl-fts",
+ID: 26980,
}
}


@@ -19,9 +19,6 @@ func (t Toolchain) newMuslObstack() (pkg.Artifact, string) {
}, &MakeHelper{
Generate: "./bootstrap.sh",
},
-M4,
-Perl,
-Autoconf,
Automake,
Libtool,
PkgConfig,
@@ -34,5 +31,7 @@ func init() {
Name: "musl-obstack",
Description: "obstack functions and macros separated from glibc",
Website: "https://github.com/void-linux/musl-obstack",
+ID: 146206,
}
}


@@ -61,5 +61,7 @@ func init() {
Name: "musl",
Description: "an implementation of the C standard library",
Website: "https://musl.libc.org/",
+ID: 11688,
}
}


@@ -30,5 +30,7 @@ func init() {
Name: "ncurses",
Description: "a free software emulation of curses in System V Release 4.0 (SVr4)",
Website: "https://invisible-island.net/ncurses/",
+ID: 373226,
}
}

internal/rosa/nettle.go Normal file

@@ -0,0 +1,35 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newNettle() (pkg.Artifact, string) {
const (
version = "4.0"
checksum = "6agC-vHzzoqAlaX3K9tX8yHgrm03HLqPZzVzq8jh_ePbuPMIvpxereu_uRJFmQK7"
)
return t.NewPackage("nettle", version, pkg.NewHTTPGetTar(
nil, "https://ftpmirror.gnu.org/gnu/nettle/nettle-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
), nil, (*MakeHelper)(nil),
M4,
Diffutils,
GMP,
), version
}
func init() {
artifactsM[Nettle] = Metadata{
f: Toolchain.newNettle,
Name: "nettle",
Description: "a low-level cryptographic library",
Website: "https://www.lysator.liu.se/~nisse/nettle/",
Dependencies: P{
GMP,
},
ID: 2073,
}
}


@@ -42,5 +42,7 @@ func init() {
Name: "ninja",
Description: "a small build system with a focus on speed",
Website: "https://ninja-build.org/",
+ID: 2089,
}
}


@@ -1,20 +1,22 @@
package rosa
import (
+"strings"
"hakurei.app/internal/pkg"
)
func (t Toolchain) newNSS() (pkg.Artifact, string) {
const (
-version = "3_120"
+version = "3.121"
-checksum = "9M0SNMrj9BJp6RH2rQnMm6bZWtP0Kgj64D5JNPHF7Cxr2_8kfy3msubIcvEPwC35"
+checksum = "MTS4Eg-1vBN3T7gdUAdNO0y_e9x9BE3f_k_DHdM_BIovc7y57vhsZTfB5f6BeQfi"
version0 = "4_38_2"
checksum0 = "25x2uJeQnOHIiq_zj17b4sYqKgeoU8-IsySUptoPcdHZ52PohFZfGuIisBreWzx0"
)
return t.NewPackage("nss", version, pkg.NewHTTPGetTar(
nil, "https://github.com/nss-dev/nss/archive/refs/tags/"+
-"NSS_"+version+"_RTM.tar.gz",
+"NSS_"+strings.Join(strings.SplitN(version, ".", 2), "_")+"_RTM.tar.gz",
mustDecode(checksum),
pkg.TarGzip,
), &PackageAttr{
@@ -72,6 +74,12 @@ func init() {
Name: "nss",
Description: "Network Security Services",
Website: "https://firefox-source-docs.mozilla.org/security/nss/index.html",
+Dependencies: P{
+Zlib,
+},
+ID: 2503,
}
}
@@ -80,7 +88,7 @@ func init() {
artifactsM[buildcatrust] = newViaPip(
"buildcatrust",
"transform certificate stores between formats",
-version, "none", "any",
+version, "py3", "none", "any",
"k_FGzkRCLjbTWBkuBLzQJ1S8FPAz19neJZlMHm0t10F2Y0hElmvVwdSBRc03Rjo1",
"https://github.com/nix-community/buildcatrust/"+
"releases/download/v"+version+"/",
@@ -88,13 +96,12 @@ func init() {
}
func (t Toolchain) newNSSCACert() (pkg.Artifact, string) {
-return t.New("nss-cacert", 0, []pkg.Artifact{
-t.Load(Bash),
-t.Load(Python),
-t.Load(NSS),
-t.Load(buildcatrust),
-}, nil, nil, `
+return t.New("nss-cacert", 0, t.AppendPresets(nil,
+Bash,
+NSS,
+buildcatrust,
+), nil, nil, `
mkdir -p /work/system/etc/ssl/{certs/unbundled,certs/hashed,trust-source}
buildcatrust \
--certdata_input /system/nss/certdata.txt \


@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
func (t Toolchain) newOpenSSL() (pkg.Artifact, string) {
const (
-version = "3.5.5"
+version = "3.6.1"
-checksum = "I2Hp1LxcTR8j4G6LFEQMVy6EJH-Na1byI9Ti-ThBot6EMLNRnjGXGq-WXrim3Fkz"
+checksum = "boMAj2SIVIFXHswZva3qHJuFEpc32rxCCu07wjMPsVe9nn_976BGMmW_5P1zthgg"
)
return t.NewPackage("openssl", version, pkg.NewHTTPGetTar(
nil, "https://github.com/openssl/openssl/releases/download/"+
@@ -44,5 +44,10 @@ func init() {
Name: "openssl",
Description: "TLS/SSL and crypto library",
Website: "https://www.openssl.org/",
+ID: 2566,
+// strange malformed tags treated as pre-releases in Anitya
+latest: (*Versions).getStable,
}
}


@@ -6,8 +6,8 @@ import (
func (t Toolchain) newPCRE2() (pkg.Artifact, string) {
const (
-version = "10.43"
+version = "10.47"
-checksum = "iyNw-POPSJwiZVJfUK5qACA6q2uMzP-84WieimN_CskaEkuw5fRnRTZhEv6ry2Yo"
+checksum = "IbC24vVayju6nB9EhrBPSDexk22wDecdpyrjgC3nCZXkwTnUjq4CD2q5sopqu6CW"
)
return t.NewPackage("pcre2", version, pkg.NewHTTPGetTar(
nil, "https://github.com/PCRE2Project/pcre2/releases/download/"+
@@ -37,5 +37,7 @@ func init() {
Name: "pcre2",
Description: "a set of C functions that implement regular expression pattern matching",
Website: "https://pcre2project.github.io/pcre2/",
+ID: 5832,
}
}


@@ -8,8 +8,8 @@ import (
func (t Toolchain) newPerl() (pkg.Artifact, string) {
const (
-version = "5.42.0"
+version = "5.42.1"
-checksum = "2KR7Jbpk-ZVn1a30LQRwbgUvg2AXlPQZfzrqCr31qD5-yEsTwVQ_W76eZH-EdxM9"
+checksum = "FsJVq5CZFA7nZklfUl1eC6z2ECEu02XaB1pqfHSKtRLZWpnaBjlB55QOhjKpjkQ2"
)
return t.NewPackage("perl", version, pkg.NewHTTPGetTar(
nil, "https://www.cpan.org/src/5.0/perl-"+version+".tar.gz",
@@ -39,12 +39,13 @@ rm -f /system/bin/ps # perl does not like toybox ps
{"Dldflags", `"${LDFLAGS:-''}"`},
{"Doptimize", "'-O2 -fno-strict-aliasing'"},
{"Duseithreads"},
+{"Duseshrplib"},
},
Check: []string{
"TEST_JOBS=256",
"test_harness",
},
-Install: "./perl -Ilib -I. installperl --destdir=/work",
+Install: `LD_LIBRARY_PATH="$PWD" ./perl -Ilib -I. installperl --destdir=/work`,
}), version
}
func init() {
@@ -54,6 +55,11 @@ func init() {
Name: "perl",
Description: "The Perl Programming language",
Website: "https://www.perl.org/",
+ID: 13599,
+// odd-even versioning
+latest: (*Versions).getStable,
}
}
@@ -62,14 +68,14 @@ func (t Toolchain) newViaPerlModuleBuild(
name, version string,
source pkg.Artifact,
patches [][2]string,
-extra ...pkg.Artifact,
+extra ...PArtifact,
) pkg.Artifact {
if name == "" || version == "" {
panic("names must be non-empty")
}
-return t.New("perl-"+name, 0, slices.Concat(extra, []pkg.Artifact{
-t.Load(Perl),
-}), nil, nil, `
+return t.New("perl-"+name, 0, t.AppendPresets(nil,
+slices.Concat(P{Perl}, extra)...,
+), nil, nil, `
cd /usr/src/`+name+`
perl Build.PL --prefix=/system
./Build build
@@ -99,6 +105,10 @@ func init() {
Name: "perl-Module::Build",
Description: "build and install Perl modules",
Website: "https://metacpan.org/release/Module-Build",
+Dependencies: P{
+Perl,
+},
}
}
@@ -261,6 +271,10 @@ func init() {
Name: "perl-Text::WrapI18N",
Description: "line wrapping module",
Website: "https://metacpan.org/release/Text-WrapI18N",
+Dependencies: P{
+PerlTextCharWidth,
+},
}
}
@@ -307,6 +321,10 @@ func init() {
Name: "perl-Unicode::GCString",
Description: "String as Sequence of UAX #29 Grapheme Clusters",
Website: "https://metacpan.org/release/Unicode-LineBreak",
+Dependencies: P{
+PerlMIMECharset,
+},
}
}


@@ -26,5 +26,7 @@ func init() {
Name: "pkg-config",
Description: "a helper tool used when compiling applications and libraries",
Website: "https://pkgconfig.freedesktop.org/",
+ID: 3649,
}
}


@@ -18,9 +18,6 @@ func (t Toolchain) newProcps() (pkg.Artifact, string) {
{"without-ncurses"},
},
},
-M4,
-Perl,
-Autoconf,
Automake,
Gettext,
Libtool,
Libtool, Libtool,
@@ -36,5 +33,7 @@ func init() {
Name: "procps",
Description: "command line and full screen utilities for browsing procfs",
Website: "https://gitlab.com/procps-ng/procps",
+ID: 3708,
}
}


@@ -9,8 +9,8 @@ import (
func (t Toolchain) newPython() (pkg.Artifact, string) {
const (
-version = "3.14.2"
+version = "3.14.3"
-checksum = "7nZunVMGj0viB-CnxpcRego2C90X5wFsMTgsoewd5z-KSZY2zLuqaBwG-14zmKys"
+checksum = "ajEC32WPmn9Jvll0n4gGvlTvhMPUHb2H_j5_h9jf_esHmkZBRfAumDcKY7nTTsCH"
)
return t.NewPackage("python", version, pkg.NewHTTPGetTar(
nil, "https://www.python.org/ftp/python/"+version+
@@ -53,11 +53,11 @@ func (t Toolchain) newPython() (pkg.Artifact, string) {
Check: []string{"test"},
},
Zlib,
-Bzip2,
Libffi,
-OpenSSL,
PkgConfig,
+OpenSSL,
+Bzip2,
XZ,
), version
}
@@ -68,25 +68,29 @@ func init() {
Name: "python",
Description: "the Python programming language interpreter",
Website: "https://www.python.org/",
+Dependencies: P{
+Zlib,
+Bzip2,
+Libffi,
+OpenSSL,
+},
+ID: 13254,
}
}
// newViaPip is a helper for installing python dependencies via pip.
func newViaPip(
-name, description, version, abi, platform, checksum, prefix string,
+name, description, version, interpreter, abi, platform, checksum, prefix string,
extra ...PArtifact,
) Metadata {
-wname := name + "-" + version + "-py3-" + abi + "-" + platform + ".whl"
+wname := name + "-" + version + "-" + interpreter + "-" + abi + "-" + platform + ".whl"
return Metadata{
f: func(t Toolchain) (pkg.Artifact, string) {
-extraRes := make([]pkg.Artifact, len(extra))
-for i, p := range extra {
-extraRes[i] = t.Load(p)
-}
-return t.New(name+"-"+version, 0, slices.Concat([]pkg.Artifact{
-t.Load(Python),
-}, extraRes), nil, nil, `
+return t.New(name+"-"+version, 0, t.AppendPresets(nil,
+slices.Concat(P{Python}, extra)...,
+), nil, nil, `
pip3 install \
--no-index \
--prefix=/system \
@@ -101,17 +105,19 @@ pip3 install \
Name: "python-" + name,
Description: description,
Website: "https://pypi.org/project/" + name + "/",
+Dependencies: slices.Concat(P{Python}, extra),
}
}
func (t Toolchain) newSetuptools() (pkg.Artifact, string) {
const (
-version = "80.10.1"
+version = "82.0.1"
-checksum = "p3rlwEmy1krcUH1KabprQz1TCYjJ8ZUjOQknQsWh3q-XEqLGEd3P4VrCc7ouHGXU"
+checksum = "nznP46Tj539yqswtOrIM4nQgwLA1h-ApKX7z7ghazROCpyF5swtQGwsZoI93wkhc"
)
-return t.New("setuptools-"+version, 0, []pkg.Artifact{
-t.Load(Python),
-}, nil, nil, `
+return t.New("setuptools-"+version, 0, t.AppendPresets(nil,
+Python,
+), nil, nil, `
pip3 install \
--no-index \
--prefix=/system \
@@ -128,58 +134,164 @@ func init() {
artifactsM[Setuptools] = Metadata{
f: Toolchain.newSetuptools,
-Name: "setuptools",
+Name: "python-setuptools",
Description: "the autotools of the Python ecosystem",
Website: "https://pypi.org/project/setuptools/",
+Dependencies: P{
+Python,
+},
+ID: 4021,
}
}
func init() {
-artifactsM[Pygments] = newViaPip(
+artifactsM[PythonPygments] = newViaPip(
"pygments",
" a syntax highlighting package written in Python",
-"2.19.2", "none", "any",
+"2.19.2", "py3", "none", "any",
"ak_lwTalmSr7W4Mjy2XBZPG9I6a0gwSy2pS87N8x4QEuZYif0ie9z0OcfRfi9msd",
"https://files.pythonhosted.org/packages/"+
"c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/",
)
-artifactsM[Pluggy] = newViaPip(
+artifactsM[PythonPluggy] = newViaPip(
"pluggy",
"the core framework used by the pytest, tox, and devpi projects",
-"1.6.0", "none", "any",
+"1.6.0", "py3", "none", "any",
"2HWYBaEwM66-y1hSUcWI1MyE7dVVuNNRW24XD6iJBey4YaUdAK8WeXdtFMQGC-4J",
"https://files.pythonhosted.org/packages/"+
"54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/",
)
-artifactsM[Packaging] = newViaPip(
+artifactsM[PythonPackaging] = newViaPip(
"packaging",
"reusable core utilities for various Python Packaging interoperability specifications",
-"26.0", "none", "any",
+"26.0", "py3", "none", "any",
"iVVXcqdwHDskPKoCFUlh2x8J0Gyq-bhO4ns9DvUJ7oJjeOegRYtSIvLV33Bki-pP",
"https://files.pythonhosted.org/packages/"+
"b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/",
)
-artifactsM[IniConfig] = newViaPip(
+artifactsM[PythonIniConfig] = newViaPip(
"iniconfig",
"a small and simple INI-file parser module",
-"2.3.0", "none", "any",
+"2.3.0", "py3", "none", "any",
"SDgs4S5bXi77aVOeKTPv2TUrS3M9rduiK4DpU0hCmDsSBWqnZcWInq9lsx6INxut",
"https://files.pythonhosted.org/packages/"+
"cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/",
)
-artifactsM[PyTest] = newViaPip(
+artifactsM[PythonPyTest] = newViaPip(
"pytest",
"the pytest framework",
-"9.0.2", "none", "any",
+"9.0.2", "py3", "none", "any",
"IM2wDbLke1EtZhF92zvAjUl_Hms1uKDtM7U8Dt4acOaChMnDg1pW7ib8U0wYGDLH",
"https://files.pythonhosted.org/packages/"+
"3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/",
-IniConfig,
-Packaging,
-Pluggy,
-Pygments,
+PythonIniConfig,
+PythonPackaging,
+PythonPluggy,
+PythonPygments,
)
artifactsM[PythonCfgv] = newViaPip(
"cfgv",
"validate configuration and produce human readable error messages",
"3.5.0", "py2.py3", "none", "any",
"yFKTyVRlmnLKAxvvge15kAd_GOP1Xh3fZ0NFImO5pBdD5e0zj3GRmA6Q1HdtLTYO",
"https://files.pythonhosted.org/packages/"+
"db/3c/33bac158f8ab7f89b2e59426d5fe2e4f63f7ed25df84c036890172b412b5/",
)
artifactsM[PythonIdentify] = newViaPip(
"identify",
"file identification library for Python",
"2.6.17", "py2.py3", "none", "any",
"9RxK3igO-Pxxof5AuCAGiF_L1SWi4SpuSF1fWNXCzE2D4oTRSob-9VpFMLlybrSv",
"https://files.pythonhosted.org/packages/"+
"40/66/71c1227dff78aaeb942fed29dd5651f2aec166cc7c9aeea3e8b26a539b7d/",
)
artifactsM[PythonNodeenv] = newViaPip(
"nodeenv",
"a tool to create isolated node.js environments",
"1.10.0", "py2.py3", "none", "any",
"ihUb4-WQXYIhYOOKSsXlKIzjzQieOYl6ojro9H-0DFzGheaRTtuyZgsCmriq58sq",
"https://files.pythonhosted.org/packages/"+
"88/b2/d0896bdcdc8d28a7fc5717c305f1a861c26e18c05047949fb371034d98bd/",
)
artifactsM[PythonPyYAML] = newViaPip(
"pyyaml",
"a complete YAML 1.1 parser",
"6.0.3", "cp314", "cp314", "musllinux_1_2_x86_64",
"4_jhCFpUNtyrFp2HOMqUisR005u90MHId53eS7rkUbcGXkoaJ7JRsY21dREHEfGN",
"https://files.pythonhosted.org/packages/"+
"d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/",
)
artifactsM[PythonDistlib] = newViaPip(
"distlib",
"used as the basis for third-party packaging tools",
"0.4.0", "py2.py3", "none", "any",
"lGLLfYVhUhXOTw_84zULaH2K8n6pk1OOVXmJfGavev7N42msbtHoq-XY5D_xULI_",
"https://files.pythonhosted.org/packages/"+
"33/6b/e0547afaf41bf2c42e52430072fa5658766e3d65bd4b03a563d1b6336f57/",
)
artifactsM[PythonFilelock] = newViaPip(
"filelock",
"a platform-independent file locking library for Python",
"3.25.0", "py3", "none", "any",
"0gSQIYNUEjOs1JBxXjGwfLnwFPFINwqyU_Zqgj7fT_EGafv_HaD5h3Xv2Rq_qQ44",
"https://files.pythonhosted.org/packages/"+
"f9/0b/de6f54d4a8bedfe8645c41497f3c18d749f0bd3218170c667bf4b81d0cdd/",
)
artifactsM[PythonPlatformdirs] = newViaPip(
"platformdirs",
"a Python package for determining platform-specific directories",
"4.9.4", "py3", "none", "any",
"JGNpMCX2JMn-7c9bk3QzOSNDgJRR_5lH-jIqfy0zXMZppRCdLsTNbdp4V7QFwxOI",
"https://files.pythonhosted.org/packages/"+
"63/d7/97f7e3a6abb67d8080dd406fd4df842c2be0efaf712d1c899c32a075027c/",
)
artifactsM[PythonDiscovery] = newViaPip(
"python_discovery",
"looks for a python installation",
"1.1.1", "py3", "none", "any",
"Jk_qGMfZYm0fdNOSvMdVQZuQbJlqu3NWRm7T2fRtiBXmHLQyOdJE3ypI_it1OJR0",
"https://files.pythonhosted.org/packages/"+
"75/0f/2bf7e3b5a4a65f623cb820feb5793e243fad58ae561015ee15a6152f67a2/",
PythonFilelock,
PythonPlatformdirs,
)
artifactsM[PythonVirtualenv] = newViaPip(
"virtualenv",
"a tool for creating isolated virtual python environments",
"21.1.0", "py3", "none", "any",
"SLvdr3gJZ7GTS-kiRyq2RvJdrQ8SZYC1pglbViWCMLCuAIcbLNjVEUJZ4hDtKUxm",
"https://files.pythonhosted.org/packages/"+
"78/55/896b06bf93a49bec0f4ae2a6f1ed12bd05c8860744ac3a70eda041064e4d/",
PythonDistlib,
PythonDiscovery,
)
artifactsM[PythonPreCommit] = newViaPip(
"pre_commit",
"a framework for managing and maintaining multi-language pre-commit hooks",
"4.5.1", "py2.py3", "none", "any",
"9G2Hv5JpvXFZVfw4pv_KAsmHD6bvot9Z0YBDmW6JeJizqTA4xEQCKel-pCERqQFK",
"https://files.pythonhosted.org/packages/"+
"5d/19/fd3ef348460c80af7bb4669ea7926651d1f95c23ff2df18b9d24bab4f3fa/",
PythonCfgv,
PythonIdentify,
PythonNodeenv,
PythonPyYAML,
PythonVirtualenv,
)
}


@@ -74,21 +74,16 @@ EOF
Bash,
Python,
Ninja,
-Bzip2,
PkgConfig,
Diffutils,
OpenSSL,
+Bzip2,
XZ,
Flex,
Bison,
M4,
-PCRE2,
-Libffi,
-Zlib,
GLib,
Zstd,
DTC,
@@ -102,5 +97,12 @@ func init() {
Name: "qemu",
Description: "a generic and open source machine emulator and virtualizer",
Website: "https://www.qemu.org/",
+Dependencies: P{
+GLib,
+Zstd,
+},
+ID: 13607,
}
}

internal/rosa/rdfind.go Normal file

@@ -0,0 +1,37 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newRdfind() (pkg.Artifact, string) {
const (
version = "1.8.0"
checksum = "PoaeJ2WIG6yyfe5VAYZlOdAQiR3mb3WhAUMj2ziTCx_IIEal4640HMJUb4SzU9U3"
)
return t.NewPackage("rdfind", version, pkg.NewHTTPGetTar(
nil, "https://rdfind.pauldreik.se/rdfind-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
), nil, &MakeHelper{
// test suite hard codes /bin/echo
ScriptCheckEarly: `
ln -s ../system/bin/toybox /bin/echo
`,
},
Nettle,
), version
}
func init() {
artifactsM[Rdfind] = Metadata{
f: Toolchain.newRdfind,
Name: "rdfind",
Description: "a program that finds duplicate files",
Website: "https://rdfind.pauldreik.se/",
Dependencies: P{
Nettle,
},
ID: 231641,
}
}

internal/rosa/report.go Normal file

@@ -0,0 +1,235 @@
package rosa
import (
"encoding/binary"
"errors"
"fmt"
"io"
"os"
"runtime"
"runtime/debug"
"strconv"
"sync"
"syscall"
"unique"
"unsafe"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
// wordSize is the boundary which binary segments are always aligned to.
const wordSize = 8
// padSize returns the padding size for aligning sz.
func padSize[T int | int64](sz T) T {
return (wordSize - (sz)%wordSize) % wordSize
}
// WriteReport writes a report of all available [PArtifact] to w.
func WriteReport(msg message.Msg, w io.Writer, c *pkg.Cache) error {
var (
zero [wordSize]byte
buf [len(pkg.ID{}) + wordSize]byte
)
for i := range PresetEnd {
a := Std.Load(PArtifact(i))
if _, ok := a.(pkg.FileArtifact); ok {
msg.Verbosef("skipping file artifact %s", artifactsM[i].Name)
continue
}
id := c.Ident(a)
var f *os.File
if r, err := c.OpenStatus(a); err != nil {
if errors.Is(err, os.ErrNotExist) {
msg.Verbosef("artifact %s unavailable", artifactsM[i].Name)
continue
}
return err
} else {
f = r.(*os.File)
}
msg.Verbosef("writing artifact %s...", artifactsM[i].Name)
var sz int64
if fi, err := f.Stat(); err != nil {
_ = f.Close()
return err
} else {
sz = fi.Size()
}
*(*pkg.ID)(buf[:]) = id.Value()
binary.LittleEndian.PutUint64(buf[len(pkg.ID{}):], uint64(sz))
if _, err := w.Write(buf[:]); err != nil {
_ = f.Close()
return err
}
if n, err := io.Copy(w, f); err != nil {
_ = f.Close()
return err
} else if n != sz {
_ = f.Close()
return fmt.Errorf("strange status file copy: %d != %d", n, sz)
} else if err = f.Close(); err != nil {
return err
}
if psz := padSize(sz); psz > 0 {
if _, err := w.Write(zero[:psz]); err != nil {
return err
}
}
// existence of status implies cured artifact
var n int
if pathname, _, err := c.Cure(a); err != nil {
return err
} else if n, err = pkg.Flatten(
os.DirFS(pathname.String()), ".",
io.Discard,
); err != nil {
return err
}
binary.LittleEndian.PutUint64(buf[:], uint64(n))
if _, err := w.Write(buf[:wordSize]); err != nil {
return err
}
}
return nil
}
// Report provides efficient access to a report file populated by [WriteReport].
type Report struct {
// Slice backed by the underlying file.
//
// Access must be prepared by HandleAccess.
data []byte
// Offsets into data for each identifier.
offsets map[unique.Handle[pkg.ID]]int
// Outcome of a call to Close.
closeErr error
// Synchronises calls to Close.
closeOnce sync.Once
}
// OpenReport opens a file populated by [WriteReport]
func OpenReport(pathname string) (rp *Report, err error) {
var f *os.File
if f, err = os.Open(pathname); err != nil {
return
}
var fi os.FileInfo
if fi, err = f.Stat(); err != nil {
_ = f.Close()
return
}
var r Report
if r.data, err = syscall.Mmap(
int(f.Fd()),
0,
int(fi.Size()),
syscall.PROT_READ,
syscall.MAP_PRIVATE,
); err != nil {
_ = f.Close()
return
}
if err = f.Close(); err != nil {
_ = r.Close()
return
}
defer r.HandleAccess(&err)()
var offset int
r.offsets = make(map[unique.Handle[pkg.ID]]int)
for offset < len(r.data) {
id := unique.Make((pkg.ID)(r.data[offset:]))
offset += len(pkg.ID{})
r.offsets[id] = offset
offset += int(binary.LittleEndian.Uint64(r.data[offset:])) + wordSize
offset += padSize(offset)
offset += wordSize
}
return &r, nil
}
// ReportIOError describes an I/O error while accessing a [Report].
type ReportIOError struct {
Offset int
Err error
}
// Unwrap returns the underlying runtime error.
func (e *ReportIOError) Unwrap() error { return e.Err }
// Error returns a description of the error offset.
func (e *ReportIOError) Error() string {
return "report I/O error at offset " + strconv.Itoa(e.Offset)
}
// HandleAccess prepares for accessing memory returned by a method of [Report]
// and returns a function that must be deferred by the caller.
func (r *Report) HandleAccess(errP *error) func() {
pof := debug.SetPanicOnFault(true)
return func() {
debug.SetPanicOnFault(pof)
v := recover()
if v == nil {
return
}
if err, ok := v.(error); !ok {
panic(v)
} else if *errP != nil {
return
} else {
*errP = err
}
var runtimeError interface {
Addr() uintptr
runtime.Error
}
if errors.As(*errP, &runtimeError) {
offset := int(runtimeError.Addr() - uintptr(unsafe.Pointer(unsafe.SliceData(r.data))))
// best effort for fragile uintptr
if offset >= 0 {
*errP = &ReportIOError{offset, *errP}
}
}
}
}
// ArtifactOf returns information of a [pkg.Artifact] corresponding to id.
func (r *Report) ArtifactOf(id unique.Handle[pkg.ID]) (status []byte, n int64) {
if offset, ok := r.offsets[id]; !ok {
n = -1
} else {
sz := int(binary.LittleEndian.Uint64(r.data[offset:]))
offset += wordSize
status = r.data[offset : offset+sz]
offset += sz + padSize(sz)
n = int64(binary.LittleEndian.Uint64(r.data[offset:]))
}
return
}
// Close closes the underlying file and releases all associated resources.
func (r *Report) Close() error {
r.closeOnce.Do(func() { r.closeErr = syscall.Munmap(r.data) })
return r.closeErr
}


@@ -0,0 +1,57 @@
package rosa_test
import (
"errors"
"os"
"path"
"syscall"
"testing"
"unique"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
)
func TestReportZeroLength(t *testing.T) {
report := path.Join(t.TempDir(), "report")
if err := os.WriteFile(report, nil, 0400); err != nil {
t.Fatal(err)
}
if _, err := rosa.OpenReport(report); !errors.Is(err, syscall.EINVAL) {
t.Fatalf("OpenReport: error = %v", err)
}
}
func TestReportSIGSEGV(t *testing.T) {
report := path.Join(t.TempDir(), "report")
if err := os.WriteFile(report, make([]byte, 64), 0400); err != nil {
t.Fatal(err)
}
if r, err := rosa.OpenReport(report); err != nil {
t.Fatalf("OpenReport: error = %v", err)
} else {
status, n := r.ArtifactOf(unique.Make(pkg.ID{}))
if len(status) != 0 {
t.Errorf("ArtifactsOf: status = %#v", status)
}
if n != 0 {
t.Errorf("ArtifactsOf: n = %d", n)
}
if err = r.Close(); err != nil {
t.Fatalf("Close: error = %v", err)
}
defer func() {
ioErr := err.(*rosa.ReportIOError)
if ioErr.Offset != 48 {
panic(ioErr)
}
}()
defer r.HandleAccess(&err)()
r.ArtifactOf(unique.Make(pkg.ID{}))
}
}


@@ -8,6 +8,7 @@ import (
"slices"
"strconv"
"strings"
+"sync"
"hakurei.app/container/fhs"
"hakurei.app/internal/pkg"
@@ -19,6 +20,9 @@ const (
// kindBusyboxBin is the kind of [pkg.Artifact] of busyboxBin.
kindBusyboxBin
+// kindCollection is the kind of [Collect]. It never cures successfully.
+kindCollection
)
// mustDecode is like [pkg.MustDecode], but replaces the zero value and prints // mustDecode is like [pkg.MustDecode], but replaces the zero value and prints
@@ -329,7 +333,7 @@ mkdir -vp /work/system/bin
"AR=ar", "AR=ar",
"RANLIB=ranlib", "RANLIB=ranlib",
"LIBCC=/system/lib/clang/21/lib/" + triplet() + "LIBCC=/system/lib/clang/" + llvmVersionMajor + "/lib/" + triplet() +
"/libclang_rt.builtins.a", "/libclang_rt.builtins.a",
}, "/system/bin", "/bin") }, "/system/bin", "/bin")
@@ -454,6 +458,48 @@ type PackageAttr struct {
Flag int Flag int
} }
// pa holds whether a [PArtifact] is present.
type pa = [PresetEnd]bool
// paPool holds addresses of pa.
var paPool = sync.Pool{New: func() any { return new(pa) }}
// paGet returns the address of a new pa.
func paGet() *pa { return paPool.Get().(*pa) }
// paPut returns a pa to paPool.
func paPut(pv *pa) { *pv = pa{}; paPool.Put(pv) }
// appendPreset recursively appends a [PArtifact] and its runtime dependencies.
func (t Toolchain) appendPreset(
a []pkg.Artifact,
pv *pa, p PArtifact,
) []pkg.Artifact {
if pv[p] {
return a
}
pv[p] = true
for _, d := range GetMetadata(p).Dependencies {
a = t.appendPreset(a, pv, d)
}
return append(a, t.Load(p))
}
// AppendPresets recursively appends multiple [PArtifact] and their runtime
// dependencies.
func (t Toolchain) AppendPresets(
a []pkg.Artifact,
presets ...PArtifact,
) []pkg.Artifact {
pv := paGet()
for _, p := range presets {
a = t.appendPreset(a, pv, p)
}
paPut(pv)
return a
}
// NewPackage constructs a [pkg.Artifact] via a build system helper. // NewPackage constructs a [pkg.Artifact] via a build system helper.
func (t Toolchain) NewPackage( func (t Toolchain) NewPackage(
name, version string, name, version string,
@@ -486,12 +532,14 @@ func (t Toolchain) NewPackage(
extraRes := make([]pkg.Artifact, 0, dc) extraRes := make([]pkg.Artifact, 0, dc)
extraRes = append(extraRes, attr.NonStage0...) extraRes = append(extraRes, attr.NonStage0...)
if !t.isStage0() { if !t.isStage0() {
pv := paGet()
for _, p := range helper.extra(attr.Flag) { for _, p := range helper.extra(attr.Flag) {
extraRes = append(extraRes, t.Load(p)) extraRes = t.appendPreset(extraRes, pv, p)
} }
for _, p := range extra { for _, p := range extra {
extraRes = append(extraRes, t.Load(p)) extraRes = t.appendPreset(extraRes, pv, p)
} }
paPut(pv)
} }
var scriptEarly string var scriptEarly string
@@ -543,3 +591,29 @@ cd '/usr/src/` + name + `/'
})..., })...,
) )
} }
// Collected is returned by [Collect.Cure] to indicate a successful collection.
type Collected struct{}
// Error returns a constant string to satisfy error, but should never be seen
// by the user.
func (Collected) Error() string { return "artifacts successfully collected" }
// Collect implements [pkg.FloodArtifact] to concurrently cure multiple
// [pkg.Artifact]. It returns [Collected].
type Collect []pkg.Artifact
// Cure returns [Collected].
func (*Collect) Cure(*pkg.FContext) error { return Collected{} }
// Kind returns the hardcoded [pkg.Kind] value.
func (*Collect) Kind() pkg.Kind { return kindCollection }
// Params does not write anything, dependencies are already represented in the header.
func (*Collect) Params(*pkg.IContext) {}
// Dependencies returns [Collect] as is.
func (c *Collect) Dependencies() []pkg.Artifact { return *c }
// IsExclusive returns false: Cure is a noop.
func (*Collect) IsExclusive() bool { return false }


@@ -33,5 +33,7 @@ func init() {
 		Name: "rsync",
 		Description: "an open source utility that provides fast incremental file transfer",
 		Website: "https://rsync.samba.org/",
+
+		ID: 4217,
 	}
 }


@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"

 func (t Toolchain) newSquashfsTools() (pkg.Artifact, string) {
 	const (
-		version  = "4.7.4"
-		checksum = "pG0E_wkRJFS6bvPYF-hTKZT-cWnvo5BbIzCDZrJZVQDgJOx2Vc3ZfNSEV7Di7cSW"
+		version  = "4.7.5"
+		checksum = "rF52wLQP-jeAmcD-48wqJcck8ZWRFwkax3T-7snaRf5EBnCQQh0YypMY9lwcivLz"
 	)
 	return t.NewPackage("squashfs-tools", version, pkg.NewHTTPGetTar(
 		nil, "https://github.com/plougher/squashfs-tools/releases/"+
@@ -47,5 +47,13 @@ func init() {
 		Name: "squashfs-tools",
 		Description: "tools to create and extract Squashfs filesystems",
 		Website: "https://github.com/plougher/squashfs-tools",
+
+		Dependencies: P{
+			Zstd,
+			Gzip,
+			Zlib,
+		},
+
+		ID: 4879,
 	}
 }


@@ -15,6 +15,7 @@ func (t Toolchain) newStage0() (pkg.Artifact, string) {
 		runtimes,
 		clang,
+		t.Load(Zlib),
 		t.Load(Bzip2),
 		t.Load(Patch),


@@ -8,13 +8,13 @@ import (

 func (t Toolchain) newTamaGo() (pkg.Artifact, string) {
 	const (
-		version  = "1.26.0"
-		checksum = "5XkfbpTpSdPJfwtTfUegfdu4LUy8nuZ7sCondiRIxTJI9eQONi8z_O_dq9yDkjw8"
+		version  = "1.26.1"
+		checksum = "fimZnklQcYWGsTQU8KepLn-yCYaTfNdMI9DCg6NJVQv-3gOJnUEO9mqRCMAHnEXZ"
 	)
-	return t.New("tamago-go"+version, 0, []pkg.Artifact{
-		t.Load(Bash),
-		t.Load(Go),
-	}, nil, []string{
+	return t.New("tamago-go"+version, 0, t.AppendPresets(nil,
+		Bash,
+		Go,
+	), nil, []string{
 		"CC=cc",
 		"GOCACHE=/tmp/gocache",
 	}, `
@@ -44,5 +44,7 @@ func init() {
 		Name: "tamago",
 		Description: "a Go toolchain extended with support for bare metal execution",
 		Website: "https://github.com/usbarmory/tamago-go",
+
+		ID: 388872,
 	}
 }


@@ -67,6 +67,8 @@ func init() {
 		Name: "toybox",
 		Description: "many common Linux command line utilities",
 		Website: "https://landley.net/toybox/",
+
+		ID: 13818,
 	}
 	artifactsM[toyboxEarly] = Metadata{


@@ -11,10 +11,10 @@ func (t Toolchain) newUnzip() (pkg.Artifact, string) {
 		version  = "6.0"
 		checksum = "fcqjB1IOVRNJ16K5gTGEDt3zCJDVBc7EDSra9w3H93stqkNwH1vaPQs_QGOpQZu1"
 	)
-	return t.New("unzip-"+version, 0, []pkg.Artifact{
-		t.Load(Make),
-		t.Load(Coreutils),
-	}, nil, nil, `
+	return t.New("unzip-"+version, 0, t.AppendPresets(nil,
+		Make,
+		Coreutils,
+	), nil, nil, `
 cd /usr/src/unzip/
 unix/configure
 make -f unix/Makefile generic1
@@ -38,5 +38,7 @@ func init() {
 		Name: "unzip",
 		Description: "portable compression/archiver utilities",
 		Website: "https://infozip.sourceforge.net/",
+
+		ID: 8684,
 	}
 }


@@ -53,5 +53,10 @@ func init() {
 		Name: "util-linux",
 		Description: "a random collection of Linux utilities",
 		Website: "https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git",
+
+		ID: 8179,
+
+		// release candidates confuse Anitya
+		latest: (*Versions).getStable,
 	}
 }


@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"

 func (t Toolchain) newWayland() (pkg.Artifact, string) {
 	const (
-		version  = "1.24.0"
-		checksum = "JxgLiFRRGw2D3uhVw8ZeDbs3V7K_d4z_ypDog2LBqiA_5y2vVbUAk5NT6D5ozm0m"
+		version  = "1.24.91"
+		checksum = "SQkjYShk2TutoBOfmeJcdLU9iDExVKOg0DZhLeL8U_qjc9olLTC7h3vuUBvVtx9w"
 	)
 	return t.NewPackage("wayland", version, pkg.NewHTTPGetTar(
 		nil, "https://gitlab.freedesktop.org/wayland/wayland/"+
@@ -41,6 +41,14 @@ func init() {
 		Name: "wayland",
 		Description: "core Wayland window system code and protocol",
 		Website: "https://wayland.freedesktop.org/",
+
+		Dependencies: P{
+			Libffi,
+			Libexpat,
+			Libxml2,
+		},
+
+		ID: 10061,
 	}
 }
@@ -110,9 +118,6 @@ GitLab
 		},
 	}, (*MesonHelper)(nil),
 		Wayland,
-		Libffi,
-		Libexpat,
-		Libxml2,
 	), version
 }

 func init() {
@@ -122,5 +127,7 @@ func init() {
 		Name: "wayland-protocols",
 		Description: "Additional standard Wayland protocols",
 		Website: "https://wayland.freedesktop.org/",
+
+		ID: 13997,
 	}
 }


@@ -4,14 +4,14 @@ import "hakurei.app/internal/pkg"

 func (t Toolchain) newUtilMacros() (pkg.Artifact, string) {
 	const (
-		version  = "1.17"
-		checksum = "vYPO4Qq3B_WGcsBjG0-lfwZ6DZ7ayyrOLqfDrVOgTDcyLChuMGOAAVAa_UXLu5tD"
+		version  = "1.20.2"
+		checksum = "Ze8QH3Z3emC0pWFP-0nUYeMy7aBW3L_dxBBmVgcumIHNzEKc1iGTR-yUFR3JcM1G"
 	)
 	return t.NewPackage("util-macros", version, pkg.NewHTTPGetTar(
-		nil, "https://www.x.org/releases/X11R7.7/src/util/"+
-			"util-macros-"+version+".tar.bz2",
+		nil, "https://www.x.org/releases/individual/util/"+
+			"util-macros-"+version+".tar.gz",
 		mustDecode(checksum),
-		pkg.TarBzip2,
+		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil)), version
 }

 func init() {
@@ -21,16 +21,18 @@ func init() {
 		Name: "util-macros",
 		Description: "X.Org Autotools macros",
 		Website: "https://xorg.freedesktop.org/",
+
+		ID: 5252,
 	}
 }

 func (t Toolchain) newXproto() (pkg.Artifact, string) {
 	const (
-		version  = "7.0.23"
-		checksum = "goxwWxV0jZ_3pNczXFltZWHAhq92x-aEreUGyp5Ns8dBOoOmgbpeNIu1nv0Zx07z"
+		version  = "7.0.31"
+		checksum = "Cm69urWY5RctKpR78eGzuwrjDEfXGkvHRdodj6sjypOGy5FF4-lmnUttVHYV1ydg"
 	)
 	return t.NewPackage("xproto", version, pkg.NewHTTPGetTar(
-		nil, "https://www.x.org/releases/X11R7.7/src/proto/"+
+		nil, "https://www.x.org/releases/individual/proto/"+
 			"xproto-"+version+".tar.bz2",
 		mustDecode(checksum),
 		pkg.TarBzip2,
@@ -38,9 +40,6 @@ func (t Toolchain) newXproto() (pkg.Artifact, string) {
 		// ancient configure script
 		Generate: "autoreconf -if",
 	},
-		M4,
-		Perl,
-		Autoconf,
 		Automake,
 		PkgConfig,
@@ -54,26 +53,25 @@ func init() {
 		Name: "xproto",
 		Description: "X Window System unified protocol definitions",
 		Website: "https://gitlab.freedesktop.org/xorg/proto/xorgproto",
+
+		ID: 13650,
 	}
 }

 func (t Toolchain) newLibXau() (pkg.Artifact, string) {
 	const (
-		version  = "1.0.7"
-		checksum = "bm768RoZZnHRe9VjNU1Dw3BhfE60DyS9D_bgSR-JLkEEyUWT_Hb_lQripxrXto8j"
+		version  = "1.0.12"
+		checksum = "G9AjnU_C160q814MCdjFOVt_mQz_pIt4wf4GNOQmGJS3UuuyMw53sfPvJ7WOqwXN"
 	)
 	return t.NewPackage("libXau", version, pkg.NewHTTPGetTar(
-		nil, "https://www.x.org/releases/X11R7.7/src/lib/"+
-			"libXau-"+version+".tar.bz2",
+		nil, "https://www.x.org/releases/individual/lib/"+
+			"libXau-"+version+".tar.gz",
 		mustDecode(checksum),
-		pkg.TarBzip2,
+		pkg.TarGzip,
 	), nil, &MakeHelper{
 		// ancient configure script
 		Generate: "autoreconf -if",
 	},
-		M4,
-		Perl,
-		Autoconf,
 		Automake,
 		Libtool,
 		PkgConfig,
@@ -89,5 +87,11 @@ func init() {
 		Name: "libXau",
 		Description: "functions for handling Xauthority files and entries",
 		Website: "https://gitlab.freedesktop.org/xorg/lib/libxau",
+
+		Dependencies: P{
+			Xproto,
+		},
+
+		ID: 1765,
 	}
 }
