44 Commits

Author SHA1 Message Date
d61faa09eb release: 0.3.4
All checks were successful
Release / Create release (push) Successful in 58s
Test / Hakurei (push) Successful in 52s
Test / Create distribution (push) Successful in 30s
Test / Hakurei (race detector) (push) Successful in 49s
Test / ShareFS (push) Successful in 38s
Test / Sandbox (push) Successful in 45s
Test / Sandbox (race detector) (push) Successful in 45s
Test / Hpkg (push) Successful in 47s
Test / Flake checks (push) Successful in 1m47s
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-27 03:56:06 +09:00
50153788ef internal/rosa: hakurei artifact
This does not yet have fuse from staging. Everything else works perfectly, though.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-27 02:24:49 +09:00
c84fe63217 internal/rosa: various X artifacts
Required by xcb which is required by hakurei.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-27 02:02:49 +09:00
eb67e5e0a8 internal/pkg: exclusive artifacts
This alleviates scheduler overhead when curing many artifacts.
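One way such an exclusivity flag can be enforced is with a reader/writer lock: exclusive cures take the write side and run alone, while ordinary cures share the read side. A minimal sketch of the idea — `cureGate` and `runExclusive` are hypothetical names, not the repository's API:

```go
package main

import (
	"fmt"
	"sync"
)

// cureGate serialises artifacts marked exclusive while letting ordinary
// cures share the lock, keeping heavyweight cures from oversubscribing
// the scheduler. Illustrative only.
type cureGate struct{ mu sync.RWMutex }

func (g *cureGate) cure(exclusive bool, run func()) {
	if exclusive {
		g.mu.Lock() // exclusive cure: runs alone
		defer g.mu.Unlock()
	} else {
		g.mu.RLock() // ordinary cure: may run concurrently
		defer g.mu.RUnlock()
	}
	run()
}

// runExclusive launches n exclusive cures that all mutate shared state;
// the gate makes the unsynchronised increment safe.
func runExclusive(n int) int {
	var g cureGate
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go g.cure(true, func() { defer wg.Done(); counter++ })
	}
	wg.Wait()
	return counter
}

func main() { fmt.Println(runExclusive(8)) }
```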

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-27 01:23:50 +09:00
948afe33e5 internal/rosa/acl: use patch helper
This is significantly less ugly.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-27 00:30:50 +09:00
76c657177d internal/rosa: patch ignore whitespace
This makes it work better with patches emitted by git.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 21:56:36 +09:00
4356f978aa internal/rosa: kernel patching
A side effect of this is working around a ZFS performance issue with chmod on overlay mounts.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 21:20:52 +09:00
4f17dad645 internal/rosa: isolate patching helper
This is useful outside llvm as well.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 21:00:29 +09:00
68b7d41c65 internal/rosa: parallel autoconf tests
These take forever and run sequentially by default for some reason.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 19:52:59 +09:00
e48f303e38 internal/rosa: parallel perl tests
This is found in the GitHub action; the test target does not appear to support parallelisation.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 19:45:50 +09:00
f1fd406b82 internal/rosa: link libc ldd
Musl appears to implement this behaviour but does not install the symlink by default.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 08:00:03 +09:00
53b1de3395 internal/rosa: enable static on various artifacts
This is implicitly enabled sometimes, but better to be explicit.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 07:56:14 +09:00
92dcadbf27 internal/acl: connect getfacl stderr
This shows whatever failure is happening in the cure container.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 07:51:16 +09:00
0bd6a18326 internal/rosa: acl artifact
Required by hakurei.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 07:38:56 +09:00
67d592c337 internal/pkg: close gzip reader on success
The Close method panics otherwise.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 07:06:38 +09:00
fdc8a8419b internal/rosa: static libwayland
Required by hakurei.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 06:49:08 +09:00
122cfbf63a internal/rosa: run wayland tests
Broken test is disabled for now.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 06:39:45 +09:00
504f5d28fe internal/rosa: libseccomp artifact
Required by hakurei.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 05:28:36 +09:00
3eadd5c580 internal/rosa: gperf artifact
Required by libseccomp.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 05:25:39 +09:00
4d29333807 internal/rosa: wayland-protocols artifact
Required by hakurei.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 05:13:30 +09:00
e1533fa4c6 internal/rosa: wayland artifact
Required by hakurei.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 05:10:35 +09:00
9a74d5273d internal/rosa: libgd artifact
Required by graphviz which is required by wayland.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 04:20:11 +09:00
2abc8c454e internal/pkg: absolute hard link
This cannot be relative since the curing process is not in the temp directory.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 04:03:05 +09:00
fecb963e85 internal/rosa: libxml2 artifact
Required by wayland. Release tarball is xz only, unfortunately.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 03:47:42 +09:00
cd9da57f20 internal/rosa: libexpat artifact
Required by wayland.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 03:15:25 +09:00
c6a95f5a6a internal/rosa: meson artifact
Required by wayland and pipewire.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 03:03:21 +09:00
228489371d internal/rosa: setuptools artifact
Apparently the only way to install Python packages offline.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 02:28:47 +09:00
490471d22b cmd/mbf: verbose by default
It usually does not make sense to use this without verbose.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 02:12:56 +09:00
763d2572fe internal/rosa: pkg-config artifact
Used by hakurei and many other programs.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 01:26:54 +09:00
bb1b6beb87 internal/rosa: name suffix by toolchain
This makes output more useful during bootstrap.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 00:57:03 +09:00
3224a7da63 cmd/mbf: disable threshold by default
This is not very useful.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-26 00:05:59 +09:00
8a86cf74ee internal/rosa/go: symlink executables
This avoids having to fix up $PATH for every artifact.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-25 23:59:08 +09:00
e34a59e332 internal/rosa/go: run toolchain tests
LLVM patches and a TMPDIR backed by tmpfs fixed most tests. Broken tests in older versions are disabled.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-25 21:21:53 +09:00
861801597d internal/pkg: expose response body
This uses the new measured reader provided by Cache. This should make httpArtifact zero-copy.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-25 16:10:34 +09:00
334578fdde internal/pkg: expose underlying reader
This will be fully implemented in httpArtifact in a future commit.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-25 14:48:25 +09:00
20790af71e internal/rosa: lazy initialise all artifacts
This improves performance, though not as drastically as lazy initialising llvm.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-25 01:43:18 +09:00
43b8a40fc0 internal/rosa: lazy initialise llvm
This significantly improves performance.
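In Go, lazy initialisation of an expensive value is one line with sync.OnceValue (Go 1.21+). A sketch with illustrative names; the llvm identifier here is only for flavour:

```go
package main

import (
	"fmt"
	"sync"
)

var initCount int

// llvm defers building the expensive artifact description until the
// first caller needs it; sync.OnceValue guarantees the constructor
// runs at most once even under concurrent callers.
var llvm = sync.OnceValue(func() string {
	initCount++ // expensive construction happens at most once
	return "llvm-artifact"
})

func main() {
	// nothing is constructed until the first call
	a, b := llvm(), llvm()
	fmt.Println(a, b, initCount)
}
```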

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-25 00:29:46 +09:00
87c3059214 internal/rosa: run perl tests
A broken test with an unexplained failure is disabled.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-24 18:58:09 +09:00
6956dfc31a internal/pkg: block on implementation entry
This avoids blocking while not in the Cure method of the implementation.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-24 16:02:50 +09:00
d9ebaf20f8 internal/rosa: stage3 special case helper
This makes it cleaner to specify non-stage3 and stage3-exclusive dependencies.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-24 12:23:35 +09:00
acee0b3632 internal/pkg: increase output buffer size
This avoids truncating unreasonably long lines from llvm.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-24 11:45:44 +09:00
5e55a796df internal/rosa: gnu patch artifact
This is more robust than the busybox implementation.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-24 11:32:27 +09:00
f6eaf76ec9 internal/rosa: patch library paths
This removes the need for reference LDFLAGS in the standard toolchain.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-24 11:22:25 +09:00
5c127a7035 internal/rosa: patch header search paths
This removes the need for reference CFLAGS in the standard toolchain.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-01-23 01:56:52 +09:00
41 changed files with 1701 additions and 491 deletions


@@ -47,13 +47,13 @@ func main() {
 	}()
 	var (
-		flagVerbose bool
+		flagQuiet   bool
 		flagCures   int
 		flagBase    string
 		flagTShift  int
 	)
 	c := command.New(os.Stderr, log.Printf, "mbf", func([]string) (err error) {
-		msg.SwapVerbose(flagVerbose)
+		msg.SwapVerbose(!flagQuiet)
 		var base *check.Absolute
 		if flagBase, err = filepath.Abs(flagBase); err != nil {
@@ -62,16 +62,19 @@ func main() {
 			return
 		}
 		if cache, err = pkg.Open(ctx, msg, flagCures, base); err == nil {
-			if flagTShift < 0 || flagTShift > 31 {
-				flagTShift = 31
+			if flagTShift < 0 {
+				cache.SetThreshold(0)
+			} else if flagTShift > 31 {
+				cache.SetThreshold(1 << 31)
+			} else {
+				cache.SetThreshold(1 << flagTShift)
 			}
-			cache.SetThreshold(1 << flagTShift)
 		}
 		return
 	}).Flag(
-		&flagVerbose,
-		"v", command.BoolFlag(false),
-		"Print cure messages to the console",
+		&flagQuiet,
+		"q", command.BoolFlag(false),
+		"Do not print cure messages",
 	).Flag(
 		&flagCures,
 		"cures", command.IntFlag(0),
@@ -82,7 +85,7 @@ func main() {
 		"Directory to store cured artifacts",
 	).Flag(
 		&flagTShift,
-		"tshift", command.IntFlag(31),
+		"tshift", command.IntFlag(-1),
 		"Dependency graph size exponent, to the power of 2",
 	)
@@ -143,24 +146,86 @@ func main() {
 		if len(args) != 1 {
 			return errors.New("cure requires 1 argument")
 		}
-		var a pkg.Artifact
+		var p rosa.PArtifact
 		switch args[0] {
+		case "acl":
+			p = rosa.ACL
+		case "attr":
+			p = rosa.Attr
+		case "autoconf":
+			p = rosa.Autoconf
+		case "bash":
+			p = rosa.Bash
 		case "busybox":
-			a = rosa.Std.NewBusybox()
-		case "musl":
-			a = rosa.Std.NewMusl(nil)
+			p = rosa.Busybox
+		case "cmake":
+			p = rosa.CMake
+		case "coreutils":
+			p = rosa.Coreutils
+		case "diffutils":
+			p = rosa.Diffutils
+		case "gettext":
+			p = rosa.Gettext
 		case "git":
-			a = rosa.Std.NewGit()
+			p = rosa.Git
 		case "go":
-			a = rosa.Std.NewGo()
+			p = rosa.Go
+		case "gperf":
+			p = rosa.Gperf
+		case "hakurei":
+			p = rosa.Hakurei
+		case "kernel-headers":
+			p = rosa.KernelHeaders
+		case "libXau":
+			p = rosa.LibXau
+		case "libexpat":
+			p = rosa.Libexpat
+		case "libseccomp":
+			p = rosa.Libseccomp
+		case "libxml2":
+			p = rosa.Libxml2
+		case "libffi":
+			p = rosa.Libffi
+		case "libgd":
+			p = rosa.Libgd
+		case "m4":
+			p = rosa.M4
+		case "make":
+			p = rosa.Make
+		case "meson":
+			p = rosa.Meson
+		case "ninja":
+			p = rosa.Ninja
+		case "patch":
+			p = rosa.Patch
+		case "perl":
+			p = rosa.Perl
+		case "pkg-config":
+			p = rosa.PkgConfig
+		case "python":
+			p = rosa.Python
 		case "rsync":
-			a = rosa.Std.NewRsync()
+			p = rosa.Rsync
+		case "setuptools":
+			p = rosa.Setuptools
+		case "wayland":
+			p = rosa.Wayland
+		case "wayland-protocols":
+			p = rosa.WaylandProtocols
+		case "xcb":
+			p = rosa.XCB
+		case "xcb-proto":
+			p = rosa.XCBProto
+		case "xproto":
+			p = rosa.Xproto
+		case "zlib":
+			p = rosa.Zlib
 		default:
 			return fmt.Errorf("unsupported artifact %q", args[0])
 		}
-		pathname, _, err := cache.Cure(a)
+		pathname, _, err := cache.Cure(rosa.Std.Load(p))
 		if err == nil {
 			log.Println(pathname)
 		}

dist/install.sh vendored

@@ -1,12 +1,12 @@
 #!/bin/sh
 cd "$(dirname -- "$0")" || exit 1
-install -vDm0755 "bin/hakurei" "${HAKUREI_INSTALL_PREFIX}/usr/bin/hakurei"
-install -vDm0755 "bin/sharefs" "${HAKUREI_INSTALL_PREFIX}/usr/bin/sharefs"
-install -vDm4511 "bin/hsu" "${HAKUREI_INSTALL_PREFIX}/usr/bin/hsu"
-if [ ! -f "${HAKUREI_INSTALL_PREFIX}/etc/hsurc" ]; then
-	install -vDm0400 "hsurc.default" "${HAKUREI_INSTALL_PREFIX}/etc/hsurc"
+install -vDm0755 "bin/hakurei" "${DESTDIR}/usr/bin/hakurei"
+install -vDm0755 "bin/sharefs" "${DESTDIR}/usr/bin/sharefs"
+install -vDm4511 "bin/hsu" "${DESTDIR}/usr/bin/hsu"
+if [ ! -f "${DESTDIR}/etc/hsurc" ]; then
+	install -vDm0400 "hsurc.default" "${DESTDIR}/etc/hsurc"
 fi
-install -vDm0644 "comp/_hakurei" "${HAKUREI_INSTALL_PREFIX}/usr/share/zsh/site-functions/_hakurei"
+install -vDm0644 "comp/_hakurei" "${DESTDIR}/usr/share/zsh/site-functions/_hakurei"


@@ -25,8 +25,7 @@ var (
 func TestUpdate(t *testing.T) {
 	if os.Getenv("GO_TEST_SKIP_ACL") == "1" {
-		t.Log("acl test skipped")
-		t.SkipNow()
+		t.Skip("acl test skipped")
 	}
 
 	testFilePath := path.Join(t.TempDir(), testFileName)
@@ -143,6 +142,7 @@ func (c *getFAclInvocation) run(name string) error {
 	}
 	c.cmd = exec.Command("getfacl", "--omit-header", "--absolute-names", "--numeric", name)
+	c.cmd.Stderr = os.Stderr
 
 	scanErr := make(chan error, 1)
 	if p, err := c.cmd.StdoutPipe(); err != nil {
@@ -254,7 +254,7 @@ func getfacl(t *testing.T, name string) []*getFAclResp {
 		t.Fatalf("getfacl: error = %v", err)
 	}
 	if len(c.pe) != 0 {
-		t.Errorf("errors encountered parsing getfacl output\n%s", errors.Join(c.pe...).Error())
+		t.Errorf("errors encountered parsing getfacl output\n%s", errors.Join(c.pe...))
 	}
 	return c.val
 }


@@ -32,7 +32,7 @@ type ExecPath struct {
 	P *check.Absolute
 	// Artifacts to mount on the pathname, must contain at least one [Artifact].
 	// If there are multiple entries or W is true, P is set up as an overlay
-	// mount, and entries of A must not implement [File].
+	// mount, and entries of A must not implement [FileArtifact].
 	A []Artifact
 	// Whether to make the mount point writable via the temp directory.
 	W bool
@@ -105,6 +105,9 @@ type execArtifact struct {
 	// equivalent to execTimeoutDefault. This value is never encoded in Params
 	// because it cannot affect outcome.
 	timeout time.Duration
+
+	// Caller-supplied exclusivity value, returned as is by IsExclusive.
+	exclusive bool
 }
 
 var _ fmt.Stringer = new(execArtifact)
@@ -123,7 +126,7 @@ var _ KnownChecksum = new(execNetArtifact)
func (a *execNetArtifact) Checksum() Checksum { return a.checksum } func (a *execNetArtifact) Checksum() Checksum { return a.checksum }
// Kind returns the hardcoded [Kind] constant. // Kind returns the hardcoded [Kind] constant.
func (a *execNetArtifact) Kind() Kind { return KindExecNet } func (*execNetArtifact) Kind() Kind { return KindExecNet }
// Params is [Checksum] concatenated with [KindExec] params. // Params is [Checksum] concatenated with [KindExec] params.
func (a *execNetArtifact) Params(ctx *IContext) { func (a *execNetArtifact) Params(ctx *IContext) {
@@ -157,13 +160,14 @@ func (a *execNetArtifact) Cure(f *FContext) error {
// negative timeout value is equivalent tp [ExecTimeoutDefault], a timeout value // negative timeout value is equivalent tp [ExecTimeoutDefault], a timeout value
// greater than [ExecTimeoutMax] is equivalent to [ExecTimeoutMax]. // greater than [ExecTimeoutMax] is equivalent to [ExecTimeoutMax].
// //
// The user-facing name is not accessible from the container and does not // The user-facing name and exclusivity value are not accessible from the
// affect curing outcome. Because of this, it is omitted from parameter data // container and does not affect curing outcome. Because of this, it is omitted
// for computing identifier. // from parameter data for computing identifier.
func NewExec( func NewExec(
name string, name string,
checksum *Checksum, checksum *Checksum,
timeout time.Duration, timeout time.Duration,
exclusive bool,
dir *check.Absolute, dir *check.Absolute,
env []string, env []string,
@@ -181,7 +185,7 @@ func NewExec(
if timeout > ExecTimeoutMax { if timeout > ExecTimeoutMax {
timeout = ExecTimeoutMax timeout = ExecTimeoutMax
} }
a := execArtifact{name, paths, dir, env, pathname, args, timeout} a := execArtifact{name, paths, dir, env, pathname, args, timeout, exclusive}
if checksum == nil { if checksum == nil {
return &a return &a
} }
@@ -189,7 +193,7 @@ func NewExec(
} }
// Kind returns the hardcoded [Kind] constant. // Kind returns the hardcoded [Kind] constant.
func (a *execArtifact) Kind() Kind { return KindExec } func (*execArtifact) Kind() Kind { return KindExec }
// Params writes paths, executable pathname and args. // Params writes paths, executable pathname and args.
func (a *execArtifact) Params(ctx *IContext) { func (a *execArtifact) Params(ctx *IContext) {
@@ -237,6 +241,9 @@ func (a *execArtifact) Dependencies() []Artifact {
return slices.Concat(artifacts...) return slices.Concat(artifacts...)
} }
// IsExclusive returns the caller-supplied exclusivity value.
func (a *execArtifact) IsExclusive() bool { return a.exclusive }
// String returns the caller-supplied reporting name. // String returns the caller-supplied reporting name.
func (a *execArtifact) String() string { return a.name } func (a *execArtifact) String() string { return a.name }
@@ -259,6 +266,10 @@ func scanVerbose(
) { ) {
defer close(done) defer close(done)
s := bufio.NewScanner(r) s := bufio.NewScanner(r)
s.Buffer(
make([]byte, bufio.MaxScanTokenSize),
bufio.MaxScanTokenSize<<12,
)
for s.Scan() { for s.Scan() {
msg.Verbose(prefix, s.Text()) msg.Verbose(prefix, s.Text())
} }


@@ -39,7 +39,7 @@ func TestExec(t *testing.T) {
 	cureMany(t, c, []cureStep{
 		{"container", pkg.NewExec(
-			"exec-offline", nil, 0,
+			"exec-offline", nil, 0, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -62,7 +62,7 @@
 		), ignorePathname, wantChecksumOffline, nil},
 		{"error passthrough", pkg.NewExec(
-			"", nil, 0,
+			"", nil, 0, true,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -85,7 +85,7 @@
 		}},
 		{"invalid paths", pkg.NewExec(
-			"", nil, 0,
+			"", nil, 0, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -98,7 +98,7 @@
 	// check init failure passthrough
 	var exitError *exec.ExitError
 	if _, _, err := c.Cure(pkg.NewExec(
-		"", nil, 0,
+		"", nil, 0, false,
 		pkg.AbsWork,
 		nil,
 		check.MustAbs("/opt/bin/testtool"),
@@ -120,7 +120,7 @@
 	)
 	cureMany(t, c, []cureStep{
 		{"container", pkg.NewExec(
-			"exec-net", &wantChecksum, 0,
+			"exec-net", &wantChecksum, 0, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -152,7 +152,7 @@
 	cureMany(t, c, []cureStep{
 		{"container", pkg.NewExec(
-			"exec-overlay-root", nil, 0,
+			"exec-overlay-root", nil, 0, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -178,7 +178,7 @@
 	cureMany(t, c, []cureStep{
 		{"container", pkg.NewExec(
-			"exec-overlay-work", nil, 0,
+			"exec-overlay-work", nil, 0, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 			check.MustAbs("/work/bin/testtool"),
@@ -209,7 +209,7 @@
 	cureMany(t, c, []cureStep{
 		{"container", pkg.NewExec(
-			"exec-multiple-layers", nil, 0,
+			"exec-multiple-layers", nil, 0, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -239,7 +239,9 @@
 		cure: func(t *pkg.TContext) error {
 			return os.MkdirAll(t.GetWorkDir().String(), 0700)
 		},
-	}}, 1<<5 /* concurrent cache hits */), cure: func(f *pkg.FContext) error {
+	}}, 1<<5 /* concurrent cache hits */),
+	cure: func(f *pkg.FContext) error {
 		work := f.GetWorkDir()
 		if err := os.MkdirAll(work.String(), 0700); err != nil {
 			return err
@@ -260,7 +262,7 @@
 	cureMany(t, c, []cureStep{
 		{"container", pkg.NewExec(
-			"exec-layer-promotion", nil, 0,
+			"exec-layer-promotion", nil, 0, true,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 			check.MustAbs("/opt/bin/testtool"),


@@ -1,9 +1,10 @@
 package pkg
 import (
-	"context"
+	"bytes"
 	"crypto/sha512"
 	"fmt"
+	"io"
 )
 // A fileArtifact is an [Artifact] that cures into data known ahead of time.
@@ -24,10 +25,10 @@ var _ KnownChecksum = new(fileArtifactNamed)
 // String returns the caller-supplied reporting name.
 func (a *fileArtifactNamed) String() string { return a.name }
-// NewFile returns a [File] that cures into a caller-supplied byte slice.
+// NewFile returns a [FileArtifact] that cures into a caller-supplied byte slice.
 //
 // Caller must not modify data after NewFile returns.
-func NewFile(name string, data []byte) File {
+func NewFile(name string, data []byte) FileArtifact {
 	f := fileArtifact(data)
 	if name != "" {
 		return &fileArtifactNamed{f, name}
@@ -36,13 +37,16 @@
 }
 // Kind returns the hardcoded [Kind] constant.
-func (a *fileArtifact) Kind() Kind { return KindFile }
+func (*fileArtifact) Kind() Kind { return KindFile }
 // Params writes the result of Cure.
 func (a *fileArtifact) Params(ctx *IContext) { ctx.GetHash().Write(*a) }
 // Dependencies returns a nil slice.
-func (a *fileArtifact) Dependencies() []Artifact { return nil }
+func (*fileArtifact) Dependencies() []Artifact { return nil }
+// IsExclusive returns false: Cure returns a prepopulated buffer.
+func (*fileArtifact) IsExclusive() bool { return false }
 // Checksum computes and returns the checksum of caller-supplied data.
 func (a *fileArtifact) Checksum() Checksum {
@@ -52,4 +56,6 @@ func (a *fileArtifact) Checksum() Checksum {
 }
 // Cure returns the caller-supplied data.
-func (a *fileArtifact) Cure(context.Context) ([]byte, error) { return *a, nil }
+func (a *fileArtifact) Cure(*RContext) (io.ReadCloser, error) {
+	return io.NopCloser(bytes.NewReader(*a)), nil
+}


@@ -1,13 +1,11 @@
 package pkg
 import (
-	"context"
-	"crypto/sha512"
 	"fmt"
 	"io"
 	"net/http"
 	"path"
-	"sync"
+	"unique"
 )
 // An httpArtifact is an [Artifact] backed by a [http] url string. The method is
@@ -17,38 +15,32 @@ type httpArtifact struct {
 	// Caller-supplied url string.
 	url string
-	// Caller-supplied checksum of the response body. This is validated during
-	// curing and the first call to Data.
-	checksum Checksum
+	// Caller-supplied checksum of the response body. This is validated when
+	// closing the [io.ReadCloser] returned by Cure.
+	checksum unique.Handle[Checksum]
 	// doFunc is the Do method of [http.Client] supplied by the caller.
 	doFunc func(req *http.Request) (*http.Response, error)
-	// Response body read to EOF.
-	data []byte
-	// Synchronises access to data.
-	mu sync.Mutex
 }
 var _ KnownChecksum = new(httpArtifact)
 var _ fmt.Stringer = new(httpArtifact)
-// NewHTTPGet returns a new [File] backed by the supplied client. A GET request
-// is set up for url. If c is nil, [http.DefaultClient] is used instead.
+// NewHTTPGet returns a new [FileArtifact] backed by the supplied client. A GET
+// request is set up for url. If c is nil, [http.DefaultClient] is used instead.
 func NewHTTPGet(
 	c *http.Client,
 	url string,
 	checksum Checksum,
-) File {
+) FileArtifact {
 	if c == nil {
 		c = http.DefaultClient
 	}
-	return &httpArtifact{url: url, checksum: checksum, doFunc: c.Do}
+	return &httpArtifact{url: url, checksum: unique.Make(checksum), doFunc: c.Do}
 }
 // Kind returns the hardcoded [Kind] constant.
-func (a *httpArtifact) Kind() Kind { return KindHTTPGet }
+func (*httpArtifact) Kind() Kind { return KindHTTPGet }
 // Params writes the backing url string. Client is not represented as it does
 // not affect [Cache.Cure] outcome.
@@ -57,10 +49,13 @@ func (a *httpArtifact) Params(ctx *IContext) {
 }
 // Dependencies returns a nil slice.
-func (a *httpArtifact) Dependencies() []Artifact { return nil }
+func (*httpArtifact) Dependencies() []Artifact { return nil }
+// IsExclusive returns false: Cure returns as soon as a response is received.
+func (*httpArtifact) IsExclusive() bool { return false }
 // Checksum returns the caller-supplied checksum.
-func (a *httpArtifact) Checksum() Checksum { return a.checksum }
+func (a *httpArtifact) Checksum() Checksum { return a.checksum.Value() }
 // String returns [path.Base] over the backing url.
 func (a *httpArtifact) String() string { return path.Base(a.url) }
@@ -73,11 +68,13 @@ func (e ResponseStatusError) Error() string {
 	return "the requested URL returned non-OK status: " + http.StatusText(int(e))
 }
-// do sends the caller-supplied request on the caller-supplied [http.Client]
-// and reads its response body to EOF and returns the resulting bytes.
-func (a *httpArtifact) do(ctx context.Context) (data []byte, err error) {
+// Cure sends the http request and returns the resulting response body reader
+// wrapped to perform checksum validation. It is valid but not encouraged to
+// close the resulting [io.ReadCloser] before it is read to EOF, as that causes
+// Close to block until all remaining data is consumed and validated.
+func (a *httpArtifact) Cure(r *RContext) (rc io.ReadCloser, err error) {
 	var req *http.Request
-	req, err = http.NewRequestWithContext(ctx, http.MethodGet, a.url, nil)
+	req, err = http.NewRequestWithContext(r.Unwrap(), http.MethodGet, a.url, nil)
 	if err != nil {
 		return
 	}
@@ -92,35 +89,6 @@
 		return nil, ResponseStatusError(resp.StatusCode)
 	}
-	if data, err = io.ReadAll(resp.Body); err != nil {
-		_ = resp.Body.Close()
-		return
-	}
-	err = resp.Body.Close()
-	return
-}
-// Cure completes the http request and returns the resulting response body read
-// to EOF. Data does not interact with the filesystem.
-func (a *httpArtifact) Cure(ctx context.Context) (data []byte, err error) {
-	a.mu.Lock()
-	defer a.mu.Unlock()
-	if a.data != nil {
-		// validated by cache or a previous call to Data
-		return a.data, nil
-	}
-	if data, err = a.do(ctx); err != nil {
-		return
-	}
-	h := sha512.New384()
-	h.Write(data)
-	if got := (Checksum)(h.Sum(nil)); got != a.checksum {
-		return nil, &ChecksumMismatchError{got, a.checksum}
-	}
-	a.data = data
-	return
-}
+	rc = r.NewMeasuredReader(resp.Body, a.checksum)
 	return
 }


@@ -2,11 +2,13 @@ package pkg_test
 import (
 	"crypto/sha512"
+	"io"
 	"net/http"
 	"reflect"
 	"testing"
 	"testing/fstest"
 	"unique"
+	"unsafe"
 	"hakurei.app/container/check"
 	"hakurei.app/internal/pkg"
@@ -31,15 +33,27 @@ func TestHTTPGet(t *testing.T) {
 	checkWithCache(t, []cacheTestCase{
 		{"direct", nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+			var r pkg.RContext
+			rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
+			reflect.NewAt(
+				rCacheVal.Type(),
+				unsafe.Pointer(rCacheVal.UnsafeAddr()),
+			).Elem().Set(reflect.ValueOf(c))
 			f := pkg.NewHTTPGet(
 				&client,
 				"file:///testdata",
 				testdataChecksum.Value(),
 			)
-			if got, err := f.Cure(t.Context()); err != nil {
+			var got []byte
+			if rc, err := f.Cure(&r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
+			} else if got, err = io.ReadAll(rc); err != nil {
+				t.Fatalf("ReadAll: error = %v", err)
 			} else if string(got) != testdata {
 				t.Fatalf("Cure: %x, want %x", got, testdata)
+			} else if err = rc.Close(); err != nil {
+				t.Fatalf("Close: error = %v", err)
 			}
 			// check direct validation
@@ -51,8 +65,21 @@
 			wantErrMismatch := &pkg.ChecksumMismatchError{
 				Got: testdataChecksum.Value(),
 			}
-			if _, err := f.Cure(t.Context()); !reflect.DeepEqual(err, wantErrMismatch) {
-				t.Fatalf("Cure: error = %#v, want %#v", err, wantErrMismatch)
+			if rc, err := f.Cure(&r); err != nil {
+				t.Fatalf("Cure: error = %v", err)
+			} else if got, err = io.ReadAll(rc); err != nil {
+				t.Fatalf("ReadAll: error = %v", err)
+			} else if string(got) != testdata {
+				t.Fatalf("Cure: %x, want %x", got, testdata)
+			} else if err = rc.Close(); !reflect.DeepEqual(err, wantErrMismatch) {
+				t.Fatalf("Close: error = %#v, want %#v", err, wantErrMismatch)
+			}
+			// check fallback validation
+			if rc, err := f.Cure(&r); err != nil {
+				t.Fatalf("Cure: error = %v", err)
+			} else if err = rc.Close(); !reflect.DeepEqual(err, wantErrMismatch) {
+				t.Fatalf("Close: error = %#v, want %#v", err, wantErrMismatch)
 			}
 			// check direct response error
@@ -62,12 +89,19 @@
 				pkg.Checksum{},
 			)
 			wantErrNotFound := pkg.ResponseStatusError(http.StatusNotFound)
-			if _, err := f.Cure(t.Context()); !reflect.DeepEqual(err, wantErrNotFound) {
+			if _, err := f.Cure(&r); !reflect.DeepEqual(err, wantErrNotFound) {
 				t.Fatalf("Cure: error = %#v, want %#v", err, wantErrNotFound)
 			}
 		}, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
 		{"cure", nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+			var r pkg.RContext
+			rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
+			reflect.NewAt(
+				rCacheVal.Type(),
+				unsafe.Pointer(rCacheVal.UnsafeAddr()),
+			).Elem().Set(reflect.ValueOf(c))
 			f := pkg.NewHTTPGet(
 				&client,
 				"file:///testdata",
@@ -85,10 +119,15 @@
 				t.Fatalf("Cure: %x, want %x", checksum.Value(), testdataChecksum.Value())
 			}
-			if got, err := f.Cure(t.Context()); err != nil {
+			var got []byte
+			if rc, err := f.Cure(&r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
+			} else if got, err = io.ReadAll(rc); err != nil {
+				t.Fatalf("ReadAll: error = %v", err)
 			} else if string(got) != testdata {
 				t.Fatalf("Cure: %x, want %x", got, testdata)
+			} else if err = rc.Close(); err != nil {
+				t.Fatalf("Close: error = %v", err)
 			}
 			// check load from cache
@@ -97,10 +136,14 @@
 				"file:///testdata",
 				testdataChecksum.Value(),
 			)
-			if got, err := f.Cure(t.Context()); err != nil {
+			if rc, err := f.Cure(&r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
+			} else if got, err = io.ReadAll(rc); err != nil {
+				t.Fatalf("ReadAll: error = %v", err)
 			} else if string(got) != testdata {
 				t.Fatalf("Cure: %x, want %x", got, testdata)
+			} else if err = rc.Close(); err != nil {
+				t.Fatalf("Close: error = %v", err)
 			}
 			// check error passthrough


@@ -2,6 +2,7 @@
 package pkg
 import (
+	"bufio"
 	"bytes"
 	"context"
 	"crypto/sha512"
@@ -147,14 +148,15 @@ func (t *TContext) GetWorkDir() *check.Absolute { return t.work }
 // create it if they wish to use it, using [os.MkdirAll].
 func (t *TContext) GetTempDir() *check.Absolute { return t.temp }
-// Open tries to open [Artifact] for reading. If a implements [File], its data
-// might be used directly, eliminating the roundtrip to vfs. Otherwise, it must
-// cure into a directory containing a single regular file.
+// Open tries to open [Artifact] for reading. If a implements [FileArtifact],
+// its reader might be used directly, eliminating the roundtrip to vfs.
+// Otherwise, it must cure into a directory containing a single regular file.
 //
-// If err is nil, the caller is responsible for closing the resulting
-// [io.ReadCloser].
+// If err is nil, the caller must close the resulting [io.ReadCloser] and return
+// its error, if any. Failure to read r to EOF may result in a spurious
+// [ChecksumMismatchError], or the underlying implementation may block on Close.
 func (t *TContext) Open(a Artifact) (r io.ReadCloser, err error) {
-	if f, ok := a.(File); ok {
+	if f, ok := a.(FileArtifact); ok {
 		return t.cache.openFile(f)
 	}
@@ -213,6 +215,20 @@ func (f *FContext) GetArtifact(a Artifact) (
 	panic(InvalidLookupError(f.cache.Ident(a).Value()))
 }
+// RContext is passed to [FileArtifact.Cure] and provides helper methods useful
+// for curing the [FileArtifact].
+//
+// Methods of RContext are safe for concurrent use. RContext is valid
+// until [FileArtifact.Cure] returns.
+type RContext struct {
+	// Address of underlying [Cache], should be zeroed or made unusable after
+	// [FileArtifact.Cure] returns and must not be exposed directly.
+	cache *Cache
+}
+// Unwrap returns the underlying [context.Context].
+func (r *RContext) Unwrap() context.Context { return r.cache.ctx }
 // An Artifact is a read-only reference to a piece of data that may be created
 // deterministically but might not currently be available in memory or on the
 // filesystem.
@@ -237,6 +253,24 @@ type Artifact interface {
 	//
 	// Result must remain identical across multiple invocations.
 	Dependencies() []Artifact
+	// IsExclusive returns whether the [Artifact] is exclusive. Exclusive
+	// artifacts might not run in parallel with each other, and are still
+	// subject to the cures limit.
+	//
+	// Some implementations may saturate the CPU for a nontrivial amount of
+	// time. Curing multiple such implementations simultaneously causes
+	// significant CPU scheduler overhead. An exclusive artifact will generally
+	// not be cured alongside another exclusive artifact, thus alleviating this
+	// overhead.
+	//
+	// Note that [Cache] reserves the right to still cure exclusive
+	// artifacts concurrently, as this is an optimisation, not a
+	// synchronisation primitive. Implementations are forbidden from accessing
+	// global state regardless of exclusivity.
+	//
+	// Result must remain identical across multiple invocations.
+	IsExclusive() bool
 }
 // FloodArtifact refers to an [Artifact] requiring its entire dependency graph
@@ -274,7 +308,7 @@ func Flood(a Artifact) iter.Seq[Artifact] {
 //
 // TrivialArtifact is unable to cure any other [Artifact] and it cannot access
 // pathnames. This type of [Artifact] is primarily intended for dependency-less
-// artifacts or direct dependencies that only consists of [File].
+// artifacts or direct dependencies that only consist of [FileArtifact].
 type TrivialArtifact interface {
 	// Cure cures the current [Artifact] to the working directory obtained via
 	// [TContext.GetWorkDir].
@@ -309,16 +343,19 @@ type KnownChecksum interface {
 	Checksum() Checksum
 }
-// A File refers to an [Artifact] backed by a single file.
-type File interface {
-	// Cure returns the full contents of [File]. If [File] implements
-	// [KnownChecksum], Cure is responsible for validating any data it produces
-	// and must return [ChecksumMismatchError] if validation fails.
+// FileArtifact refers to an [Artifact] backed by a single file.
+type FileArtifact interface {
+	// Cure returns [io.ReadCloser] of the full contents of [FileArtifact]. If
+	// [FileArtifact] implements [KnownChecksum], Cure is responsible for
+	// validating any data it produces and must return [ChecksumMismatchError]
+	// if validation fails. This error is conventionally returned during the
+	// first call to Close, but may be returned during any call to Read before
+	// EOF, or by Cure itself.
 	//
-	// Callers must not modify the returned byte slice.
+	// Callers are responsible for closing the resulting [io.ReadCloser].
 	//
 	// Result must remain identical across multiple invocations.
-	Cure(ctx context.Context) ([]byte, error)
+	Cure(r *RContext) (io.ReadCloser, error)
 	Artifact
 }
@@ -413,9 +450,9 @@ type pendingArtifactDep struct {
 // Cache is a support layer that implementations of [Artifact] can use to store
 // cured [Artifact] data in a content addressed fashion.
 type Cache struct {
-	// Work for curing dependency [Artifact] is sent here and cured concurrently
-	// while subject to the cures limit. Invalid after the context is canceled.
-	cureDep chan<- *pendingArtifactDep
+	// Cures of any variant of [Artifact] send to cures before entering the
+	// implementation and receive an equal number of elements afterwards.
+	cures chan struct{}
 	// [context.WithCancel] over caller-supplied context, used by [Artifact] and
 	// all dependency curing goroutines.
@@ -430,7 +467,7 @@ type Cache struct {
 	// Directory where all [Cache] related files are placed.
 	base *check.Absolute
-	// Whether to validate [File.Cure] for a [KnownChecksum] file. This
+	// Whether to validate [FileArtifact.Cure] for a [KnownChecksum] file. This
 	// significantly reduces performance.
 	strict bool
 	// Maximum size of a dependency graph.
@@ -453,6 +490,11 @@ type Cache struct {
 	// Synchronises access to ident and corresponding filesystem entries.
 	identMu sync.RWMutex
+	// Synchronises entry into exclusive artifacts for the cure method.
+	exclMu sync.Mutex
+	// Buffered I/O free list, must not be accessed directly.
+	bufioPool sync.Pool
 	// Unlocks the on-filesystem cache. Must only be called from Close.
 	unlock func()
 	// Synchronises calls to Close.
@@ -573,8 +615,8 @@ func (e *ChecksumMismatchError) Error() string {
 // found and removed from the underlying storage of [Cache].
 type ScrubError struct {
-	// Content-addressed entries not matching their checksum. This can happen
-	// if an incorrect [File] implementation was cured against a non-strict
-	// [Cache].
+	// Content-addressed entries not matching their checksum. This can happen
+	// if an incorrect [FileArtifact] implementation was cured against
+	// a non-strict [Cache].
 	ChecksumMismatches []ChecksumMismatchError
 	// Dangling identifier symlinks. This can happen if the content-addressed
 	// entry was removed while scrubbing due to a checksum mismatch.
@@ -910,10 +952,11 @@ func (c *Cache) finaliseIdent(
 	close(done)
 }
-// openFile tries to load [File] from [Cache], and if that fails, obtains it via
-// [File.Cure] instead. Notably, it does not cure [File]. If err is nil, the
-// caller is responsible for closing the resulting [io.ReadCloser].
-func (c *Cache) openFile(f File) (r io.ReadCloser, err error) {
+// openFile tries to load [FileArtifact] from [Cache], and if that fails,
+// obtains it via [FileArtifact.Cure] instead. Notably, it does not cure
+// [FileArtifact] to the filesystem. If err is nil, the caller is responsible
+// for closing the resulting [io.ReadCloser].
+func (c *Cache) openFile(f FileArtifact) (r io.ReadCloser, err error) {
 	if kc, ok := f.(KnownChecksum); ok {
 		c.checksumMu.RLock()
 		r, err = os.Open(c.base.Append(
@@ -943,11 +986,7 @@
 		}
 		}()
 	}
-	var data []byte
-	if data, err = f.Cure(c.ctx); err != nil {
-		return
-	}
-	r = io.NopCloser(bytes.NewReader(data))
+	return f.Cure(&RContext{c})
 	}
 	return
 }
@@ -1102,6 +1141,14 @@ func (c *Cache) Cure(a Artifact) (
 	checksum unique.Handle[Checksum],
 	err error,
 ) {
+	select {
+	case <-c.ctx.Done():
+		err = c.ctx.Err()
+		return
+	default:
+	}
 	if c.threshold > 0 {
 		var n uintptr
 		for range Flood(a) {
@@ -1114,7 +1161,7 @@ func (c *Cache) Cure(a Artifact) (
 		c.msg.Verbosef("visited %d artifacts", n)
 	}
-	return c.cure(a)
+	return c.cure(a, true)
 }
 // CureError wraps a non-nil error returned attempting to cure an [Artifact].
@@ -1187,8 +1234,133 @@ func (e *DependencyCureError) Error() string {
 	return buf.String()
 }
+// enterCure must be called before entering an [Artifact] implementation.
+func (c *Cache) enterCure(a Artifact, curesExempt bool) error {
+	if a.IsExclusive() {
+		c.exclMu.Lock()
+	}
+	if curesExempt {
+		return nil
+	}
+	select {
+	case c.cures <- struct{}{}:
+		return nil
+	case <-c.ctx.Done():
+		if a.IsExclusive() {
+			c.exclMu.Unlock()
+		}
+		return c.ctx.Err()
+	}
+}
+// exitCure must be called after exiting an [Artifact] implementation.
+func (c *Cache) exitCure(a Artifact, curesExempt bool) {
+	if a.IsExclusive() {
+		c.exclMu.Unlock()
+	}
+	if curesExempt {
+		return
+	}
+	<-c.cures
+}
+// getWriter is like [bufio.NewWriter] but for bufioPool.
+func (c *Cache) getWriter(w io.Writer) *bufio.Writer {
+	bw := c.bufioPool.Get().(*bufio.Writer)
+	bw.Reset(w)
+	return bw
+}
+// measuredReader implements [io.ReadCloser] and measures the checksum during
+// Close. If the underlying reader is not read to EOF, Close blocks until all
+// remaining data is consumed and validated.
+type measuredReader struct {
+	// Underlying reader. Never exposed directly.
+	r io.ReadCloser
+	// For validating checksum. Never exposed directly.
+	h hash.Hash
+	// Buffers writes to h, initialised by [Cache]. Never exposed directly.
+	hbw *bufio.Writer
+	// Expected checksum, compared during Close.
+	want unique.Handle[Checksum]
+	// For accessing free lists.
+	c *Cache
+	// Set up via [io.TeeReader] by [Cache].
+	io.Reader
+}
+// Close reads the underlying [io.ReadCloser] to EOF, closes it and measures its
+// outcome. It returns a [ChecksumMismatchError] for an unexpected checksum.
+func (mr *measuredReader) Close() (err error) {
+	if mr.hbw == nil || mr.Reader == nil {
+		return os.ErrInvalid
+	}
+	err = mr.hbw.Flush()
+	mr.c.putWriter(mr.hbw)
+	mr.hbw, mr.Reader = nil, nil
+	if err != nil {
+		_ = mr.r.Close()
+		return
+	}
+	var n int64
+	if n, err = io.Copy(mr.h, mr.r); err != nil {
+		_ = mr.r.Close()
+		return
+	}
+	if n > 0 {
+		mr.c.msg.Verbosef("missed %d bytes on measured reader", n)
+	}
+	if err = mr.r.Close(); err != nil {
+		return
+	}
+	buf := mr.c.getIdentBuf()
+	mr.h.Sum(buf[:0])
+	if got := Checksum(buf[:]); got != mr.want.Value() {
+		err = &ChecksumMismatchError{
+			Got:  got,
+			Want: mr.want.Value(),
+		}
+	}
+	mr.c.putIdentBuf(buf)
+	return
+}
// newMeasuredReader implements [RContext.NewMeasuredReader].
func (c *Cache) newMeasuredReader(
r io.ReadCloser,
checksum unique.Handle[Checksum],
) io.ReadCloser {
mr := measuredReader{r: r, h: sha512.New384(), want: checksum, c: c}
mr.hbw = c.getWriter(mr.h)
mr.Reader = io.TeeReader(r, mr.hbw)
return &mr
}
// NewMeasuredReader returns an [io.ReadCloser] implementing behaviour required
// by [FileArtifact]. The resulting [io.ReadCloser] holds a buffer originating
// from [Cache] and must be closed to return this buffer.
func (r *RContext) NewMeasuredReader(
rc io.ReadCloser,
checksum unique.Handle[Checksum],
) io.ReadCloser {
return r.cache.newMeasuredReader(rc, checksum)
}
// putWriter adds bw to bufioPool.
func (c *Cache) putWriter(bw *bufio.Writer) { c.bufioPool.Put(bw) }
// cure implements Cure without checking the full dependency graph.
-func (c *Cache) cure(a Artifact) (
+func (c *Cache) cure(a Artifact, curesExempt bool) (
    pathname *check.Absolute,
    checksum unique.Handle[Checksum],
    err error,
@@ -1283,8 +1455,8 @@ func (c *Cache) cure(a Artifact) (
    }()
    }
-   // cure File outside type switch to skip TContext initialisation
-   if f, ok := a.(File); ok {
+   // cure FileArtifact outside type switch to skip TContext initialisation
+   if f, ok := a.(FileArtifact); ok {
        if checksumFi != nil {
            if !checksumFi.Mode().IsRegular() {
                // unreachable
@@ -1293,63 +1465,96 @@ func (c *Cache) cure(a Artifact) (
                return
            }
-       var data []byte
-       data, err = f.Cure(c.ctx)
-       if err != nil {
+       work := c.base.Append(dirWork, ids)
+       var w *os.File
+       if w, err = os.OpenFile(
+           work.String(),
+           os.O_CREATE|os.O_EXCL|os.O_WRONLY,
+           0400,
+       ); err != nil {
            return
        }
defer func() {
closeErr := w.Close()
if err == nil {
err = closeErr
}
-       if checksumPathname == nil {
+       removeErr := os.Remove(work.String())
if err == nil && !errors.Is(removeErr, os.ErrNotExist) {
err = removeErr
}
}()
var r io.ReadCloser
if err = c.enterCure(a, curesExempt); err != nil {
return
}
r, err = f.Cure(&RContext{c})
if err == nil {
if checksumPathname == nil || c.IsStrict() {
            h := sha512.New384()
-           h.Write(data)
+           hbw := c.getWriter(h)
_, err = io.Copy(w, io.TeeReader(r, hbw))
flushErr := hbw.Flush()
c.putWriter(hbw)
if err == nil {
err = flushErr
}
if err == nil {
            buf := c.getIdentBuf()
            h.Sum(buf[:0])
+           if checksumPathname == nil {
                checksum = unique.Make(Checksum(buf[:]))
                checksums = Encode(Checksum(buf[:]))
-               c.putIdentBuf(buf)
-               checksumPathname = c.base.Append(
-                   dirChecksum,
-                   checksums,
-               )
            } else if c.IsStrict() {
-               h := sha512.New384()
-               h.Write(data)
-               if got := Checksum(h.Sum(nil)); got != checksum.Value() {
+               if got := Checksum(buf[:]); got != checksum.Value() {
                    err = &ChecksumMismatchError{
                        Got:  got,
                        Want: checksum.Value(),
                    }
-                   return
                }
            }
c.putIdentBuf(buf)
if checksumPathname == nil {
checksumPathname = c.base.Append(
dirChecksum,
checksums,
)
}
}
} else {
_, err = io.Copy(w, r)
}
closeErr := r.Close()
if err == nil {
err = closeErr
}
}
c.exitCure(a, curesExempt)
if err != nil {
return
}
        c.checksumMu.Lock()
-       var w *os.File
-       w, err = os.OpenFile(
-           checksumPathname.String(),
-           os.O_CREATE|os.O_EXCL|os.O_WRONLY,
-           0400,
-       )
-       if err != nil {
+       if err = os.Rename(
+           work.String(),
+           checksumPathname.String(),
+       ); err != nil {
            c.checksumMu.Unlock()
-           if errors.Is(err, os.ErrExist) {
-               err = nil
-           }
            return
        }
-       _, err = w.Write(data)
-       closeErr := w.Close()
        timeErr := zeroTimes(checksumPathname.String())
        c.checksumMu.Unlock()
        if err == nil {
            err = timeErr
        }
-       if err == nil {
-           err = closeErr
-       }
        return
    }
@@ -1365,7 +1570,12 @@ func (c *Cache) cure(a Artifact) (
    switch ca := a.(type) {
    case TrivialArtifact:
        defer t.destroy(&err)
-       if err = ca.Cure(&t); err != nil {
+       if err = c.enterCure(a, curesExempt); err != nil {
+           return
+       }
+       err = ca.Cure(&t)
+       c.exitCure(a, curesExempt)
+       if err != nil {
            return
        }
        break
@@ -1381,14 +1591,7 @@ func (c *Cache) cure(a Artifact) (
    var errsMu sync.Mutex
    for i, d := range deps {
        pending := pendingArtifactDep{d, &res[i], &errs, &errsMu, &wg}
-       select {
-       case c.cureDep <- &pending:
-           break
-       case <-c.ctx.Done():
-           err = c.ctx.Err()
-           return
-       }
+       go pending.cure(c)
    }
    wg.Wait()
@@ -1401,7 +1604,12 @@ func (c *Cache) cure(a Artifact) (
        }
        defer f.destroy(&err)
-       if err = ca.Cure(&f); err != nil {
+       if err = c.enterCure(a, curesExempt); err != nil {
+           return
+       }
+       err = ca.Cure(&f)
+       c.exitCure(a, curesExempt)
+       if err != nil {
            return
        }
        break
@@ -1486,7 +1694,7 @@ func (pending *pendingArtifactDep) cure(c *Cache) {
    defer pending.Done()
    var err error
-   pending.resP.pathname, pending.resP.checksum, err = c.cure(pending.a)
+   pending.resP.pathname, pending.resP.checksum, err = c.cure(pending.a, false)
    if err == nil {
        return
    }
@@ -1501,6 +1709,7 @@ func (c *Cache) Close() {
    c.closeOnce.Do(func() {
        c.cancel()
        c.wg.Wait()
+       close(c.cures)
        c.unlock()
    })
}
@@ -1536,6 +1745,10 @@ func open(
    base *check.Absolute,
    lock bool,
) (*Cache, error) {
+   if cures < 1 {
+       cures = runtime.NumCPU()
+   }
    for _, name := range []string{
        dirIdentifier,
        dirChecksum,
@@ -1548,6 +1761,8 @@ func open(
    }

    c := Cache{
+       cures: make(chan struct{}, cures),
        msg:  msg,
        base: base,
@@ -1556,9 +1771,8 @@ func open(
        identPending: make(map[unique.Handle[ID]]<-chan struct{}),
    }
    c.ctx, c.cancel = context.WithCancel(ctx)
-   cureDep := make(chan *pendingArtifactDep, cures)
-   c.cureDep = cureDep
    c.identPool.New = func() any { return new(extIdent) }
+   c.bufioPool.New = func() any { return new(bufio.Writer) }
    if lock || !testing.Testing() {
        if unlock, err := lockedfile.MutexAt(
@@ -1572,23 +1786,5 @@ func open(
        c.unlock = func() {}
    }
-   if cures < 1 {
-       cures = runtime.NumCPU()
-   }
-   for i := 0; i < cures; i++ {
-       c.wg.Go(func() {
-           for {
-               select {
-               case <-c.ctx.Done():
-                   return
-               case pending := <-cureDep:
-                   pending.cure(&c)
-                   break
-               }
-           }
-       })
-   }
    return &c, nil
}


@@ -47,10 +47,10 @@ type overrideIdent struct {
func (a overrideIdent) ID() pkg.ID { return a.id }

-// overrideIdentFile overrides the ID method of [File].
+// overrideIdentFile overrides the ID method of [FileArtifact].
type overrideIdentFile struct {
    id pkg.ID
-   pkg.File
+   pkg.FileArtifact
}

func (a overrideIdentFile) ID() pkg.ID { return a.id }
@@ -61,10 +61,10 @@ type knownIdentArtifact interface {
    pkg.TrivialArtifact
}

-// A knownIdentFile implements [pkg.KnownIdent] and [File]
+// A knownIdentFile implements [pkg.KnownIdent] and [FileArtifact]
type knownIdentFile interface {
    pkg.KnownIdent
-   pkg.File
+   pkg.FileArtifact
}

// overrideChecksum overrides the Checksum method of [Artifact].
@@ -75,7 +75,7 @@ type overrideChecksum struct {
func (a overrideChecksum) Checksum() pkg.Checksum { return a.checksum }

-// overrideChecksumFile overrides the Checksum method of [File].
+// overrideChecksumFile overrides the Checksum method of [FileArtifact].
type overrideChecksumFile struct {
    checksum pkg.Checksum
    knownIdentFile
@@ -96,12 +96,14 @@ func (a *stubArtifact) Kind() pkg.Kind { return a.kind }
func (a *stubArtifact) Params(ctx *pkg.IContext) { ctx.GetHash().Write(a.params) }
func (a *stubArtifact) Dependencies() []pkg.Artifact { return a.deps }
func (a *stubArtifact) Cure(t *pkg.TContext) error { return a.cure(t) }
+func (*stubArtifact) IsExclusive() bool { return false }

// A stubArtifactF implements [FloodArtifact] with hardcoded behaviour.
type stubArtifactF struct {
    kind   pkg.Kind
    params []byte
    deps   []pkg.Artifact
+   excl   bool
    cure   func(f *pkg.FContext) error
}
@@ -110,8 +112,9 @@ func (a *stubArtifactF) Kind() pkg.Kind { return a.kind }
func (a *stubArtifactF) Params(ctx *pkg.IContext) { ctx.GetHash().Write(a.params) }
func (a *stubArtifactF) Dependencies() []pkg.Artifact { return a.deps }
func (a *stubArtifactF) Cure(f *pkg.FContext) error { return a.cure(f) }
+func (a *stubArtifactF) IsExclusive() bool { return a.excl }

-// A stubFile implements [File] with hardcoded behaviour.
+// A stubFile implements [FileArtifact] with hardcoded behaviour.
type stubFile struct {
    data []byte
    err  error
@@ -119,7 +122,9 @@ type stubFile struct {
    stubArtifact
}

-func (a *stubFile) Cure(context.Context) ([]byte, error) { return a.data, a.err }
+func (a *stubFile) Cure(*pkg.RContext) (io.ReadCloser, error) {
+   return io.NopCloser(bytes.NewReader(a.data)), a.err
+}

// newStubFile returns an implementation of [pkg.File] with hardcoded behaviour.
func newStubFile(
@@ -128,7 +133,7 @@ func newStubFile(
    sum *pkg.Checksum,
    data []byte,
    err error,
-) pkg.File {
+) pkg.FileArtifact {
    f := overrideIdentFile{id, &stubFile{data, err, stubArtifact{
        kind,
        nil,
@@ -283,7 +288,7 @@ func checkWithCache(t *testing.T, testCases []cacheTestCase) {
    msg.SwapVerbose(testing.Verbose())
    var scrubFunc func() error // scrub after hashing
-   if c, err := pkg.Open(t.Context(), msg, 0, base); err != nil {
+   if c, err := pkg.Open(t.Context(), msg, 1<<4, base); err != nil {
        t.Fatalf("Open: error = %v", err)
    } else {
        t.Cleanup(c.Close)
@@ -561,6 +566,10 @@ func TestCache(t *testing.T) {
        stub.UniqueError
    }{UniqueError: 0xbad},
    )},
+   cure: func(f *pkg.FContext) error {
+       panic("attempting to cure impossible artifact")
+   },
}, nil, pkg.Checksum{}, &pkg.DependencyCureError{
    {
        Ident: unique.Make(pkg.ID{0xff, 3}),


@@ -24,7 +24,7 @@ const (
    TarBzip2
)

-// A tarArtifact is an [Artifact] unpacking a tarball backed by a [File].
+// A tarArtifact is an [Artifact] unpacking a tarball backed by a [FileArtifact].
type tarArtifact struct {
    // Caller-supplied backing tarball.
    f Artifact
@@ -80,6 +80,9 @@ func (a *tarArtifact) Dependencies() []Artifact {
    return []Artifact{a.f}
}

+// IsExclusive returns false: decompressor and tar reader are fully sequential.
+func (a *tarArtifact) IsExclusive() bool { return false }

// A DisallowedTypeflagError describes a disallowed typeflag encountered while
// unpacking a tarball.
type DisallowedTypeflagError byte
@@ -97,12 +100,11 @@ func (a *tarArtifact) Cure(t *TContext) (err error) {
    }
    defer func(f io.ReadCloser) {
-       closeErr := tr.Close()
        if err == nil {
-           err = closeErr
+           err = tr.Close()
        }
-       closeErr = f.Close()
+       closeErr := f.Close()
        if err == nil {
            err = closeErr
        }
@@ -175,7 +177,10 @@ func (a *tarArtifact) Cure(t *TContext) (err error) {
        break
    case tar.TypeLink:
-       if err = os.Link(header.Linkname, pathname.String()); err != nil {
+       if err = os.Link(
+           temp.Append(header.Linkname).String(),
+           pathname.String(),
+       ); err != nil {
            return
        }
        break
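The TypeLink change above joins the link name with the unpack root: tar stores hardlink targets relative to the archive root, so linking against the bare `header.Linkname` would resolve against the process working directory instead of the extracted tree. A trivial sketch of the join with a hypothetical `linkTarget` helper:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// linkTarget resolves a tar hardlink target against the unpack root, the
// way the fixed Cure passes temp.Append(header.Linkname) to os.Link.
func linkTarget(root, linkname string) string {
	return filepath.Join(root, linkname)
}

func main() {
	// the previously-extracted file the hardlink should point at
	fmt.Println(linkTarget("/tmp/unpack", "usr/bin/busybox"))
}
```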

internal/rosa/acl.go (new file, 98 lines)

@@ -0,0 +1,98 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newAttr() pkg.Artifact {
const (
version = "2.5.2"
checksum = "YWEphrz6vg1sUMmHHVr1CRo53pFXRhq_pjN-AlG8UgwZK1y6m7zuDhxqJhD0SV0l"
)
return t.New("attr-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Perl),
}, nil, nil, `
ln -s ../../system/bin/perl /usr/bin
cd "$(mktemp -d)"
/usr/src/attr/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("attr"), true, t.NewPatchedSource(
"attr", version, pkg.NewHTTPGetTar(
nil,
"https://download.savannah.nongnu.org/releases/attr/"+
"attr-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
), true, [2]string{"libgen-basename", `From 8a80d895dfd779373363c3a4b62ecce5a549efb2 Mon Sep 17 00:00:00 2001
From: "Haelwenn (lanodan) Monnier" <contact@hacktivis.me>
Date: Sat, 30 Mar 2024 10:17:10 +0100
Subject: tools/attr.c: Add missing libgen.h include for basename(3)
Fixes compilation issue with musl and modern C99 compilers.
See: https://bugs.gentoo.org/926294
---
tools/attr.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/attr.c b/tools/attr.c
index f12e4af..6a3c1e9 100644
--- a/tools/attr.c
+++ b/tools/attr.c
@@ -28,6 +28,7 @@
#include <errno.h>
#include <string.h>
#include <locale.h>
+#include <libgen.h>
#include <attr/attributes.h>
--
cgit v1.1`}, [2]string{"musl-errno", `diff --git a/test/attr.test b/test/attr.test
index 6ce2f9b..e9bde92 100644
--- a/test/attr.test
+++ b/test/attr.test
@@ -11,7 +11,7 @@ Try various valid and invalid names
$ touch f
$ setfattr -n user -v value f
- > setfattr: f: Operation not supported
+ > setfattr: f: Not supported
$ setfattr -n user. -v value f
> setfattr: f: Invalid argument
`},
)))
}
func init() { artifactsF[Attr] = Toolchain.newAttr }
func (t Toolchain) newACL() pkg.Artifact {
const (
version = "2.3.2"
checksum = "-fY5nwH4K8ZHBCRXrzLdguPkqjKI6WIiGu4dBtrZ1o0t6AIU73w8wwJz_UyjIS0P"
)
return t.New("acl-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Attr),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/acl/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)"
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("acl"), true, pkg.NewHTTPGetTar(
nil,
"https://download.savannah.nongnu.org/releases/acl/"+
"acl-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[ACL] = Toolchain.newACL }

internal/rosa/all.go (new file, 70 lines)

@@ -0,0 +1,70 @@
package rosa
import (
"sync"
"hakurei.app/internal/pkg"
)
// PArtifact is a lazily-initialised [pkg.Artifact] preset.
type PArtifact int
const (
ACL PArtifact = iota
Attr
Autoconf
Bash
Busybox
CMake
Coreutils
Diffutils
Gettext
Git
Go
Gperf
Hakurei
KernelHeaders
LibXau
Libexpat
Libffi
Libgd
Libseccomp
Libxml2
M4
Make
Meson
Ninja
Patch
Perl
PkgConfig
Python
Rsync
Setuptools
Wayland
WaylandProtocols
XCB
XCBProto
Xproto
Zlib
// _presetEnd is the total number of presets and does not denote a preset.
_presetEnd
)
var (
// artifactsF is an array of functions for the result of [PArtifact].
artifactsF [_presetEnd]func(t Toolchain) pkg.Artifact
// artifacts stores the result of artifactsF.
artifacts [_toolchainEnd][len(artifactsF)]pkg.Artifact
// artifactsOnce is for lazy initialisation of artifacts.
artifactsOnce [_toolchainEnd][len(artifactsF)]sync.Once
)
// Load returns the resulting [pkg.Artifact] of [PArtifact].
func (t Toolchain) Load(p PArtifact) pkg.Artifact {
artifactsOnce[t][p].Do(func() {
artifacts[t][p] = artifactsF[p](t)
})
return artifacts[t][p]
}
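Load builds each preset at most once per Toolchain: a fixed table of constructor functions, with results cached behind a parallel array of sync.Once values. A minimal standalone sketch of the pattern, reduced to one dimension; the names (`load`, `ctors`) are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

const presetCount = 3

var (
	buildCount int // counts constructor invocations

	// ctors mirrors artifactsF: one constructor per preset index.
	ctors = [presetCount]func() string{
		func() string { buildCount++; return "make" },
		func() string { buildCount++; return "perl" },
		func() string { buildCount++; return "git" },
	}

	// built and once mirror artifacts and artifactsOnce: each slot is
	// initialised lazily, exactly once, on first load.
	built [presetCount]string
	once  [presetCount]sync.Once
)

func load(p int) string {
	once[p].Do(func() { built[p] = ctors[p]() })
	return built[p]
}

func main() {
	// the second load reuses the cached result; the constructor ran once
	fmt.Println(load(2), load(2), buildCount)
}
```

Keying the Once array by (toolchain, preset) as the real code does gives each Toolchain its own lazily-built artifact graph while keeping Load safe for concurrent callers.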


@@ -16,7 +16,7 @@ import (
// busyboxBin is a busybox binary distribution installed under bin/busybox.
type busyboxBin struct {
    // Underlying busybox binary.
-   bin pkg.File
+   bin pkg.FileArtifact
}

// Kind returns the hardcoded [pkg.Kind] value.
@@ -25,6 +25,9 @@ func (a busyboxBin) Kind() pkg.Kind { return kindBusyboxBin }
// Params is a noop.
func (a busyboxBin) Params(*pkg.IContext) {}

+// IsExclusive returns false: Cure performs a trivial filesystem write.
+func (busyboxBin) IsExclusive() bool { return false }

// Dependencies returns the underlying busybox [pkg.File].
func (a busyboxBin) Dependencies() []pkg.Artifact {
    return []pkg.Artifact{a.bin}
@@ -80,7 +83,8 @@ func newBusyboxBin() pkg.Artifact {
    checksum = "L7OBIsPu9enNHn7FqpBT1kOg_mCLNmetSeNMA3i4Y60Z5jTgnlX3qX3zcQtLx5AB"
)
return pkg.NewExec(
-   "busybox-bin-"+version, nil, pkg.ExecTimeoutMax, fhs.AbsRoot, []string{
+   "busybox-bin-"+version, nil, pkg.ExecTimeoutMax, false,
+   fhs.AbsRoot, []string{
        "PATH=/system/bin",
    },
    AbsSystem.Append("bin", "busybox"),
@@ -100,26 +104,21 @@ func newBusyboxBin() pkg.Artifact {
    )
}

-// NewBusybox returns a [pkg.Artifact] containing a dynamically linked busybox
-// installation usable within the [Toolchain] it is compiled against.
-func (t Toolchain) NewBusybox() pkg.Artifact {
+func (t Toolchain) newBusybox() pkg.Artifact {
    const (
        version = "1.37.0"
        checksum = "Ial94Tnt7esJ_YEeb0AxunVL6MGYFyOw7Rtu2o87CXCi1TLrc6rlznVsN1rZk7it"
    )
-   extra := []pkg.Artifact{
-       t.NewMake(),
-       t.NewKernelHeaders(),
-   }
    var env []string
    if t == toolchainStage3 {
-       extra = nil
        env = append(env, "EXTRA_LDFLAGS=-static")
    }
-   return t.New("busybox-"+version, extra, nil, slices.Concat([]string{
+   return t.New("busybox-"+version, false, stage3Concat(t, []pkg.Artifact{},
+       t.Load(Make),
+       t.Load(KernelHeaders),
+   ), nil, slices.Concat([]string{
        "ROSA_BUSYBOX_ENABLE=" + strings.Join([]string{
            "STATIC",
            "PIE",
@@ -297,7 +296,6 @@ config_disable() {
cat > /bin/gcc << EOF
exec clang \
    -Wno-ignored-optimization-argument \
-   ${ROSA_CFLAGS} \
    ${LDFLAGS} \
    \$@
EOF
@@ -358,3 +356,4 @@ index 64e752f4b..40f5ba7f7 100644
}`)),
    ))
}
+func init() { artifactsF[Busybox] = Toolchain.newBusybox }


@@ -8,15 +8,14 @@ import (
    "hakurei.app/internal/pkg"
)

-// NewCMake returns a [pkg.Artifact] containing an installation of CMake.
-func (t Toolchain) NewCMake() pkg.Artifact {
+func (t Toolchain) newCMake() pkg.Artifact {
    const (
        version = "4.2.1"
        checksum = "Y3OdbMsob6Xk2y1DCME6z4Fryb5_TkFD7knRT8dTNIRtSqbiCJyyDN9AxggN_I75"
    )
-   return t.New("cmake-"+version, []pkg.Artifact{
-       t.NewMake(),
-       t.NewKernelHeaders(),
+   return t.New("cmake-"+version, false, []pkg.Artifact{
+       t.Load(Make),
+       t.Load(KernelHeaders),
    }, nil, nil, `
# expected to be writable in the copy made during bootstrap
chmod -R +w /usr/src/cmake/Tests
@@ -37,6 +36,7 @@ make DESTDIR=/work install
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[CMake] = Toolchain.newCMake }

// CMakeAttr holds the project-specific attributes that will be applied to a new
// [pkg.Artifact] compiled via CMake.
@@ -59,6 +59,9 @@ type CMakeAttr struct {
    // Override the default installation prefix [AbsSystem].
    Prefix *check.Absolute

+   // Return an exclusive artifact.
+   Exclusive bool
}

// NewViaCMake returns a [pkg.Artifact] for compiling and installing via CMake.
@@ -81,14 +84,6 @@ func (t Toolchain) NewViaCMake(
        panic("CACHE must be non-empty")
    }

-   cmakeExtras := []pkg.Artifact{
-       t.NewCMake(),
-       t.NewNinja(),
-   }
-   if t == toolchainStage3 {
-       cmakeExtras = nil
-   }
    scriptEarly := attr.ScriptEarly
    if attr.Writable {
        scriptEarly = `
@@ -102,9 +97,9 @@ chmod -R +w "${ROSA_SOURCE}"
    }
    sourcePath := AbsUsrSrc.Append(name)

-   return t.New(name+"-"+variant+"-"+version, slices.Concat(
-       attr.Extra,
-       cmakeExtras,
+   return t.New(name+"-"+variant+"-"+version, attr.Exclusive, stage3Concat(t, attr.Extra,
+       t.Load(CMake),
+       t.Load(Ninja),
    ), nil, slices.Concat([]string{
        "ROSA_SOURCE=" + sourcePath.String(),
        "ROSA_CMAKE_SOURCE=" + sourcePath.Append(attr.Append...).String(),


@@ -89,6 +89,9 @@ func (cureEtc) Kind() pkg.Kind { return kindEtc }
// Params is a noop.
func (cureEtc) Params(*pkg.IContext) {}

+// IsExclusive returns false: Cure performs a few trivial filesystem writes.
+func (cureEtc) IsExclusive() bool { return false }

// Dependencies returns a slice containing the backing iana-etc release.
func (a cureEtc) Dependencies() []pkg.Artifact {
    if a.iana != nil {
@@ -98,7 +101,12 @@ func (a cureEtc) Dependencies() []pkg.Artifact {
}

// String returns a hardcoded reporting name.
-func (cureEtc) String() string { return "cure-etc" }
+func (a cureEtc) String() string {
+   if a.iana == nil {
+       return "cure-etc-minimal"
+   }
+   return "cure-etc"
+}

// newIANAEtc returns an unpacked iana-etc release.
func newIANAEtc() pkg.Artifact {


@@ -4,25 +4,20 @@ import (
    "hakurei.app/internal/pkg"
)

-// NewGit returns a [pkg.Artifact] containing an installation of git.
-func (t Toolchain) NewGit() pkg.Artifact {
+func (t Toolchain) newGit() pkg.Artifact {
    const (
        version = "2.52.0"
        checksum = "uH3J1HAN_c6PfGNJd2OBwW4zo36n71wmkdvityYnrh8Ak0D1IifiAvEWz9Vi9DmS"
    )
-   extra := []pkg.Artifact{
-       t.NewMake(),
-       t.NewPerl(),
-       t.NewM4(),
-       t.NewAutoconf(),
-       t.NewGettext(),
-       t.NewZlib(),
-   }
-   if t == toolchainStage3 {
-       extra = nil
-   }
-   return t.New("git-"+version, extra, nil, nil, `
+   return t.New("git-"+version, false, stage3Concat(t, []pkg.Artifact{},
+       t.Load(Make),
+       t.Load(Perl),
+       t.Load(M4),
+       t.Load(Autoconf),
+       t.Load(Gettext),
+       t.Load(Zlib),
+   ), nil, nil, `
chmod -R +w /usr/src/git && cd /usr/src/git
make configure
./configure --prefix=/system
@@ -35,3 +30,4 @@ make DESTDIR=/work install
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[Git] = Toolchain.newGit }


@@ -2,13 +2,12 @@ package rosa
import "hakurei.app/internal/pkg"

-// NewMake returns a [pkg.Artifact] containing an installation of GNU Make.
-func (t Toolchain) NewMake() pkg.Artifact {
+func (t Toolchain) newMake() pkg.Artifact {
    const (
        version = "4.4.1"
        checksum = "YS_B07ZcAy9PbaK5_vKGj64SrxO2VMpnMKfc9I0Q9IC1rn0RwOH7802pJoj2Mq4a"
    )
-   return t.New("make-"+version, nil, nil, nil, `
+   return t.New("make-"+version, false, nil, nil, nil, `
cd "$(mktemp -d)"
/usr/src/make/configure \
    --prefix=/system \
@@ -23,15 +22,15 @@ cd "$(mktemp -d)"
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[Make] = Toolchain.newMake }
-// NewM4 returns a [pkg.Artifact] containing an installation of GNU M4.
-func (t Toolchain) NewM4() pkg.Artifact {
+func (t Toolchain) newM4() pkg.Artifact {
    const (
        version = "1.4.20"
        checksum = "RT0_L3m4Co86bVBY3lCFAEs040yI1WdeNmRylFpah8IZovTm6O4wI7qiHJN3qsW9"
    )
-   return t.New("m4-"+version, []pkg.Artifact{
-       t.NewMake(),
+   return t.New("m4-"+version, false, []pkg.Artifact{
+       t.Load(Make),
    }, nil, nil, `
cd /usr/src/m4
chmod +w tests/test-c32ispunct.sh && echo '#!/bin/sh' > tests/test-c32ispunct.sh
@@ -49,23 +48,26 @@ make DESTDIR=/work install
        pkg.TarBzip2,
    )))
}
+func init() { artifactsF[M4] = Toolchain.newM4 }
-// NewAutoconf returns a [pkg.Artifact] containing an installation of GNU Autoconf.
-func (t Toolchain) NewAutoconf() pkg.Artifact {
+func (t Toolchain) newAutoconf() pkg.Artifact {
    const (
        version = "2.72"
        checksum = "-c5blYkC-xLDer3TWEqJTyh1RLbOd1c5dnRLKsDnIrg_wWNOLBpaqMY8FvmUFJ33"
    )
-   return t.New("autoconf-"+version, []pkg.Artifact{
-       t.NewMake(),
-       t.NewM4(),
-       t.NewPerl(),
+   return t.New("autoconf-"+version, false, []pkg.Artifact{
+       t.Load(Make),
+       t.Load(M4),
+       t.Load(Perl),
    }, nil, nil, `
cd "$(mktemp -d)"
/usr/src/autoconf/configure \
    --prefix=/system \
    --build="${ROSA_TRIPLE}"
-make "-j$(nproc)" check
+make \
+   "-j$(nproc)" \
+   TESTSUITEFLAGS="-j$(nproc)" \
+   check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("autoconf"), false, pkg.NewHTTPGetTar(
    nil,
@@ -74,15 +76,15 @@ make DESTDIR=/work install
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[Autoconf] = Toolchain.newAutoconf }
-// NewGettext returns a [pkg.Artifact] containing an installation of GNU gettext.
-func (t Toolchain) NewGettext() pkg.Artifact {
+func (t Toolchain) newGettext() pkg.Artifact {
    const (
        version = "0.26"
        checksum = "IMu7yDZX7xL5UO1ZxXc-iBMbY9LLEUlOroyuSlHMZwg9MKtxG7HIm8F2LheDua0y"
    )
-   return t.New("gettext-"+version, []pkg.Artifact{
-       t.NewMake(),
+   return t.New("gettext-"+version, false, []pkg.Artifact{
+       t.Load(Make),
    }, nil, nil, `
cd /usr/src/gettext
test_disable() { chmod +w "$2" && echo "$1" > "$2"; }
@@ -110,15 +112,15 @@ make DESTDIR=/work install
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[Gettext] = Toolchain.newGettext }
-// NewDiffutils returns a [pkg.Artifact] containing an installation of GNU diffutils.
-func (t Toolchain) NewDiffutils() pkg.Artifact {
+func (t Toolchain) newDiffutils() pkg.Artifact {
    const (
        version = "3.12"
        checksum = "9J5VAq5oA7eqwzS1Yvw-l3G5o-TccUrNQR3PvyB_lgdryOFAfxtvQfKfhdpquE44"
    )
-   return t.New("diffutils-"+version, []pkg.Artifact{
-       t.NewMake(),
+   return t.New("diffutils-"+version, false, []pkg.Artifact{
+       t.Load(Make),
    }, nil, nil, `
cd /usr/src/diffutils
test_disable() { chmod +w "$2" && echo "$1" > "$2"; }
@@ -139,15 +141,44 @@ make DESTDIR=/work install
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[Diffutils] = Toolchain.newDiffutils }
-// NewBash returns a [pkg.Artifact] containing an installation of GNU Bash.
-func (t Toolchain) NewBash() pkg.Artifact {
+func (t Toolchain) newPatch() pkg.Artifact {
+   const (
+       version = "2.8"
+       checksum = "MA0BQc662i8QYBD-DdGgyyfTwaeALZ1K0yusV9rAmNiIsQdX-69YC4t9JEGXZkeR"
+   )
+   return t.New("patch-"+version, false, []pkg.Artifact{
+       t.Load(Make),
+   }, nil, nil, `
+cd /usr/src/patch
+test_disable() { chmod +w "$2" && echo "$1" > "$2"; }
+test_disable '#!/bin/sh' tests/ed-style
+test_disable '#!/bin/sh' tests/need-filename
+cd "$(mktemp -d)"
+/usr/src/patch/configure \
+   --prefix=/system \
+   --build="${ROSA_TRIPLE}"
+make "-j$(nproc)" check
+make DESTDIR=/work install
+`, pkg.Path(AbsUsrSrc.Append("patch"), true, pkg.NewHTTPGetTar(
+   nil,
+   "https://ftp.gnu.org/gnu/patch/patch-"+version+".tar.gz",
+   mustDecode(checksum),
+   pkg.TarGzip,
+)))
+}
+func init() { artifactsF[Patch] = Toolchain.newPatch }
+
+func (t Toolchain) newBash() pkg.Artifact {
    const (
        version = "5.3"
        checksum = "4LQ_GRoB_ko-Ih8QPf_xRKA02xAm_TOxQgcJLmFDT6udUPxTAWrsj-ZNeuTusyDq"
    )
-   return t.New("bash-"+version, []pkg.Artifact{
-       t.NewMake(),
+   return t.New("bash-"+version, false, []pkg.Artifact{
+       t.Load(Make),
    }, nil, nil, `
cd "$(mktemp -d)"
/usr/src/bash/configure \
@@ -163,18 +194,18 @@ make DESTDIR=/work install
        pkg.TarGzip,
    )))
}
+func init() { artifactsF[Bash] = Toolchain.newBash }
func (t Toolchain) newCoreutils() pkg.Artifact {
const (
version = "9.9"
checksum = "B1_TaXj1j5aiVIcazLWu8Ix03wDV54uo2_iBry4qHG6Y-9bjDpUPlkNLmU_3Nvw6"
)
return t.New("coreutils-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Perl),
t.Load(KernelHeaders),
}, nil, nil, `
cd /usr/src/coreutils
test_disable() { chmod +w "$2" && echo "$1" > "$2"; }
@@ -194,3 +225,26 @@ make DESTDIR=/work install
pkg.TarGzip,
)))
}
func init() { artifactsF[Coreutils] = Toolchain.newCoreutils }
func (t Toolchain) newGperf() pkg.Artifact {
const (
version = "3.3"
checksum = "RtIy9pPb_Bb8-31J2Nw-rRGso2JlS-lDlVhuNYhqR7Nt4xM_nObznxAlBMnarJv7"
)
return t.New("gperf-"+version, false, []pkg.Artifact{
t.Load(Make),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/gperf/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}"
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("gperf"), true, pkg.NewHTTPGetTar(
nil, "https://ftp.gnu.org/pub/gnu/gperf/gperf-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[Gperf] = Toolchain.newGperf }


@@ -1,6 +1,7 @@
package rosa
import (
"runtime"
"slices"
"hakurei.app/internal/pkg"
@@ -9,14 +10,14 @@ import (
// newGoBootstrap returns the Go bootstrap toolchain.
func (t Toolchain) newGoBootstrap() pkg.Artifact {
const checksum = "8o9JL_ToiQKadCTb04nvBDkp8O1xiWOolAxVEqaTGodieNe4lOFEjlOxN3bwwe23"
return t.New("go1.4-bootstrap", false, []pkg.Artifact{
t.Load(Bash),
}, nil, []string{
"CGO_ENABLED=0",
}, `
mkdir -p /var/tmp
cp -r /usr/src/go /work
cd /work/go/src
chmod -R +w ..
ln -s ../system/bin/busybox /bin/pwd
@@ -31,8 +32,11 @@ rm \
syscall/creds_test.go \
net/multicast_test.go
./all.bash
cd /work/
mkdir system/
mv go/ system/
`, pkg.Path(AbsUsrSrc.Append("go"), false, pkg.NewHTTPGetTar(
nil, "https://dl.google.com/go/go1.4-bootstrap-20171003.tar.gz",
mustDecode(checksum),
pkg.TarGzip,
@@ -42,22 +46,30 @@ CC="${CC} ${LDFLAGS}" ./all.bash
// newGo returns a specific version of the Go toolchain.
func (t Toolchain) newGo(
version, checksum string,
env []string,
script string,
extra ...pkg.Artifact,
) pkg.Artifact {
return t.New("go"+version, false, slices.Concat([]pkg.Artifact{
t.Load(Bash),
}, extra), nil, slices.Concat([]string{
"CC=cc",
"GOCACHE=/tmp/gocache",
"GOROOT_BOOTSTRAP=/system/go",
"TMPDIR=/dev/shm/go",
}, env), `
mkdir /work/system "${TMPDIR}"
cp -r /usr/src/go /work/system
cd /work/system/go/src
chmod -R +w ..
`+script+`
./all.bash
mkdir /work/system/bin
ln -s \
../go/bin/go \
../go/bin/gofmt \
/work/system/bin
`, pkg.Path(AbsUsrSrc.Append("go"), false, pkg.NewHTTPGetTar(
nil, "https://go.dev/dl/go"+version+".src.tar.gz",
mustDecode(checksum),
@@ -65,28 +77,50 @@ sh make.bash
)))
}
func (t Toolchain) newGoLatest() pkg.Artifact {
go119 := t.newGo(
"1.19",
"9_e0aFHsIkVxWVGsp9T2RvvjOc3p4n9o9S8tkNe9Cvgzk_zI2FhRQB7ioQkeAAro",
[]string{"CGO_ENABLED=0"}, `
rm \
crypto/tls/handshake_client_test.go
`, t.newGoBootstrap(),
)
go121 := t.newGo(
"1.21.13",
"YtrDka402BOAEwywx03Vz4QlVwoBiguJHzG7PuythMCPHXS8CVMLvzmvgEbu4Tzu",
[]string{"CGO_ENABLED=0"}, `
sed -i \
's,/lib/ld-musl-`+linuxArch()+`.so.1,/system/bin/linker,' \
cmd/link/internal/`+runtime.GOARCH+`/obj.go
rm \
crypto/tls/handshake_client_test.go \
crypto/tls/handshake_server_test.go
`, go119,
)
go123 := t.newGo(
"1.23.12",
"wcI32bl1tkqbgcelGtGWPI4RtlEddd-PTd76Eb-k7nXA5LbE9yTNdIL9QSOOxMOs",
nil, `
sed -i \
's,/lib/ld-musl-`+linuxArch()+`.so.1,/system/bin/linker,' \
cmd/link/internal/`+runtime.GOARCH+`/obj.go
`, go121,
)
go125 := t.newGo(
"1.25.6",
"x0z430qoDvQbbw_fftjW0rh_GSoh0VJhPzttWk_0hj9yz9AKOjuwRMupF_Q0dbt7",
nil, `
sed -i \
's,/lib/ld-musl-`+linuxArch()+`.so.1,/system/bin/linker,' \
cmd/link/internal/`+runtime.GOARCH+`/obj.go
`, go123,
)
return go125
}
func init() { artifactsF[Go] = Toolchain.newGoLatest }

internal/rosa/hakurei.go Normal file

@@ -0,0 +1,83 @@
package rosa
import (
"hakurei.app/internal/pkg"
)
func (t Toolchain) newHakurei() pkg.Artifact {
const (
version = "0.3.3"
checksum = "iMN9qzDB000noZ6dOHh_aSdrhRZPopjyWHd0KFVjxjQLQstAOvLYZEZ74btlL0pu"
)
return t.New("hakurei-"+version, false, []pkg.Artifact{
t.Load(Go),
t.Load(PkgConfig),
t.Load(KernelHeaders),
t.Load(Libseccomp),
t.Load(ACL),
t.Load(Attr),
t.Load(Xproto),
t.Load(LibXau),
t.Load(XCBProto),
t.Load(XCB),
t.Load(Libffi),
t.Load(Libexpat),
t.Load(Libxml2),
t.Load(Wayland),
t.Load(WaylandProtocols),
}, nil, []string{
"GOCACHE=/tmp/gocache",
"CC=clang -O3 -Werror",
}, `
echo '# Building test helper (hostname).'
go build -v -o /bin/hostname /usr/src/hostname/main.go
echo
chmod -R +w /usr/src/hakurei
cd /usr/src/hakurei
mkdir -p /work/system/{bin,libexec/hakurei}
echo '# Building hakurei.'
go generate -v ./...
go build -trimpath -v -o /work/system/libexec/hakurei -ldflags="-s -w
-buildid=
-extldflags=-static
-X hakurei.app/internal/info.buildVersion='v`+version+`'
-X hakurei.app/internal/info.hakureiPath=/system/bin/hakurei
-X hakurei.app/internal/info.hsuPath=/system/bin/hsu
-X main.hakureiPath=/system/bin/hakurei" ./...
echo
echo '# Testing hakurei.'
go test -ldflags='-buildid= -extldflags=-static' ./...
echo
mv \
/work/system/libexec/hakurei/{hakurei,hpkg} \
/work/system/bin
`, pkg.Path(AbsUsrSrc.Append("hakurei"), true, pkg.NewHTTPGetTar(
nil, "https://git.gensokyo.uk/security/hakurei/archive/"+
"v"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)), pkg.Path(AbsUsrSrc.Append("hostname", "main.go"), false, pkg.NewFile(
"hostname.go",
[]byte(`
package main
import "os"
func main() {
if name, err := os.Hostname(); err != nil {
panic(err)
} else {
os.Stdout.WriteString(name)
}
}
`),
)))
}
func init() { artifactsF[Hakurei] = Toolchain.newHakurei }


@@ -8,6 +8,8 @@ import (
// newKernel is a helper for interacting with Kbuild.
func (t Toolchain) newKernel(
exclusive bool,
patches [][2]string,
script string,
extra ...pkg.Artifact,
) pkg.Artifact {
@@ -15,27 +17,28 @@ func (t Toolchain) newKernel(
version = "6.18.5"
checksum = "-V1e1WWl7HuePkmm84sSKF7nLuHfUs494uNMzMqXEyxcNE_PUE0FICL0oGWn44mM"
)
return t.New("kernel-"+version, exclusive, slices.Concat([]pkg.Artifact{
t.Load(Make),
}, extra), nil, nil, `
export LLVM=1
export HOSTLDFLAGS="${LDFLAGS}"
cd /usr/src/linux
`+script, pkg.Path(AbsUsrSrc.Append("linux"), true, t.NewPatchedSource(
"kernel", version, pkg.NewHTTPGetTar(
nil,
"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/"+
"snapshot/linux-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
), false, patches...,
)))
}
func (t Toolchain) newKernelHeaders() pkg.Artifact {
return t.newKernel(false, nil, `
make "-j$(nproc)" \
INSTALL_HDR_PATH=/work/system \
headers_install
`, t.Load(Rsync))
}
func init() { artifactsF[KernelHeaders] = Toolchain.newKernelHeaders }

internal/rosa/libexpat.go Normal file

@@ -0,0 +1,33 @@
package rosa
import (
"strings"
"hakurei.app/internal/pkg"
)
func (t Toolchain) newLibexpat() pkg.Artifact {
const (
version = "2.7.3"
checksum = "GmkoD23nRi9cMT0cgG1XRMrZWD82UcOMzkkvP1gkwSFWCBgeSXMuoLpa8-v8kxW-"
)
return t.New("libexpat-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Bash),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/libexpat/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("libexpat"), false, pkg.NewHTTPGetTar(
nil, "https://github.com/libexpat/libexpat/releases/download/"+
"R_"+strings.ReplaceAll(version, ".", "_")+"/"+
"expat-"+version+".tar.bz2",
mustDecode(checksum),
pkg.TarBzip2,
)))
}
func init() { artifactsF[Libexpat] = Toolchain.newLibexpat }


@@ -2,20 +2,20 @@ package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newLibffi() pkg.Artifact {
const (
version = "3.4.5"
checksum = "apIJzypF4rDudeRoI_n3K7N-zCeBLTbQlHRn9NSAZqdLAWA80mR0gXPTpHsL7oMl"
)
return t.New("libffi-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(KernelHeaders),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/libffi/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("libffi"), false, pkg.NewHTTPGetTar(
@@ -26,3 +26,4 @@ make DESTDIR=/work install
pkg.TarGzip,
)))
}
func init() { artifactsF[Libffi] = Toolchain.newLibffi }

internal/rosa/libgd.go Normal file

@@ -0,0 +1,33 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newLibgd() pkg.Artifact {
const (
version = "2.3.3"
checksum = "8T-sh1_FJT9K9aajgxzh8ot6vWIF-xxjcKAHvTak9MgGUcsFfzP8cAvvv44u2r36"
)
return t.New("libgd-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Zlib),
}, nil, []string{
"TMPDIR=/dev/shm/gd",
}, `
mkdir /dev/shm/gd
cd "$(mktemp -d)"
/usr/src/libgd/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("libgd"), true, pkg.NewHTTPGetTar(
nil, "https://github.com/libgd/libgd/releases/download/"+
"gd-"+version+"/libgd-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[Libgd] = Toolchain.newLibgd }


@@ -0,0 +1,36 @@
package rosa
import (
"hakurei.app/internal/pkg"
)
func (t Toolchain) newLibseccomp() pkg.Artifact {
const (
version = "2.6.0"
checksum = "mMu-iR71guPjFbb31u-YexBaanKE_nYPjPux-vuBiPfS_0kbwJdfCGlkofaUm-EY"
)
return t.New("libseccomp-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Bash),
t.Load(Gperf),
t.Load(KernelHeaders),
}, nil, nil, `
ln -s ../system/bin/bash /bin/bash
cd "$(mktemp -d)"
/usr/src/libseccomp/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("libseccomp"), false, pkg.NewHTTPGetTar(
nil,
"https://github.com/seccomp/libseccomp/releases/download/"+
"v"+version+"/libseccomp-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[Libseccomp] = Toolchain.newLibseccomp }

internal/rosa/libxml2.go Normal file

@@ -0,0 +1,35 @@
package rosa
import (
"strings"
"hakurei.app/internal/pkg"
)
func (t Toolchain) newLibxml2() pkg.Artifact {
const (
version = "2.15.1"
checksum = "pYzAR3cNrEHezhEMirgiq7jbboLzwMj5GD7SQp0jhSIMdgoU4G9oU9Gxun3zzUIU"
)
return t.New("libxml2-"+version, false, []pkg.Artifact{
t.Load(Make),
}, nil, nil, `
cd /usr/src/
tar xf libxml2.tar.xz
mv libxml2-`+version+` libxml2
cd "$(mktemp -d)"
/usr/src/libxml2/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("libxml2.tar.xz"), false, pkg.NewHTTPGet(
nil, "https://download.gnome.org/sources/libxml2/"+
strings.Join(strings.Split(version, ".")[:2], ".")+
"/libxml2-"+version+".tar.xz",
mustDecode(checksum),
)))
}
func init() { artifactsF[Libxml2] = Toolchain.newLibxml2 }


@@ -5,6 +5,7 @@ import (
"slices"
"strconv"
"strings"
"sync"
"hakurei.app/container/check"
"hakurei.app/internal/pkg"
@@ -72,8 +73,8 @@ func llvmFlagName(flag int) string {
}
}
// newLLVMVariant returns a [pkg.Artifact] containing an LLVM variant.
func (t Toolchain) newLLVMVariant(variant string, attr *llvmAttr) pkg.Artifact {
const (
version = "21.1.8"
checksum = "8SUpqDkcgwOPsqHVtmf9kXfFeVmjVxl4LMn-qSE1AI_Xoeju-9HaoPNGtidyxyka"
@@ -124,24 +125,12 @@ func (t Toolchain) newLLVM(variant string, attr *llvmAttr) pkg.Artifact {
)
}
if attr.flags&llvmProjectClang != 0 {
cache = append(cache,
[2]string{"CLANG_DEFAULT_CXX_STDLIB", "libc++"},
[2]string{"CLANG_DEFAULT_RTLIB", "compiler-rt"},
)
}
if attr.flags&llvmProjectLld != 0 {
@@ -181,35 +170,26 @@ cp -r /system/include /usr/include && rm -rf /system/include
)
}
return t.NewViaCMake("llvm", version, variant, t.NewPatchedSource(
"llvmorg", version, pkg.NewHTTPGetTar(
nil, "https://github.com/llvm/llvm-project/archive/refs/tags/"+
"llvmorg-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
), true, attr.patches...,
), &CMakeAttr{
Cache: slices.Concat(cache, attr.cmake),
Append: cmakeAppend,
Extra: stage3Concat(t, attr.extra,
t.Load(Libffi),
t.Load(Python),
t.Load(Perl),
t.Load(Diffutils),
t.Load(Bash),
t.Load(Coreutils),
t.Load(KernelHeaders),
),
Prefix: attr.prefix,
Env: slices.Concat([]string{
@@ -217,11 +197,13 @@ cat /usr/src/llvm-patches/* | patch -p 1
"ROSA_LLVM_RUNTIMES=" + strings.Join(runtimes, ";"),
}, attr.env),
ScriptEarly: scriptEarly, Script: script + attr.script,
Exclusive: true,
})
}
// newLLVM returns the LLVM toolchain across multiple [pkg.Artifact].
func (t Toolchain) newLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
var target string
switch runtime.GOARCH {
case "386", "amd64":
@@ -237,10 +219,10 @@ func (t Toolchain) NewLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
{"LLVM_ENABLE_LIBXML2", "OFF"},
}
compilerRT = t.newLLVMVariant("compiler-rt", &llvmAttr{
env: stage3ExclConcat(t, []string{},
"LDFLAGS="+earlyLDFLAGS(false),
),
cmake: [][2]string{
// libc++ not yet available
{"CMAKE_CXX_COMPILER_TARGET", ""},
@@ -281,20 +263,21 @@ ln -s \
musl = t.NewMusl(&MuslAttr{
Extra: []pkg.Artifact{compilerRT},
Env: stage3ExclConcat(t, []string{
"CC=clang",
"LIBCC=/system/lib/clang/21/lib/" +
triplet() + "/libclang_rt.builtins.a",
"AR=ar",
"RANLIB=ranlib",
},
"LDFLAGS="+earlyLDFLAGS(false),
),
})
runtimes = t.newLLVMVariant("runtimes", &llvmAttr{
env: stage3ExclConcat(t, []string{},
"LDFLAGS="+earlyLDFLAGS(false),
),
flags: llvmRuntimeLibunwind | llvmRuntimeLibcxx | llvmRuntimeLibcxxABI,
cmake: slices.Concat([][2]string{
// libc++ not yet available
@@ -310,13 +293,13 @@ ln -s \
},
})
clang = t.newLLVMVariant("clang", &llvmAttr{
flags: llvmProjectClang | llvmProjectLld,
env: stage3ExclConcat(t, []string{},
"CFLAGS="+earlyCFLAGS,
"CXXFLAGS="+earlyCXXFLAGS(),
"LDFLAGS="+earlyLDFLAGS(false),
),
cmake: slices.Concat([][2]string{
{"LLVM_TARGETS_TO_BUILD", target},
{"CMAKE_CROSSCOMPILING", "OFF"},
@@ -326,24 +309,17 @@ ln -s \
musl,
compilerRT,
runtimes,
t.Load(Git),
},
script: `
ninja check-all
`,
patches: [][2]string{
{"xfail-broken-tests", `diff --git a/clang/test/Modules/timestamps.c b/clang/test/Modules/timestamps.c
index 50fdce630255..4b4465a75617 100644
--- a/clang/test/Modules/timestamps.c
+++ b/clang/test/Modules/timestamps.c
@@ -353,9 +329,131 @@ index 50fdce630255..4b4465a75617 100644
/// Verify timestamps that gets embedded in the module
#include <c-header.h>
`},
{"path-system-include", `diff --git a/clang/lib/Driver/ToolChains/Linux.cpp b/clang/lib/Driver/ToolChains/Linux.cpp
index cdbf21fb9026..dd052858700d 100644
--- a/clang/lib/Driver/ToolChains/Linux.cpp
+++ b/clang/lib/Driver/ToolChains/Linux.cpp
@@ -773,6 +773,12 @@ void Linux::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(
DriverArgs, CC1Args,
concat(SysRoot, "/usr/include", MultiarchIncludeDir));
+ if (!MultiarchIncludeDir.empty() &&
+ D.getVFS().exists(concat(SysRoot, "/system/include", MultiarchIncludeDir)))
+ addExternCSystemInclude(
+ DriverArgs, CC1Args,
+ concat(SysRoot, "/system/include", MultiarchIncludeDir));
+
if (getTriple().getOS() == llvm::Triple::RTEMS)
return;
@@ -783,6 +789,7 @@ void Linux::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/include"));
addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/usr/include"));
+ addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/system/include"));
if (!DriverArgs.hasArg(options::OPT_nobuiltininc) && getTriple().isMusl())
addSystemInclude(DriverArgs, CC1Args, ResourceDirInclude);
`},
{"path-system-libraries", `diff --git a/clang/lib/Driver/ToolChains/CommonArgs.cpp b/clang/lib/Driver/ToolChains/CommonArgs.cpp
index 8d3775de9be5..1e126e2d6f24 100644
--- a/clang/lib/Driver/ToolChains/CommonArgs.cpp
+++ b/clang/lib/Driver/ToolChains/CommonArgs.cpp
@@ -463,6 +463,15 @@ void tools::AddLinkerInputs(const ToolChain &TC, const InputInfoList &Inputs,
if (!TC.isCrossCompiling())
addDirectoryList(Args, CmdArgs, "-L", "LIBRARY_PATH");
+ const std::string RosaSuffix = "-rosa-linux-musl";
+ if (TC.getTripleString().size() > RosaSuffix.size() &&
+ std::equal(RosaSuffix.rbegin(), RosaSuffix.rend(), TC.getTripleString().rbegin())) {
+ CmdArgs.push_back("-rpath");
+ CmdArgs.push_back("/system/lib");
+ CmdArgs.push_back("-rpath");
+ CmdArgs.push_back(("/system/lib/" + TC.getTripleString()).c_str());
+ }
+
for (const auto &II : Inputs) {
// If the current tool chain refers to an OpenMP offloading host, we
// should ignore inputs that refer to OpenMP offloading devices -
diff --git a/clang/lib/Driver/ToolChains/Linux.cpp b/clang/lib/Driver/ToolChains/Linux.cpp
index 8ac8d4eb9181..795995bb53cb 100644
--- a/clang/lib/Driver/ToolChains/Linux.cpp
+++ b/clang/lib/Driver/ToolChains/Linux.cpp
@@ -324,6 +324,7 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
Generic_GCC::AddMultilibPaths(D, SysRoot, "libo32", MultiarchTriple, Paths);
addPathIfExists(D, concat(SysRoot, "/libo32"), Paths);
addPathIfExists(D, concat(SysRoot, "/usr/libo32"), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system/libo32"), Paths);
}
Generic_GCC::AddMultilibPaths(D, SysRoot, OSLibDir, MultiarchTriple, Paths);
@@ -343,16 +344,20 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
addPathIfExists(D, concat(SysRoot, "/usr/lib", MultiarchTriple), Paths);
addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system/lib", MultiarchTriple), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system", OSLibDir), Paths);
if (IsRISCV) {
StringRef ABIName = tools::riscv::getRISCVABI(Args, Triple);
addPathIfExists(D, concat(SysRoot, "/", OSLibDir, ABIName), Paths);
addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir, ABIName), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system", OSLibDir, ABIName), Paths);
}
Generic_GCC::AddMultiarchPaths(D, SysRoot, OSLibDir, Paths);
addPathIfExists(D, concat(SysRoot, "/lib"), Paths);
addPathIfExists(D, concat(SysRoot, "/usr/lib"), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system/lib"), Paths);
}
ToolChain::RuntimeLibType Linux::GetDefaultRuntimeLibType() const {
@@ -457,6 +462,11 @@ std::string Linux::getDynamicLinker(const ArgList &Args) const {
return Triple.isArch64Bit() ? "/system/bin/linker64" : "/system/bin/linker";
}
if (Triple.isMusl()) {
+ const std::string RosaSuffix = "-rosa-linux-musl";
+ if (Triple.str().size() > RosaSuffix.size() &&
+ std::equal(RosaSuffix.rbegin(), RosaSuffix.rend(), Triple.str().rbegin()))
+ return "/system/bin/linker";
+
std::string ArchName;
bool IsArm = false;
diff --git a/clang/tools/clang-installapi/Options.cpp b/clang/tools/clang-installapi/Options.cpp
index 64324a3f8b01..15ce70b68217 100644
--- a/clang/tools/clang-installapi/Options.cpp
+++ b/clang/tools/clang-installapi/Options.cpp
@@ -515,7 +515,7 @@ bool Options::processFrontendOptions(InputArgList &Args) {
FEOpts.FwkPaths = std::move(FrameworkPaths);
// Add default framework/library paths.
- PathSeq DefaultLibraryPaths = {"/usr/lib", "/usr/local/lib"};
+ PathSeq DefaultLibraryPaths = {"/usr/lib", "/system/lib", "/usr/local/lib"};
PathSeq DefaultFrameworkPaths = {"/Library/Frameworks",
"/System/Library/Frameworks"};
`},
},
})
return
}
var (
// llvm stores the result of Toolchain.newLLVM.
llvm [_toolchainEnd][4]pkg.Artifact
// llvmOnce is for lazy initialisation of llvm.
llvmOnce [_toolchainEnd]sync.Once
)
// NewLLVM returns the LLVM toolchain across multiple [pkg.Artifact].
func (t Toolchain) NewLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
llvmOnce[t].Do(func() {
llvm[t][0], llvm[t][1], llvm[t][2], llvm[t][3] = t.newLLVM()
})
return llvm[t][0], llvm[t][1], llvm[t][2], llvm[t][3]
}

internal/rosa/meson.go Normal file

@@ -0,0 +1,27 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newMeson() pkg.Artifact {
const (
version = "1.10.1"
checksum = "w895BXF_icncnXatT_OLCFe2PYEtg4KrKooMgUYdN-nQVvbFX3PvYWHGEpogsHtd"
)
return t.New("meson-"+version, false, []pkg.Artifact{
t.Load(Python),
t.Load(Setuptools),
}, nil, nil, `
cd /usr/src/meson
chmod -R +w meson.egg-info
python3 setup.py \
install \
--prefix=/system \
--root=/work
`, pkg.Path(AbsUsrSrc.Append("meson"), true, pkg.NewHTTPGetTar(
nil, "https://github.com/mesonbuild/meson/releases/download/"+
version+"/meson-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[Meson] = Toolchain.newMeson }


@@ -29,24 +29,21 @@ func (t Toolchain) NewMusl(attr *MuslAttr) pkg.Artifact {
target := "install"
script := `
mkdir -p /work/system/bin
COMPAT_LINKER_NAME="ld-musl-$(uname -m).so.1"
ln -vs ../lib/libc.so /work/system/bin/linker
ln -vs ../lib/libc.so /work/system/bin/ldd
ln -vs libc.so "/work/system/lib/${COMPAT_LINKER_NAME}"
rm -v "/work/lib/${COMPAT_LINKER_NAME}"
rmdir -v /work/lib
`
if attr.Headers {
target = "install-headers"
script = ""
}
return t.New("musl-"+version, false, stage3Concat(t, attr.Extra,
t.Load(Make),
), nil, slices.Concat([]string{
"ROSA_MUSL_TARGET=" + target,
}, attr.Env), `


@@ -2,15 +2,14 @@ package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newNinja() pkg.Artifact {
const (
version = "1.13.2"
checksum = "ygKWMa0YV2lWKiFro5hnL-vcKbc_-RACZuPu0Io8qDvgQlZ0dxv7hPNSFkt4214v"
)
return t.New("ninja-"+version, false, []pkg.Artifact{
t.Load(CMake),
t.Load(Python),
}, nil, nil, `
chmod -R +w /usr/src/ninja/
mkdir -p /work/system/bin/ && cd /work/system/bin/
@@ -33,3 +32,4 @@ python3 /usr/src/ninja/configure.py \
pkg.TarGzip,
)))
}
func init() { artifactsF[Ninja] = Toolchain.newNinja }


@@ -2,26 +2,30 @@ package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newPerl() pkg.Artifact {
const (
version = "5.42.0"
checksum = "2KR7Jbpk-ZVn1a30LQRwbgUvg2AXlPQZfzrqCr31qD5-yEsTwVQ_W76eZH-EdxM9"
)
return t.New("perl-"+version, false, []pkg.Artifact{
t.Load(Make),
}, nil, nil, `
chmod -R +w /usr/src/perl && cd /usr/src/perl
echo 'print STDOUT "1..0 # Skip broken test\n";' > ext/Pod-Html/t/htmldir3.t
./Configure \
-des \
-Dprefix=/system \
-Dcc="clang" \
-Dcflags='--std=gnu99' \
-Dldflags="${LDFLAGS}" \
-Doptimize='-O2 -fno-strict-aliasing' \
-Duseithreads
make \
"-j$(nproc)" \
TEST_JOBS=256 \
test_harness
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("perl"), true, pkg.NewHTTPGetTar(
nil,
@@ -30,3 +34,4 @@ make DESTDIR=/work install
pkg.TarGzip,
)))
}
func init() { artifactsF[Perl] = Toolchain.newPerl }


@@ -0,0 +1,29 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newPkgConfig() pkg.Artifact {
const (
version = "0.29.2"
checksum = "gi7yAvkwo20Inys1tHbeYZ3Wjdm5VPkrnO0Q6_QZPCAwa1zrA8F4a63cdZDd-717"
)
return t.New("pkg-config-"+version, false, []pkg.Artifact{
t.Load(Make),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/pkg-config/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
CFLAGS='-Wno-int-conversion' \
--with-internal-glib
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("pkg-config"), true, pkg.NewHTTPGetTar(
nil,
"https://pkgconfig.freedesktop.org/releases/"+
"pkg-config-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[PkgConfig] = Toolchain.newPkgConfig }


@@ -6,8 +6,7 @@ import (
 "hakurei.app/internal/pkg"
 )
-// NewPython returns a [pkg.Artifact] containing an installation of Python.
-func (t Toolchain) NewPython() pkg.Artifact {
+func (t Toolchain) newPython() pkg.Artifact {
 const (
 version = "3.14.2"
 checksum = "7nZunVMGj0viB-CnxpcRego2C90X5wFsMTgsoewd5z-KSZY2zLuqaBwG-14zmKys"
@@ -33,12 +32,16 @@ func (t Toolchain) NewPython() pkg.Artifact {
 // breaks on llvm
 "test_dbm_gnu",
 }
-return t.New("python-"+version, []pkg.Artifact{
-t.NewMake(),
-t.NewZlib(),
-t.NewLibffi(),
+return t.New("python-"+version, false, []pkg.Artifact{
+t.Load(Make),
+t.Load(Zlib),
+t.Load(Libffi),
 }, nil, []string{
 "EXTRATESTOPTS=-j0 -x " + strings.Join(skipTests, " -x "),
+// _ctypes appears to infer something from the linker name
+"LDFLAGS=-Wl,--dynamic-linker=/system/lib/" +
+"ld-musl-" + linuxArch() + ".so.1",
 }, `
 # test_synopsis_sourceless assumes this is writable and checks __pycache__
 chmod -R +w /usr/src/python/
@@ -46,7 +49,8 @@ chmod -R +w /usr/src/python/
 export HOME="$(mktemp -d)"
 cd "$(mktemp -d)"
 /usr/src/python/configure \
---prefix=/system
+--prefix=/system \
+--build="${ROSA_TRIPLE}"
 make "-j$(nproc)"
 make test
 make DESTDIR=/work install
@@ -58,3 +62,26 @@ make DESTDIR=/work install
 pkg.TarGzip,
 )))
 }
+func init() { artifactsF[Python] = Toolchain.newPython }
+func (t Toolchain) newSetuptools() pkg.Artifact {
+const (
+version = "80.10.1"
+checksum = "p3rlwEmy1krcUH1KabprQz1TCYjJ8ZUjOQknQsWh3q-XEqLGEd3P4VrCc7ouHGXU"
+)
+return t.New("setuptools-"+version, false, []pkg.Artifact{
+t.Load(Python),
+}, nil, nil, `
+pip3 install \
+--no-index \
+--prefix=/system \
+--root=/work \
+/usr/src/setuptools
+`, pkg.Path(AbsUsrSrc.Append("setuptools"), true, pkg.NewHTTPGetTar(
+nil, "https://github.com/pypa/setuptools/archive/refs/tags/"+
+"v"+version+".tar.gz",
+mustDecode(checksum),
+pkg.TarGzip,
+)))
+}
+func init() { artifactsF[Setuptools] = Toolchain.newSetuptools }


@@ -63,16 +63,11 @@ func triplet() string {
 const (
 // EnvTriplet holds the return value of triplet.
 EnvTriplet = "ROSA_TRIPLE"
-// EnvRefCFLAGS holds toolchain-specific reference CFLAGS.
-EnvRefCFLAGS = "ROSA_CFLAGS"
-// EnvRefCXXFLAGS holds toolchain-specific reference CXXFLAGS.
-EnvRefCXXFLAGS = "ROSA_CXXFLAGS"
 )
-// ldflags returns LDFLAGS corresponding to triplet.
-func ldflags(static bool) string {
-s := "LDFLAGS=" +
-"-fuse-ld=lld " +
+// earlyLDFLAGS returns LDFLAGS corresponding to triplet.
+func earlyLDFLAGS(static bool) string {
+s := "-fuse-ld=lld " +
 "-L/system/lib -Wl,-rpath=/system/lib " +
 "-L/system/lib/" + triplet() + " " +
 "-Wl,-rpath=/system/lib/" + triplet() + " " +
@@ -80,18 +75,18 @@ func ldflags(static bool) string {
 "-unwindlib=libunwind " +
 "-Wl,--as-needed"
 if !static {
-s += " -Wl,--dynamic-linker=/system/lib/ld-musl-x86_64.so.1"
+s += " -Wl,--dynamic-linker=/system/bin/linker"
 }
 return s
 }
-// cflags is reference CFLAGS for the Rosa OS toolchain.
-const cflags = "-Qunused-arguments " +
+// earlyCFLAGS is reference CFLAGS for the stage3 toolchain.
+const earlyCFLAGS = "-Qunused-arguments " +
 "-isystem/system/include"
-// cxxflags returns reference CXXFLAGS for the Rosa OS toolchain corresponding
-// to [runtime.GOARCH].
-func cxxflags() string {
+// earlyCXXFLAGS returns reference CXXFLAGS for the stage3 toolchain
+// corresponding to [runtime.GOARCH].
+func earlyCXXFLAGS() string {
 return "--start-no-unused-arguments " +
 "-stdlib=libc++ " +
 "--end-no-unused-arguments " +
@@ -119,8 +114,30 @@ const (
 // Std denotes the standard Rosa OS toolchain.
 Std
+// _toolchainEnd is the total number of toolchains available and does not
+// denote a valid toolchain.
+_toolchainEnd
 )
+// stage3Concat concatenates s and values. If the current toolchain is
+// toolchainStage3, stage3Concat returns s as is.
+func stage3Concat[S ~[]E, E any](t Toolchain, s S, values ...E) S {
+if t == toolchainStage3 {
+return s
+}
+return slices.Concat(s, values)
+}
+// stage3ExclConcat concatenates s and values. If the current toolchain is not
+// toolchainStage3, stage3ExclConcat returns s as is.
+func stage3ExclConcat[S ~[]E, E any](t Toolchain, s S, values ...E) S {
+if t == toolchainStage3 {
+return slices.Concat(s, values)
+}
+return s
+}
 // lastIndexFunc is like [strings.LastIndexFunc] but for [slices].
 func lastIndexFunc[S ~[]E, E any](s S, f func(E) bool) (i int) {
 if i = slices.IndexFunc(s, f); i < 0 {
@@ -159,6 +176,7 @@ var absCureScript = fhs.AbsUsrBin.Append(".cure-script")
 // New returns a [pkg.Artifact] compiled on this toolchain.
 func (t Toolchain) New(
 name string,
+exclusive bool,
 extra []pkg.Artifact,
 checksum *pkg.Checksum,
 env []string,
@@ -175,10 +193,12 @@
 )
 switch t {
 case toolchainBusybox:
+name += "-early"
 support = slices.Concat([]pkg.Artifact{newBusyboxBin()}, extra)
 env = fixupEnviron(env, nil, "/system/bin")
 case toolchainStage3:
+name += "-boot"
 const (
 version = "20260111T160052Z"
 checksum = "c5_FwMnRN8RZpTdBLGYkL4RR8ampdaZN2JbkgrFLe8-QHQAVQy08APVvIL6eT7KW"
@@ -187,7 +207,7 @@
 args[0] = "bash"
 support = slices.Concat([]pkg.Artifact{
 cureEtc{},
-toolchainBusybox.New("stage3-"+version, nil, nil, nil, `
+toolchainBusybox.New("stage3-"+version, false, nil, nil, nil, `
 tar -C /work -xf /usr/src/stage3.tar.xz
 rm -rf /work/dev/ /work/proc/
 ln -vs ../usr/bin /work/bin
@@ -203,16 +223,17 @@ ln -vs ../usr/bin /work/bin
 env = fixupEnviron(env, []string{
 EnvTriplet + "=" + triplet(),
 lcMessages,
-EnvRefCFLAGS + "=" + cflags,
-EnvRefCXXFLAGS + "=" + cxxflags(),
-ldflags(true),
+"LDFLAGS=" + earlyLDFLAGS(true),
 }, "/system/bin",
 "/usr/bin",
 "/usr/lib/llvm/21/bin",
 )
 case toolchainIntermediate, Std:
+if t < Std {
+name += "-std"
+}
 boot := t - 1
 musl, compilerRT, runtimes, clang := boot.NewLLVM()
 support = slices.Concat(extra, []pkg.Artifact{
@@ -221,19 +242,12 @@ ln -vs ../usr/bin /work/bin
 compilerRT,
 runtimes,
 clang,
-boot.NewBusybox(),
+boot.Load(Busybox),
 })
 env = fixupEnviron(env, []string{
 EnvTriplet + "=" + triplet(),
 lcMessages,
-// autotools projects act up with CFLAGS
-"CC=clang " + cflags,
-EnvRefCFLAGS + "=" + cflags,
-"CXX=clang++ " + cxxflags(),
-EnvRefCXXFLAGS + "=" + cxxflags(),
-ldflags(false),
 "AR=ar",
 "RANLIB=ranlib",
 "LIBCC=/system/lib/clang/21/lib/" + triplet() +
@@ -245,7 +259,7 @@ ln -vs ../usr/bin /work/bin
 }
 return pkg.NewExec(
-name, checksum, pkg.ExecTimeoutMax,
+name, checksum, pkg.ExecTimeoutMax, exclusive,
 fhs.AbsRoot, env,
 path, args,
@@ -258,3 +272,43 @@ ln -vs ../usr/bin /work/bin
 )}, paths)...,
 )
 }
+// NewPatchedSource returns [pkg.Artifact] of source with patches applied. If
+// passthrough is true, source is returned as is for zero length patches.
+func (t Toolchain) NewPatchedSource(
+name, version string,
+source pkg.Artifact,
+passthrough bool,
+patches ...[2]string,
+) pkg.Artifact {
+if passthrough && len(patches) == 0 {
+return source
+}
+paths := make([]pkg.ExecPath, len(patches)+1)
+for i, p := range patches {
+paths[i+1] = pkg.Path(
+AbsUsrSrc.Append(name+"-patches", p[0]+".patch"), false,
+pkg.NewFile(p[0]+".patch", []byte(p[1])),
+)
+}
+paths[0] = pkg.Path(AbsUsrSrc.Append(name), false, source)
+aname := name + "-" + version + "-src"
+script := `
+cp -r /usr/src/` + name + `/. /work/.
+chmod -R +w /work && cd /work
+`
+if len(paths) > 1 {
+script += `
+cat /usr/src/` + name + `-patches/* | \
+patch \
+-p 1 \
+--ignore-whitespace
+`
+aname += "-patched"
+}
+return t.New(aname, false, stage3Concat(t, []pkg.Artifact{},
+t.Load(Patch),
+), nil, nil, script, paths...)
+}


@@ -2,14 +2,13 @@ package rosa
 import "hakurei.app/internal/pkg"
-// NewRsync returns a [pkg.Artifact] containing an installation of rsync.
-func (t Toolchain) NewRsync() pkg.Artifact {
+func (t Toolchain) newRsync() pkg.Artifact {
 const (
 version = "3.4.1"
 checksum = "VBlTsBWd9z3r2-ex7GkWeWxkUc5OrlgDzikAC0pK7ufTjAJ0MbmC_N04oSVTGPiv"
 )
-return t.New("rsync-"+version, []pkg.Artifact{
-t.NewMake(),
+return t.New("rsync-"+version, false, []pkg.Artifact{
+t.Load(Make),
 }, nil, nil, `
 cd "$(mktemp -d)"
 /usr/src/rsync/configure --prefix=/system \
@@ -27,3 +26,4 @@ make DESTDIR=/work install
 pkg.TarGzip,
 )))
 }
+func init() { artifactsF[Rsync] = Toolchain.newRsync }

internal/rosa/wayland.go (new file)

@@ -0,0 +1,82 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newWayland() pkg.Artifact {
const (
version = "1.24.0"
checksum = "JxgLiFRRGw2D3uhVw8ZeDbs3V7K_d4z_ypDog2LBqiA_5y2vVbUAk5NT6D5ozm0m"
)
return t.New("wayland-"+version, false, []pkg.Artifact{
t.Load(Python),
t.Load(Meson),
t.Load(PkgConfig),
t.Load(CMake),
t.Load(Ninja),
t.Load(Libffi),
t.Load(Libexpat),
t.Load(Libxml2),
}, nil, nil, `
cd /usr/src/wayland
chmod +w tests tests/sanity-test.c
echo 'int main(){}' > tests/sanity-test.c
cd "$(mktemp -d)"
meson setup \
--reconfigure \
--buildtype=release \
--prefix=/system \
--prefer-static \
-Ddocumentation=false \
-Dtests=true \
-Ddefault_library=both \
. /usr/src/wayland
meson compile
meson test
meson install \
--destdir=/work
`, pkg.Path(AbsUsrSrc.Append("wayland"), true, pkg.NewHTTPGetTar(
nil, "https://gitlab.freedesktop.org/wayland/wayland/"+
"-/archive/"+version+"/wayland-"+version+".tar.bz2",
mustDecode(checksum),
pkg.TarBzip2,
)))
}
func init() { artifactsF[Wayland] = Toolchain.newWayland }
func (t Toolchain) newWaylandProtocols() pkg.Artifact {
const (
version = "1.47"
checksum = "B_NodZ7AQfCstcx7kgbaVjpkYOzbAQq0a4NOk-SA8bQixAE20FY3p1-6gsbPgHn9"
)
return t.New("wayland-protocols-"+version, false, []pkg.Artifact{
t.Load(Python),
t.Load(Meson),
t.Load(PkgConfig),
t.Load(CMake),
t.Load(Ninja),
t.Load(Wayland),
t.Load(Libffi),
t.Load(Libexpat),
t.Load(Libxml2),
}, nil, nil, `
cd "$(mktemp -d)"
meson setup \
--reconfigure \
--buildtype=release \
--prefix=/system \
--prefer-static \
. /usr/src/wayland-protocols
meson compile
meson install \
--destdir=/work
`, pkg.Path(AbsUsrSrc.Append("wayland-protocols"), false, pkg.NewHTTPGetTar(
nil, "https://gitlab.freedesktop.org/wayland/wayland-protocols/"+
"-/archive/"+version+"/wayland-protocols-"+version+".tar.bz2",
mustDecode(checksum),
pkg.TarBzip2,
)))
}
func init() { artifactsF[WaylandProtocols] = Toolchain.newWaylandProtocols }

internal/rosa/x.go (new file)

@@ -0,0 +1,52 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newXproto() pkg.Artifact {
const (
version = "7.0.23"
checksum = "goxwWxV0jZ_3pNczXFltZWHAhq92x-aEreUGyp5Ns8dBOoOmgbpeNIu1nv0Zx07z"
)
return t.New("xproto-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(PkgConfig),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/xproto/configure \
--prefix=/system \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("xproto"), true, pkg.NewHTTPGetTar(
nil, "https://www.x.org/releases/X11R7.7/src/proto/xproto-"+version+".tar.bz2",
mustDecode(checksum),
pkg.TarBzip2,
)))
}
func init() { artifactsF[Xproto] = Toolchain.newXproto }
func (t Toolchain) newLibXau() pkg.Artifact {
const (
version = "1.0.7"
checksum = "bm768RoZZnHRe9VjNU1Dw3BhfE60DyS9D_bgSR-JLkEEyUWT_Hb_lQripxrXto8j"
)
return t.New("libXau-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(PkgConfig),
t.Load(Xproto),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/libXau/configure \
--prefix=/system \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("libXau"), true, pkg.NewHTTPGetTar(
nil, "https://www.x.org/releases/X11R7.7/src/lib/"+
"libXau-"+version+".tar.bz2",
mustDecode(checksum),
pkg.TarBzip2,
)))
}
func init() { artifactsF[LibXau] = Toolchain.newLibXau }

internal/rosa/xcb.go (new file)

@@ -0,0 +1,56 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newXCBProto() pkg.Artifact {
const (
version = "1.17.0"
checksum = "_NtbKaJ_iyT7XiJz25mXQ7y-niTzE8sHPvLXZPcqtNoV_-vTzqkezJ8Hp2U1enCv"
)
return t.New("xcb-proto-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Python),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/xcb-proto/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("xcb-proto"), true, pkg.NewHTTPGetTar(
nil, "https://xcb.freedesktop.org/dist/xcb-proto-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[XCBProto] = Toolchain.newXCBProto }
func (t Toolchain) newXCB() pkg.Artifact {
const (
version = "1.17.0"
checksum = "hjjsc79LpWM_hZjNWbDDS6qRQUXREjjekS6UbUsDq-RR1_AjgNDxhRvZf-1_kzDd"
)
return t.New("xcb-"+version, false, []pkg.Artifact{
t.Load(Make),
t.Load(Python),
t.Load(PkgConfig),
t.Load(XCBProto),
t.Load(Xproto),
t.Load(LibXau),
}, nil, nil, `
cd "$(mktemp -d)"
/usr/src/xcb/configure \
--prefix=/system \
--build="${ROSA_TRIPLE}" \
--enable-static
make "-j$(nproc)" check
make DESTDIR=/work install
`, pkg.Path(AbsUsrSrc.Append("xcb"), true, pkg.NewHTTPGetTar(
nil, "https://xcb.freedesktop.org/dist/libxcb-"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
)))
}
func init() { artifactsF[XCB] = Toolchain.newXCB }


@@ -2,17 +2,16 @@ package rosa
 import "hakurei.app/internal/pkg"
-// NewZlib returns a new [pkg.Artifact] containing an installation of zlib.
-func (t Toolchain) NewZlib() pkg.Artifact {
+func (t Toolchain) newZlib() pkg.Artifact {
 const (
 version = "1.3.1"
 checksum = "E-eIpNzE8oJ5DsqH4UuA_0GDKuQF5csqI8ooDx2w7Vx-woJ2mb-YtSbEyIMN44mH"
 )
-return t.New("zlib-"+version, []pkg.Artifact{
-t.NewMake(),
+return t.New("zlib-"+version, false, []pkg.Artifact{
+t.Load(Make),
 }, nil, nil, `
 cd "$(mktemp -d)"
-CFLAGS="${CFLAGS} -fPIC" /usr/src/zlib/configure \
+CC="clang -fPIC" /usr/src/zlib/configure \
 --prefix /system
 make "-j$(nproc)" test
 make DESTDIR=/work install
@@ -23,3 +22,4 @@ make DESTDIR=/work install
 pkg.TarGzip,
 )))
 }
+func init() { artifactsF[Zlib] = Toolchain.newZlib }


@@ -35,7 +35,7 @@ package
 *Default:*
-` <derivation hakurei-static-x86_64-unknown-linux-musl-0.3.3> `
+` <derivation hakurei-static-x86_64-unknown-linux-musl-0.3.4> `
@@ -805,7 +805,7 @@ package
 *Default:*
-` <derivation hakurei-hsu-0.3.3> `
+` <derivation hakurei-hsu-0.3.4> `


@@ -35,7 +35,7 @@
 buildGoModule rec {
 pname = "hakurei";
-version = "0.3.3";
+version = "0.3.4";
 srcFiltered = builtins.path {
 name = "${pname}-src";