60 Commits

Author SHA1 Message Date
kat
cb4b2706c0 cmd/mbf: bring back pkgserver's favicon!
It existed in mae's #33, but ozy seems to have lost it during her
changes pre-merge, so just add it back again.

This favicon image was grabbed from mae:
8a38b614c6/cmd/pkgserver/ui/static/favicon.ico
That commit is the latest one of the salvaged original #33 history; see
#33 (comment).
2026-05-15 21:05:26 +10:00
c32c06b2e8 internal/rosa: mesa artifact
This has many dependencies.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 05:12:35 +09:00
61199f734c internal/rosa/glslang: remove headers prefix
Maintainers tried to be clever with this, and it breaks cmake paths.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 04:57:38 +09:00
87cf0d4e6b internal/rosa/mesa: libva artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 04:40:04 +09:00
cf0dffa0f5 internal/rosa/mesa: libglvnd enable glx
Required to break circular dependency.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 04:35:50 +09:00
686d7ec63a internal/rosa/x: xserver artifact
Required by libglvnd test suite.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 04:15:48 +09:00
4c653b1151 internal/rosa/x: xkeyboard-config artifact
Required by xserver test suite.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 03:59:22 +09:00
0b0a63d151 internal/rosa/x: libxcb-util-wm artifact
Required by xserver xephyr.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 03:46:04 +09:00
6231cfe2aa internal/rosa/x: libxcb-util-image artifact
Required by xserver xephyr.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 03:36:45 +09:00
712e80890b internal/rosa/x: libxcb-util artifact
Required by xserver xephyr.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 03:25:24 +09:00
3fe7d48014 internal/rosa/x: libxcb-render-util artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 03:09:37 +09:00
16f9d39427 internal/rosa: libepoxy artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 02:16:55 +09:00
c1cd5ba07b internal/rosa: libtirpc artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 02:07:25 +09:00
7b0cd2e472 internal/rosa/x: libXdmcp artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 01:44:37 +09:00
e580307528 internal/rosa/x: libxcvt artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 01:24:00 +09:00
ee1dffb676 internal/rosa/x: libXfont2 artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 01:17:27 +09:00
f095fcf181 internal/rosa/x: font-util and libfontenc artifact
Required by libXfont2.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 01:14:12 +09:00
ca8a130130 internal/rosa: freetype artifact
Required by libXfont.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-14 00:54:42 +09:00
0ad6b00e41 internal/rosa/x: xkbcomp artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 22:24:08 +09:00
ad0f1cf36b internal/rosa/x: libxkbfile artifact
Required by xkbcomp.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 22:22:02 +09:00
b12d924fa2 internal/rosa: pixman artifact
Required by xserver.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 22:07:53 +09:00
c31d8ae41a internal/rosa/x: libXfixes artifact
Required by libva.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 21:36:47 +09:00
6dbbf15c0e internal/rosa: lm_sensors artifact
Generally useful, and an optional dependency of mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 20:11:37 +09:00
be7de68a42 internal/rosa/perl: Test::Cmd artifact
Required by lm_sensors test suite.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 20:05:43 +09:00
a759cf3666 internal/pkg: check exec substitution
This relies on the testtool having ident as relevant input to assert successful substitution.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 19:43:53 +09:00
8c2dd3e984 internal/pkg: verify status kind
While it is still impossible to reliably determine the expected contents of these status files, this checks their nature for expected substitution behaviour.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 19:27:58 +09:00
67038d5af4 internal/pkg: log fault in tests when available
This would otherwise only be available in verbose output, interleaved with everything else.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 18:58:18 +09:00
53d8d12e7f internal/rosa/git: disable flaky test
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 18:51:11 +09:00
7997d79e56 cmd/mbf: display and destroy fault entries
This change extends cmd/mbf commands for working with fault entries.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 19:06:09 +09:00
f2f1726190 internal/pkg: record cure faults
These are useful for troubleshooting. This change records them in a separate directory.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 17:58:18 +09:00
f63203cb0a internal/pkg: populate substitute status
These are not created when taking the fast path, but should be inherited from the alternative.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 16:16:37 +09:00
19555c7670 internal/rosa/gtk: glib 2.88.0 to 2.88.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:48:38 +09:00
a3beab8959 internal/rosa/libucontext: 1.5 to 1.5.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:48:15 +09:00
2ea786d6a9 internal/rosa/libbsd: libmd 1.1.0 to 1.2.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:47:57 +09:00
747d4ec4b0 internal/rosa/libexpat: 2.8.0 to 2.8.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:47:32 +09:00
b76e6f6519 internal/rosa/tamago: 1.26.2 to 1.26.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:47:05 +09:00
840d8f68bf internal/rosa/git: disable flaky test
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:38:59 +09:00
4bede7ecdd internal/pkg: discontinue DCE resolution on signal
This serves as a stopgap measure to skip long-running DCE resolutions.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:29:01 +09:00
487a03b5a3 internal/pkg: deduplicate DCE by ident
This eliminates edge cases where target artifacts do not compare equal.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-13 00:18:27 +09:00
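Deduplicating by a stable ident rather than by value equality, as described above, can be sketched like this. The types and traversal below are hypothetical, not the real internal/pkg DCE code.

```go
package main

import "fmt"

// artifact is a stand-in for a dependency-graph node; ident is assumed to be
// a stable identifier shared by semantically equal artifacts.
type artifact struct {
	ident string
	deps  []*artifact
}

// walk visits each ident once, even when two distinct artifact values carry
// the same ident and would not compare equal as values.
func walk(a *artifact, seen map[string]bool, out *[]string) {
	if seen[a.ident] { // dedupe by ident, not by pointer or value equality
		return
	}
	seen[a.ident] = true
	*out = append(*out, a.ident)
	for _, d := range a.deps {
		walk(d, seen, out)
	}
}

func main() {
	// Two distinct values with the same ident still count once.
	root := &artifact{ident: "app", deps: []*artifact{
		{ident: "libc"},
		{ident: "libc"},
	}}
	var order []string
	walk(root, map[string]bool{}, &order)
	fmt.Println(order)
}
```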
8f3c22896a internal/pkg: DCE benchmark unwrap only
This eliminates noise at lower depths.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 19:56:59 +09:00
a167c1aba5 internal/pkg: hold artifact in DCE
This is significantly slower but enables much better error reporting.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 19:45:25 +09:00
a6008ef68b internal/pkg: benchmark early DCE
This error has never had decent performance; now is a good time to improve that.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 18:59:25 +09:00
5228b27362 internal/rosa/glslang: 16.2.0 to 16.3.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 17:53:35 +09:00
f00d3a07ad internal/rosa/python: trove-classifiers 2026.4.28.13 to 2026.5.7.17
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 17:53:17 +09:00
f9538bc21b internal/rosa/python: 3.14.4 to 3.14.5
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 17:52:53 +09:00
6ae5efec56 internal/rosa/gnu: gcc 15.2.0 to 16.1.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 17:52:31 +09:00
14f4c59c8c internal/rosa/llvm: 22.1.4 to 22.1.5
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 17:52:13 +09:00
688d43417b internal/pkg: rename measured exec type
This type is no longer exclusive to KindExecNet.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 15:23:33 +09:00
9f8fafa39b internal/rosa: measure kernel headers
This makes version bumps robust and much less tedious.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 15:19:57 +09:00
6643cfbeee internal/pkg: optionally measure exec artifact
Useful for verifying deterministic output without enabling network access.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 15:11:17 +09:00
dcde38f2e9 internal/rosa/llvm: set exclusive bit
This was missed when improving bootstrap.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 15:08:09 +09:00
deebbf6b1a internal/rosa/git: disable more flaky tests
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 04:13:02 +09:00
0c557798bc internal/rosa/curl: 8.19.0 to 8.20.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 04:12:40 +09:00
327e6ed5a2 internal/rosa/kernel: 6.12.84 to 6.12.87
This change also pins header version constants to the same values, to be updated manually on a real API change. This eliminates rebuilds on bumping kernel version.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 04:05:30 +09:00
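The pinning scheme described in the message above can be sketched as follows. The constant and function names are hypothetical, not the real internal/rosa identifiers.

```go
package main

import "fmt"

const (
	// Bumped on every kernel release.
	kernelVersion = "6.12.87"

	// Pinned manually, updated only on a real API change; consumers key off
	// this, so bumping kernelVersion alone does not invalidate their builds.
	headersVersion = "6.12.84"
)

// headerCacheKey derives the consumers' cache key from the pinned constant,
// so a routine kernel version bump leaves it unchanged.
func headerCacheKey() string {
	return fmt.Sprintf("linux-headers-%s", headersVersion)
}

func main() {
	fmt.Println(headerCacheKey())
}
```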
76c7a423a9 internal/rosa/git: disable more flaky tests
Again, causing too many spurious failures.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 03:18:12 +09:00
6e113b8836 internal/pkg: content-based dependency substitution
This change introduces a new fast path for FloodArtifact. It is taken when a curing artifact has identical-by-content controlled relevant inputs and is otherwise identical to an already-cured artifact.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-12 00:19:42 +09:00
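The fast path described above can be sketched roughly as follows. FloodArtifact is named in the commit message; everything else here (inputDigest, substitute, the cured map) is illustrative, not the real internal/pkg API.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// inputDigest hashes the controlled relevant inputs by content, so two
// artifacts with byte-identical inputs produce the same digest even if the
// input values are not otherwise comparable.
func inputDigest(inputs [][]byte) [sha256.Size]byte {
	h := sha256.New()
	for _, in := range inputs {
		h.Write(in)
	}
	var d [sha256.Size]byte
	copy(d[:], h.Sum(nil))
	return d
}

// cured maps an input digest to the output of an already-cured artifact.
var cured = map[[sha256.Size]byte]string{}

// substitute takes the fast path: if an already-cured artifact had
// content-identical inputs, reuse its output instead of curing again.
func substitute(inputs [][]byte) (string, bool) {
	out, ok := cured[inputDigest(inputs)]
	return out, ok
}

func main() {
	cured[inputDigest([][]byte{[]byte("a"), []byte("b")})] = "/store/out1"
	fmt.Println(substitute([][]byte{[]byte("a"), []byte("b")}))
}
```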
ce9f4b5f71 internal/rosa: vim artifact
Very useful for troubleshooting.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-10 21:45:56 +09:00
8f727273ef internal/pkg: add riscv64 sums
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-10 17:12:30 +09:00
d0a63b942e internal/pkg: add arm64 sums
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-10 16:42:42 +09:00
7f2126df32 internal/rosa/hakurei: 0.4.1 to 0.4.2
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-05-10 16:30:12 +09:00
41 changed files with 1945 additions and 233 deletions

Binary file not shown (image, 17 KiB).

@@ -8,7 +8,7 @@ import (
 )
 //go:generate tsc
-//go:generate cp index.html style.css static
+//go:generate cp index.html style.css favicon.ico static
 //go:embed static
 var _static embed.FS


@@ -506,6 +506,8 @@ func main() {
 		flagExport string
 		flagRemote bool
 		flagNoReply bool
+		flagFaults bool
+		flagPop bool
 		flagBoot bool
 		flagStd bool
@@ -614,6 +616,49 @@ func main() {
 				}
 				return err
+			case flagFaults:
+				var faults []pkg.Fault
+				if err := cm.Do(func(cache *pkg.Cache) (err error) {
+					faults, err = cache.ReadFaults(t.Load(p))
+					return
+				}); err != nil {
+					return err
+				}
+				for _, fault := range faults {
+					log.Printf("%s: %s ago", fault.String(), time.Since(fault.Time()))
+				}
+				return nil
+			case flagPop:
+				var faults []pkg.Fault
+				if err := cm.Do(func(cache *pkg.Cache) (err error) {
+					faults, err = cache.ReadFaults(t.Load(p))
+					return
+				}); err != nil {
+					return err
+				}
+				if len(faults) == 0 {
+					return errors.New("no fault entries found")
+				}
+				fault := faults[len(faults)-1]
+				r, err := fault.Open()
+				if err != nil {
+					return err
+				}
+				if _, err = io.Copy(os.Stdout, r); err != nil {
+					_ = r.Close()
+					return err
+				}
+				fmt.Println()
+				if err = r.Close(); err != nil {
+					return err
+				}
+				log.Printf("faulting cure terminated %s ago", time.Since(fault.Time()))
+				return fault.Destroy()
 			}
 		},
 	).Flag(
@@ -644,9 +689,40 @@ func main() {
 		&flagStd,
 		"std", command.BoolFlag(false),
 		"Build on the intermediate toolchain",
+	).Flag(
+		&flagFaults,
+		"faults", command.BoolFlag(false),
+		"Display fault entries of the specified artifact",
+	).Flag(
+		&flagPop,
+		"pop", command.BoolFlag(false),
+		"Display and destroy the most recent fault entry",
 	)
 }
+c.NewCommand(
+	"clear",
+	"Remove all fault entries from the cache",
+	func([]string) error {
+		return cm.Do(func(*pkg.Cache) error {
+			pathname := filepath.Join(cm.base, "fault")
+			dents, err := os.ReadDir(pathname)
+			if err != nil {
+				return err
+			}
+			for _, dent := range dents {
+				msg.Verbosef("destroying entry %s", dent.Name())
+				if err = os.Remove(filepath.Join(pathname, dent.Name())); err != nil {
+					return err
+				}
+			}
+			log.Printf("destroyed %d fault entries", len(dents))
+			return nil
+		})
+	},
+)
 c.NewCommand(
 	"abort",
 	"Abort all pending cures on the daemon",

View File

@@ -162,28 +162,40 @@ type execArtifact struct {
 var _ fmt.Stringer = new(execArtifact)

-// execNetArtifact is like execArtifact but implements [KnownChecksum] and
-// its resulting container keep the host net namespace.
-type execNetArtifact struct {
+// execMeasuredArtifact is like execArtifact but implements [KnownChecksum] and
+// has its resulting container optionally keep the host net namespace.
+type execMeasuredArtifact struct {
 	checksum Checksum

+	// Whether to keep host net namespace.
+	hostNet bool

 	execArtifact
 }

-var _ KnownChecksum = new(execNetArtifact)
+var _ KnownChecksum = new(execMeasuredArtifact)

 // Checksum returns the caller-supplied checksum.
-func (a *execNetArtifact) Checksum() Checksum { return a.checksum }
+func (a *execMeasuredArtifact) Checksum() Checksum { return a.checksum }

-// Kind returns the hardcoded [Kind] constant.
-func (*execNetArtifact) Kind() Kind { return KindExecNet }
+// Kind returns [KindExecNet], or [KindExec] if hostNet is false.
+func (a *execMeasuredArtifact) Kind() Kind {
+	if a == nil || a.hostNet {
+		return KindExecNet
+	}
+	return KindExec
+}

 // Cure cures the [Artifact] in the container described by the caller. The
-// container retains host networking.
-func (a *execNetArtifact) Cure(f *FContext) error {
-	return a.cure(f, true)
+// container optionally retains host networking.
+func (a *execMeasuredArtifact) Cure(f *FContext) error {
+	return a.cure(f, a.hostNet)
 }

+// ErrNetChecksum is panicked by [NewExec] if host net namespace is requested
+// with a nil checksum.
+var ErrNetChecksum = errors.New("attempting to keep net namespace without checksum")

 // NewExec returns a new [Artifact] that executes the program path in a
 // container with specified paths bind mounted read-only in order. A private
 // instance of /proc and /dev is made available to the container.
@@ -197,7 +209,7 @@ func (a *execNetArtifact) Cure(f *FContext) error {
 // regular or symlink.
 //
 // If checksum is non-nil, the resulting [Artifact] implements [KnownChecksum]
-// and its container runs in the host net namespace.
+// and its container optionally runs in the host net namespace.
 //
 // The container is allowed to run for the specified duration before the initial
 // process and all processes originating from it is terminated. A zero or
@@ -211,7 +223,7 @@ func NewExec(
 	name, arch string,
 	checksum *Checksum,
 	timeout time.Duration,
-	exclusive bool,
+	hostNet, exclusive bool,
 	dir *check.Absolute,
 	env []string,
@@ -234,9 +246,12 @@ func NewExec(
 	}
 	a := execArtifact{name, arch, paths, dir, env, pathname, args, timeout, exclusive}
 	if checksum == nil {
+		if hostNet {
+			panic(ErrNetChecksum)
+		}
 		return &a
 	}
-	return &execNetArtifact{*checksum, a}
+	return &execMeasuredArtifact{*checksum, hostNet, a}
 }

 // Kind returns the hardcoded [Kind] constant.
@@ -361,22 +376,17 @@ func readExecArtifact(r *IRReader, net bool) Artifact {
 	exclusive := r.ReadUint32() != 0
 	checksum, ok := r.Finalise()
 	var checksumP *Checksum
-	if net {
-		if !ok {
-			panic(ErrExpectedChecksum)
-		}
-		checksumVal := checksum.Value()
-		checksumP = &checksumVal
-	} else {
-		if ok {
-			panic(ErrUnexpectedChecksum)
-		}
+	if ok {
+		checksumP = new(checksum.Value())
+	}
+
+	if net && !ok {
+		panic(ErrExpectedChecksum)
 	}
 	return NewExec(
-		name, arch, checksumP, timeout, exclusive, dir, env, pathname, args, paths...,
+		name, arch, checksumP, timeout, net, exclusive, dir, env, pathname, args, paths...,
 	)
 }
@@ -588,9 +598,9 @@ func (c *Cache) EnterExec(
 	case *execArtifact:
 		e = f
-	case *execNetArtifact:
+	case *execMeasuredArtifact:
 		e = &f.execArtifact
-		hostNet = true
+		hostNet = f.hostNet
 	default:
 		return ErrNotExec

View File

@@ -1,6 +1,7 @@
 package pkg_test

 import (
+	"bytes"
 	_ "embed"
 	"encoding/gob"
 	"errors"
@@ -11,7 +12,6 @@ import (
 	"path/filepath"
 	"slices"
 	"testing"
-	"unique"

 	"hakurei.app/check"
 	"hakurei.app/container"
@@ -50,14 +50,21 @@ func TestExec(t *testing.T) {
 		"check": {Mode: 0400, Data: []byte{0}},
 	}
 	wantOfflineEncode := pkg.Encode(wantOffline.hash())
+	failingArtifact := &stubArtifact{
+		kind: pkg.KindTar,
+		params: []byte("doomed artifact"),
+		cure: func(t *pkg.TContext) error {
+			return stub.UniqueError(0xcafe)
+		},
+	}
 	checkWithCache(t, []cacheTestCase{
-		{"offline", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"offline", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			testtool, testtoolDestroy := newTesttool()
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-offline", "", nil, 0, false,
+					"exec-offline", "", new(wantOffline.hash()), 0, false, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1"},
 					check.MustAbs("/opt/bin/testtool"),
@@ -79,31 +86,47 @@ func TestExec(t *testing.T) {
 					pkg.MustPath("/opt", false, testtool),
 				), ignorePathname, wantOffline, nil},
-				{"error passthrough", pkg.NewExec(
-					"", "", nil, 0, true,
+				{"substitution", pkg.NewExec(
+					"exec-offline", "", new(wantOffline.hash()), 0, false, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1"},
 					check.MustAbs("/opt/bin/testtool"),
 					[]string{"testtool"},
-					pkg.MustPath("/proc/nonexistent", false, &stubArtifact{
+					pkg.MustPath("/file", false, newStubFile(
+						pkg.KindHTTPGet,
+						pkg.ID{0xfe, 0},
+						nil,
+						nil, nil,
+					)),
+					// substitution miss fails in testtool due to differing idents
+					pkg.MustPath("/.hakurei", false, &stubArtifact{
 						kind: pkg.KindTar,
-						params: []byte("doomed artifact"),
+						params: []byte("empty directory (substituted)"),
 						cure: func(t *pkg.TContext) error {
-							return stub.UniqueError(0xcafe)
+							return os.MkdirAll(t.GetWorkDir().String(), 0700)
 						},
 					}),
+					pkg.MustPath("/opt", false, testtool),
+				), ignorePathname, wantOffline, nil},
+				{"error passthrough", pkg.NewExec(
+					"", "", nil, 0, false, true,
+					pkg.AbsWork,
+					[]string{"HAKUREI_TEST=1"},
+					check.MustAbs("/opt/bin/testtool"),
+					[]string{"testtool"},
+					pkg.MustPath("/proc/nonexistent", false, failingArtifact),
 				), nil, nil, &pkg.DependencyCureError{
 					{
-						Ident: unique.Make(pkg.ID(pkg.MustDecode(
-							"Sowo6oZRmG6xVtUaxB6bDWZhVsqAJsIJWUp0OPKlE103cY0lodx7dem8J-qQF0Z1",
-						))),
+						A: failingArtifact,
 						Err: stub.UniqueError(0xcafe),
 					},
 				}},
 				{"invalid paths", pkg.NewExec(
-					"", "", nil, 0, false,
+					"", "", nil, 0, false, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1"},
 					check.MustAbs("/opt/bin/testtool"),
@@ -114,18 +137,41 @@ func TestExec(t *testing.T) {
 			})
 			// check init failure passthrough
-			var exitError *exec.ExitError
-			if _, _, err := c.Cure(pkg.NewExec(
-				"", "", nil, 0, false,
+			initFailureArtifact := pkg.NewExec(
+				"", "", nil, 0, false, false,
 				pkg.AbsWork,
 				nil,
 				check.MustAbs("/opt/bin/testtool"),
 				[]string{"testtool"},
-			)); !errors.As(err, &exitError) ||
+			)
+			var exitError *exec.ExitError
+			if _, _, err := c.Cure(initFailureArtifact); !errors.As(err, &exitError) ||
 				exitError.ExitCode() != hst.ExitFailure {
 				t.Fatalf("Cure: error = %v, want init exit status 1", err)
 			}
+			var faultStatus []byte
+			if faults, err := c.ReadFaults(initFailureArtifact); err != nil {
+				t.Fatal(err)
+			} else if len(faults) != 1 {
+				t.Fatalf("ReadFaults: %v", faults)
+			} else if faultStatus, err = os.ReadFile(faults[0].String()); err != nil {
+				t.Fatal(err)
+			} else if err = faults[0].Destroy(); err != nil {
+				t.Fatal(err)
+			} else {
+				t.Logf("destroyed expected fault at %s", faults[0].Time().UTC())
+			}
+			if !bytes.HasPrefix(faultStatus, []byte(
+				"internal/pkg ",
+			)) || !bytes.Contains(faultStatus, []byte(
+				"\ninit: fork/exec /opt/bin/testtool: no such file or directory\n",
+			)) {
+				t.Errorf("unexpected status:\n%s", string(faultStatus))
+			}
+			destroyStatus(t, base, 2, 1)
 			testtoolDestroy(t, base, c)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -137,15 +183,19 @@ func TestExec(t *testing.T) {
 			"checksum/OLBgp1GsljhM2TJ-sbHjaiH9txEUvgdDTAzHv2P24donTt6_529l-9Ua0vFImLlb": {Mode: 0400, Data: []byte{}},
 			"identifier": {Mode: fs.ModeDir | 0700},
+			"identifier/IY91PCtOpCYy21AaIK0c9f8-Z6fb2_2ewoHWkt4dxoLf0GOrWqS8yAGFLV84b1Dw": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantOfflineEncode)},
+			"identifier/QwS7SmiatdqryQYgESdGw7Yw2PcpNf0vNfpvUA0t92BTlKiUjfCrXyMW17G2X77X": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
 			"identifier/_gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/OLBgp1GsljhM2TJ-sbHjaiH9txEUvgdDTAzHv2P24donTt6_529l-9Ua0vFImLlb")},
 			"identifier/" + expected.Offline: {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantOfflineEncode)},
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},
-		{"net", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"net", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			testtool, testtoolDestroy := newTesttool()
 			wantNet := expectsFS{
@@ -155,7 +205,7 @@ func TestExec(t *testing.T) {
 			}
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-net", "", new(wantNet.hash()), 0, false,
+					"exec-net", "", new(wantNet.hash()), 0, true, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1"},
 					check.MustAbs("/opt/bin/testtool"),
@@ -178,6 +228,7 @@ func TestExec(t *testing.T) {
 				), ignorePathname, wantNet, nil},
 			})
+			destroyStatus(t, base, 2, 0)
 			testtoolDestroy(t, base, c)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -193,16 +244,18 @@ func TestExec(t *testing.T) {
 			"identifier/_gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/OLBgp1GsljhM2TJ-sbHjaiH9txEUvgdDTAzHv2P24donTt6_529l-9Ua0vFImLlb")},
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},
-		{"overlay root", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"overlay root", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			testtool, testtoolDestroy := newTesttool()
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-overlay-root", "", nil, 0, false,
+					"exec-overlay-root", "", nil, 0, false, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 					check.MustAbs("/opt/bin/testtool"),
@@ -219,6 +272,7 @@ func TestExec(t *testing.T) {
 				), ignorePathname, wantOffline, nil},
 			})
+			destroyStatus(t, base, 2, 0)
 			testtoolDestroy(t, base, c)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -232,16 +286,18 @@ func TestExec(t *testing.T) {
 			"identifier/" + expected.OvlRoot: {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantOfflineEncode)},
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},
-		{"overlay work", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"overlay work", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			testtool, testtoolDestroy := newTesttool()
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-overlay-work", "", nil, 0, false,
+					"exec-overlay-work", "", nil, 0, false, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 					check.MustAbs("/work/bin/testtool"),
@@ -263,6 +319,7 @@ func TestExec(t *testing.T) {
 				), ignorePathname, wantOffline, nil},
 			})
+			destroyStatus(t, base, 2, 0)
 			testtoolDestroy(t, base, c)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -276,16 +333,18 @@ func TestExec(t *testing.T) {
 			"identifier/" + expected.Work: {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantOfflineEncode)},
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},
-		{"multiple layers", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"multiple layers", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			testtool, testtoolDestroy := newTesttool()
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-multiple-layers", "", nil, 0, false,
+					"exec-multiple-layers", "", nil, 0, false, false,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 					check.MustAbs("/opt/bin/testtool"),
@@ -329,6 +388,7 @@ func TestExec(t *testing.T) {
 				), ignorePathname, wantOffline, nil},
 			})
+			destroyStatus(t, base, 2, 0)
 			testtoolDestroy(t, base, c)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -347,16 +407,18 @@ func TestExec(t *testing.T) {
 			"identifier/" + expected.Layers: {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantOfflineEncode)},
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},
-		{"overlay layer promotion", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"overlay layer promotion", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			testtool, testtoolDestroy := newTesttool()
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-layer-promotion", "", nil, 0, true,
+					"exec-layer-promotion", "", nil, 0, false, true,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"},
 					check.MustAbs("/opt/bin/testtool"),
@@ -379,6 +441,7 @@ func TestExec(t *testing.T) {
 				), ignorePathname, wantOffline, nil},
 			})
+			destroyStatus(t, base, 2, 0)
 			testtoolDestroy(t, base, c)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -393,18 +456,20 @@ func TestExec(t *testing.T) {
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
 			"identifier/" + expected.Promote: {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantOfflineEncode)},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},
-		{"binfmt", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
+		{"binfmt", pkg.CValidateKnown | checkDestroySubstitutes, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
 			if info.CanDegrade && os.Getenv("ROSA_SKIP_BINFMT") != "" {
 				t.Skip("binfmt_misc test explicitly skipped")
 			}
 			cureMany(t, c, []cureStep{
 				{"container", pkg.NewExec(
-					"exec-binfmt", "cafe", nil, 0, true,
+					"exec-binfmt", "cafe", nil, 0, false, true,
 					pkg.AbsWork,
 					[]string{"HAKUREI_TEST=1", "HAKUREI_BINFMT=1"},
 					check.MustAbs("/opt/bin/sample"),
@@ -440,6 +505,8 @@ func TestExec(t *testing.T) {
 					"check": {Mode: 0400, Data: []byte("binfmt")},
 				}, nil},
 			})
+			destroyStatus(t, base, 2, 0)
 		}, expectsFS{
 			".": {Mode: fs.ModeDir | 0700},
@@ -456,6 +523,8 @@ func TestExec(t *testing.T) {
 			"identifier/_v8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/5aevg3YpDxjqQZ-pdvXK7YqgkL5JKqcoStYQxeD96kuYar6K2mRQWMHib6NQRnpV")},
 			"identifier/vjz1MHPcGBKV7sjcs8jQP3cqxJ1hgPTiQBMCEHP9BGXjGxd-tJmEmXKaStObo5gK": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/MGWmEfjut2QE2xPJwTsmUzpff4BN_FEnQ7T0j7gvUCCiugJQNwqt9m151fm9D1yU")},
+			"substitute": {Mode: fs.ModeDir | 0700},
 			"temp": {Mode: fs.ModeDir | 0700},
 			"work": {Mode: fs.ModeDir | 0700},
 		}},

View File

@@ -29,6 +29,8 @@ func TestFile(t *testing.T) {
 		"identifier": {Mode: fs.ModeDir | 0700},
 		"identifier/3376ALA7hIUm2LbzH2fDvRezgzod1eTK_G6XjyOgbM2u-6swvkFaF0BOwSl_juBi": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/vsAhtPNo4waRNOASwrQwcIPTqb3SBuJOXw2G4T1mNmVZM-wrQTRllmgXqcIIoRcX")},
+		"substitute": {Mode: fs.ModeDir | 0700},
 		"work": {Mode: fs.ModeDir | 0700},
 	}},
 })

View File

@@ -1,7 +1,7 @@
 package expected

 const (
-	Offline = "oe7Uv1u5BwxcuX3HLQzZRg1Q5oetJo6jWiKGMOeqLiqBkaVgyKzvx82N81_IzUAz"
+	Offline = "q5ktDTq0miP-VvB2blxqXQeaRXCUWgP_KbC18KNtUDtyoaI_h5mHmGuPMArVEBDs"
 	OvlRoot = "NacZGXwuRkTvcHaG08a22ujJ8qCWN0RSoFlRSR5FSt0ZcBbJ28FRvkYsHEtX7G8i"
 	Layers = "WBJDrATtX6rIE5yAu8ePX3WmDF0Tt9kFiue0m3cRnyRoVx1my8a67fh3CAW486oP"
 	Net = "CmYtj2sNB3LHtqiDuck_Lz3MjLLIiwyP8N4NDitQ1Icvv__LVP9p8tm-sHeQaKKp"

View File

@@ -0,0 +1,10 @@
+package expected
+
+const (
+	Offline = "WapqyoPxbWSnq07dWHt71mHaJXq99pAjJfFlELlJljSiZMhTFqqlzU1_mN86shSj"
+	OvlRoot = "V9anFOiRvjGfAeBhLl14AL8TKdWZyD0WTPYe4fS9mOBw8iW5Lmarvt6TG6MV8uWm"
+	Layers = "tKx7JNRoSBdK_7MdzI-nwTNV2wmiPzwYdcd17oLmXKL_iLmUzUiA79qTqdrTasrv"
+	Net = "aXyDLzBCJ9XltXZIfetEVsEkrqHfcXuD5XE_FcUnYbN3emwL55N6P8LlHzNfGnM5"
+	Promote = "3k4V16n96Lq04gjFSKmm4sFjyQ883FFBNXgTy9s_DjeTwxT3pg_iacEh8yMb_S4m"
+	Work = "6Q49MhFWRE3Ne6MycwAotgl1GtoU5WCHqJNWG2byYZCY-zX-IxPrWiKk7bKkNzhE"
+)

View File

@@ -0,0 +1,10 @@
+package expected
+
+const (
+	Offline = "Z6yXE5gOJScL3srmnVMWgCXccDiUNZ5snSrf6RkXuU1_U0rX_kGVwsfHUgNG_awd"
+	OvlRoot = "zYXJHFRLuxvUhuisZEXgGgVvdQd6piMfp5jmtT6jdVjvC2gICXquOq-UTwlrSD5I"
+	Layers = "_F8EDazHbcLeT0sVSQXRN_kn9IjduqJcDYgzXpsT-hpKU4EBcZ0PISN2zchpqMbm"
+	Net = "CA_FAaSIYJgapBEHV40doxpH23PdUEy_6s1TZc7wfSPN0XYqwGpMceXXDSabGveO"
+	Promote = "_3LPrLp--4h9k4GsNNApu9hHtAafq-GUhfU6d4hJKBDKT3bz_szOsvkXxc5sK53d"
+	Work = "FEgHeiCD_WT4wsfB-9kDH5n6cRWCEYtJmXdKZgmUUukAOoXumH_hLlosXREC-tqq"
+)

View File

@@ -76,6 +76,9 @@ type IContext struct {
 	// Written to by various methods, should be zeroed after [Artifact.Params]
 	// returns and must not be exposed directly.
 	w io.Writer
+
+	// Optional [Artifact] to cureRes cache, replaces [IRKindIdent] with
+	// checksum values if non-nil.
+	inputs map[Artifact]cureRes
 }

 // irZero is a zero IR word.
@@ -163,7 +166,15 @@ func (i *IContext) WriteIdent(a Artifact) {
 	defer i.ic.putIdentBuf(buf)
 	IRKindIdent.encodeHeader(0).put(buf[:])
-	*(*ID)(buf[wordSize:]) = i.ic.Ident(a).Value()
+	if i.inputs != nil {
+		res, ok := i.inputs[a]
+		if !ok {
+			panic(InvalidLookupError(i.ic.Ident(a).Value()))
+		}
+		*(*ID)(buf[wordSize:]) = res.checksum.Value()
+	} else {
+		*(*ID)(buf[wordSize:]) = i.ic.Ident(a).Value()
+	}
 	i.mustWrite(buf[:])
 }
@@ -207,19 +218,44 @@ func (i *IContext) WriteString(s string) {
 // Encode writes a deterministic, efficient representation of a to w and returns
 // the first non-nil error encountered while writing to w.
 func (ic *irCache) Encode(w io.Writer, a Artifact) (err error) {
+	return ic.encode(w, a, nil)
+}
+
+// encode implements Encode but replaces identifiers with their cured checksums
+// for a non-nil ident. Caller must acquire Cache.identMu.
+func (ic *irCache) encode(
+	w io.Writer,
+	a Artifact,
+	inputs map[Artifact]cureRes,
+) (err error) {
 	deps := a.Dependencies()
 	idents := make([]*extIdent, len(deps))
-	for i, d := range deps {
-		dbuf, did := ic.unsafeIdent(d, true)
-		if dbuf == nil {
-			dbuf = ic.getIdentBuf()
-			binary.LittleEndian.PutUint64(dbuf[:], uint64(d.Kind()))
-			*(*ID)(dbuf[wordSize:]) = did.Value()
-		} else {
-			ic.storeIdent(d, dbuf)
+	if inputs == nil {
+		for i, d := range deps {
+			dbuf, did := ic.unsafeIdent(d, true)
+			if dbuf == nil {
+				dbuf = ic.getIdentBuf()
+				binary.LittleEndian.PutUint64(dbuf[:], uint64(d.Kind()))
+				*(*ID)(dbuf[wordSize:]) = did.Value()
+			} else {
+				ic.storeIdent(d, dbuf)
+			}
+			defer ic.putIdentBuf(dbuf)
+			idents[i] = dbuf
+		}
+	} else {
+		for i, d := range deps {
+			res, ok := inputs[d]
+			if !ok {
+				return InvalidLookupError(ic.Ident(d).Value())
+			}
+			dbuf := ic.getIdentBuf()
+			binary.LittleEndian.PutUint64(dbuf[:], uint64(d.Kind()))
+			*(*ID)(dbuf[wordSize:]) = res.checksum.Value()
+			defer ic.putIdentBuf(dbuf)
+			idents[i] = dbuf
 		}
-		defer ic.putIdentBuf(dbuf)
-		idents[i] = dbuf
 	}
 	slices.SortFunc(idents, func(a, b *extIdent) int {
 		return bytes.Compare(a[:], b[:])
@@ -244,7 +280,7 @@ func (ic *irCache) Encode(w io.Writer, a Artifact) (err error) {
 	}
 	func() {
-		i := IContext{ic, w}
+		i := IContext{ic, w, inputs}
 		defer panicToError(&err)
 		defer func() { i.ic, i.w = nil, nil }()

View File

@@ -39,7 +39,7 @@ func TestIRRoundtrip(t *testing.T) {
 		)},
 		{"exec offline", pkg.NewExec(
-			"exec-offline", "", nil, 0, false,
+			"exec-offline", "", nil, 0, false, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -61,7 +61,7 @@ func TestIRRoundtrip(t *testing.T) {
 		{"exec net", pkg.NewExec(
 			"exec-net", "",
 			(*pkg.Checksum)(bytes.Repeat([]byte{0xfc}, len(pkg.Checksum{}))),
-			0, false,
+			0, false, false,
 			pkg.AbsWork,
 			[]string{"HAKUREI_TEST=1"},
 			check.MustAbs("/opt/bin/testtool"),
@@ -80,6 +80,28 @@ func TestIRRoundtrip(t *testing.T) {
 			)),
 		)},
+		{"exec measured", pkg.NewExec(
+			"exec-measured", "",
+			(*pkg.Checksum)(bytes.Repeat([]byte{0xfd}, len(pkg.Checksum{}))),
+			0, false, false,
+			pkg.AbsWork,
+			[]string{"HAKUREI_TEST=1"},
+			check.MustAbs("/opt/bin/testtool"),
+			[]string{"testtool", "measured"},
+			pkg.MustPath("/file", false, pkg.NewFile("file", []byte(
+				"stub file",
+			))), pkg.MustPath("/.hakurei", false, pkg.NewHTTPGetTar(
+				nil, "file:///hakurei.tar",
+				pkg.Checksum(bytes.Repeat([]byte{0xfd}, len(pkg.Checksum{}))),
+				pkg.TarUncompressed,
+			)), pkg.MustPath("/opt", false, pkg.NewHTTPGetTar(
+				nil, "file:///testtool.tar.gz",
+				pkg.Checksum(bytes.Repeat([]byte{0xfd}, len(pkg.Checksum{}))),
+				pkg.TarGzip,
+			)),
+		)},
 		{"file anonymous", pkg.NewFile("", []byte{0})},
 		{"file", pkg.NewFile("stub", []byte("stub"))},
 	}
@@ -110,6 +132,7 @@ func TestIRRoundtrip(t *testing.T) {
 		".": {Mode: fs.ModeDir | 0700},
 		"checksum": {Mode: fs.ModeDir | 0700},
 		"identifier": {Mode: fs.ModeDir | 0700},
+		"substitute": {Mode: fs.ModeDir | 0700},
 		"work": {Mode: fs.ModeDir | 0700},
 	},
 }

View File

@@ -90,6 +90,7 @@ func TestHTTPGet(t *testing.T) {
".": {Mode: fs.ModeDir | 0700}, ".": {Mode: fs.ModeDir | 0700},
"checksum": {Mode: fs.ModeDir | 0700}, "checksum": {Mode: fs.ModeDir | 0700},
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -159,6 +160,8 @@ func TestHTTPGet(t *testing.T) {
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"identifier/oM-2pUlk-mOxK1t3aMWZer69UdOQlAXiAgMrpZ1476VoOqpYVP1aGFS9_HYy-D8_": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/fLYGIMHgN1louE-JzITJZJo2SDniPu-IHBXubtvQWFO-hXnDVKNuscV7-zlyr5fU")}, "identifier/oM-2pUlk-mOxK1t3aMWZer69UdOQlAXiAgMrpZ1476VoOqpYVP1aGFS9_HYy-D8_": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/fLYGIMHgN1louE-JzITJZJo2SDniPu-IHBXubtvQWFO-hXnDVKNuscV7-zlyr5fU")},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
}) })

View File

@@ -4,6 +4,7 @@ package pkg
import ( import (
"bufio" "bufio"
"bytes" "bytes"
"cmp"
"context" "context"
"crypto/sha512" "crypto/sha512"
"encoding/base64" "encoding/base64"
@@ -15,6 +16,7 @@ import (
"io/fs" "io/fs"
"maps" "maps"
"os" "os"
"os/signal"
"path/filepath" "path/filepath"
"runtime" "runtime"
"slices" "slices"
@@ -24,6 +26,7 @@ import (
"sync/atomic" "sync/atomic"
"syscall" "syscall"
"testing" "testing"
"time"
"unique" "unique"
"unsafe" "unsafe"
@@ -247,7 +250,14 @@ func (t *TContext) destroy(errP *error) {
*errP = errors.Join(*errP, err) *errP = errors.Join(*errP, err)
} }
if *errP != nil { if *errP != nil {
*errP = errors.Join(*errP, os.Remove(t.statusPath.String())) *errP = errors.Join(*errP, os.Rename(
t.statusPath.String(), t.cache.base.Append(
dirFault,
t.ids+"."+strconv.FormatUint(uint64(
time.Now().UnixNano(),
), 10),
).String(),
))
} }
t.status = nil t.status = nil
} }
@@ -508,33 +518,34 @@ const (
) )
const ( const (
// fileLock is the file name appended to Cache.base for guaranteeing // fileLock is the lock file for exclusive access to the cache directory.
// exclusive access to the cache directory.
fileLock = "lock" fileLock = "lock"
// fileVariant is the file name appended to Cache.base holding the variant // fileVariant is a file holding the variant identification string set by a
// identification string set by a prior call to [SetExtension]. // prior call to [SetExtension].
fileVariant = "variant" fileVariant = "variant"
// dirIdentifier is the directory name appended to Cache.base for storing // dirSubstitute holds symlinks to artifacts by checksum, named after their
// artifacts named after their [ID]. // substitute identifier.
dirSubstitute = "substitute"
// dirIdentifier holds symlinks to artifacts by checksum, named after their
// IR-based identifier.
dirIdentifier = "identifier" dirIdentifier = "identifier"
// dirChecksum is the directory name appended to Cache.base for storing // dirChecksum holds artifacts named after their [Checksum].
// artifacts named after their [Checksum].
dirChecksum = "checksum" dirChecksum = "checksum"
// dirStatus is the directory name appended to Cache.base for storing // dirStatus holds artifact metadata and logs named after their IR-based
// artifact metadata and logs named after their [ID]. // identifier. For [FloodArtifact], the same file is also available under
// its substitute identifier.
dirStatus = "status" dirStatus = "status"
// dirFault holds status files of faulted cures.
dirFault = "fault"
// dirWork is the directory name appended to Cache.base for working // dirWork holds working pathnames set up during [Cache.Cure].
// pathnames set up during [Cache.Cure].
dirWork = "work" dirWork = "work"
// dirTemp is the directory name appended to Cache.base for scratch space // dirTemp holds scratch space allocated during [Cache.Cure].
// pathnames allocated during [Cache.Cure].
dirTemp = "temp" dirTemp = "temp"
// dirExecScratch is the directory name appended to Cache.base for scratch // dirExecScratch is scratch space set up for the container started by
// space setting up the container started by [Cache.EnterExec]. Exclusivity // [Cache.EnterExec]. Exclusivity via Cache.inExec.
// via Cache.inExec.
dirExecScratch = "scratch" dirExecScratch = "scratch"
// checksumLinknamePrefix is prepended to the encoded [Checksum] value // checksumLinknamePrefix is prepended to the encoded [Checksum] value
@@ -621,6 +632,9 @@ const (
// CSuppressInit arranges for verbose output of the container init to be // CSuppressInit arranges for verbose output of the container init to be
// suppressed regardless of [message.Msg] state. // suppressed regardless of [message.Msg] state.
CSuppressInit CSuppressInit
// CIgnoreSubstitutes disables content-based dependency substitution.
CIgnoreSubstitutes
) )
// toplevel holds [context.WithCancel] over caller-supplied context, where all // toplevel holds [context.WithCancel] over caller-supplied context, where all
@@ -676,6 +690,11 @@ type Cache struct {
// Synchronises access to dirChecksum. // Synchronises access to dirChecksum.
checksumMu sync.RWMutex checksumMu sync.RWMutex
// Presence of an alternative in the cache. Keys are not valid identifiers
// and must not be used as such.
substitute map[unique.Handle[ID]]unique.Handle[Checksum]
// Synchronises access to substitute and corresponding filesystem entries.
substituteMu sync.RWMutex
// Identifier to content pair cache. // Identifier to content pair cache.
ident map[unique.Handle[ID]]unique.Handle[Checksum] ident map[unique.Handle[ID]]unique.Handle[Checksum]
// Identifier to error pair for unrecoverably faulted [Artifact]. // Identifier to error pair for unrecoverably faulted [Artifact].
@@ -886,11 +905,14 @@ func (c *Cache) Scrub(checks int) error {
checks = runtime.NumCPU() checks = runtime.NumCPU()
} }
c.substituteMu.Lock()
defer c.substituteMu.Unlock()
c.identMu.Lock() c.identMu.Lock()
defer c.identMu.Unlock() defer c.identMu.Unlock()
c.checksumMu.Lock() c.checksumMu.Lock()
defer c.checksumMu.Unlock() defer c.checksumMu.Unlock()
c.substitute = make(map[unique.Handle[ID]]unique.Handle[Checksum])
c.ident = make(map[unique.Handle[ID]]unique.Handle[Checksum]) c.ident = make(map[unique.Handle[ID]]unique.Handle[Checksum])
c.identErr = make(map[unique.Handle[ID]]error) c.identErr = make(map[unique.Handle[ID]]error)
c.artifact.Clear() c.artifact.Clear()
@@ -998,47 +1020,52 @@ func (c *Cache) Scrub(checks int) error {
wg.Wait() wg.Wait()
} }
dir = c.base.Append(dirIdentifier) for _, suffix := range []string{
if entries, readdirErr := os.ReadDir(dir.String()); readdirErr != nil { dirSubstitute,
addErr(dir, readdirErr) dirIdentifier,
} else { } {
wg.Add(len(entries)) dir = c.base.Append(suffix)
for _, ent := range entries { if entries, readdirErr := os.ReadDir(dir.String()); readdirErr != nil {
w <- checkEntry{ent, func(ent os.DirEntry, want *Checksum) bool { addErr(dir, readdirErr)
got := p.Get().(*Checksum) } else {
defer p.Put(got) wg.Add(len(entries))
for _, ent := range entries {
w <- checkEntry{ent, func(ent os.DirEntry, want *Checksum) bool {
got := p.Get().(*Checksum)
defer p.Put(got)
pathname := dir.Append(ent.Name()) pathname := dir.Append(ent.Name())
if linkname, err := os.Readlink( if linkname, err := os.Readlink(
pathname.String(), pathname.String(),
); err != nil { ); err != nil {
seMu.Lock() seMu.Lock()
se.Errs[pathname.Handle()] = append(se.Errs[pathname.Handle()], err) se.Errs[pathname.Handle()] = append(se.Errs[pathname.Handle()], err)
se.DanglingIdentifiers = append(se.DanglingIdentifiers, *want) se.DanglingIdentifiers = append(se.DanglingIdentifiers, *want)
seMu.Unlock() seMu.Unlock()
return false return false
} else if err = Decode(got, filepath.Base(linkname)); err != nil { } else if err = Decode(got, filepath.Base(linkname)); err != nil {
seMu.Lock() seMu.Lock()
lnp := dir.Append(linkname) lnp := dir.Append(linkname)
se.Errs[lnp.Handle()] = append(se.Errs[lnp.Handle()], err) se.Errs[lnp.Handle()] = append(se.Errs[lnp.Handle()], err)
se.DanglingIdentifiers = append(se.DanglingIdentifiers, *want) se.DanglingIdentifiers = append(se.DanglingIdentifiers, *want)
seMu.Unlock() seMu.Unlock()
return false return false
}
if _, err := os.Stat(pathname.String()); err != nil {
if !errors.Is(err, os.ErrNotExist) {
addErr(pathname, err)
} }
seMu.Lock()
se.DanglingIdentifiers = append(se.DanglingIdentifiers, *want) if _, err := os.Stat(pathname.String()); err != nil {
seMu.Unlock() if !errors.Is(err, os.ErrNotExist) {
return false addErr(pathname, err)
} }
return true seMu.Lock()
}} se.DanglingIdentifiers = append(se.DanglingIdentifiers, *want)
seMu.Unlock()
return false
}
return true
}}
}
wg.Wait()
} }
wg.Wait()
} }
dir = c.base.Append(dirStatus) dir = c.base.Append(dirStatus)
@@ -1186,6 +1213,52 @@ func (c *Cache) finaliseIdent(
close(done) close(done)
} }
// zeroChecksum is a zero [Checksum] handle, used for comparison only.
var zeroChecksum unique.Handle[Checksum]
// loadSubstitute returns a checksum corresponding to a substitute identifier,
// or zeroChecksum if an alternative is not available.
func (c *Cache) loadSubstitute(
substitute unique.Handle[ID],
) (unique.Handle[Checksum], error) {
c.substituteMu.RLock()
if checksum, ok := c.substitute[substitute]; ok {
c.substituteMu.RUnlock()
return checksum, nil
}
linkname, err := os.Readlink(c.base.Append(
dirSubstitute,
Encode(substitute.Value()),
).String())
c.substituteMu.RUnlock()
if err != nil {
if !errors.Is(err, os.ErrNotExist) {
return zeroChecksum, err
}
c.substituteMu.Lock()
c.substitute[substitute] = zeroChecksum
c.substituteMu.Unlock()
return zeroChecksum, nil
}
var checksum unique.Handle[Checksum]
buf := c.getIdentBuf()
err = Decode((*Checksum)(buf[:]), filepath.Base(linkname))
if err == nil {
checksum = unique.Make(Checksum(buf[:]))
c.substituteMu.Lock()
c.substitute[substitute] = checksum
c.substituteMu.Unlock()
}
c.putIdentBuf(buf)
return checksum, err
}
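loadSubstitute follows a double-checked read-through pattern: a fast path under RLock, a filesystem read on miss, and a write-locked store that also caches negative results (zeroChecksum) so the filesystem is not consulted again. A self-contained sketch of the same pattern; the names `substituteCache`, `lookup`, and `backing` are invented here, not from the package:

```go
package main

import (
	"fmt"
	"sync"
)

// substituteCache memoises lookups against a slower backing store,
// caching misses as the empty string.
type substituteCache struct {
	mu      sync.RWMutex
	entries map[string]string
	backing func(key string) (string, bool)
}

func (c *substituteCache) lookup(key string) string {
	// Fast path: shared lock over the in-memory map.
	c.mu.RLock()
	if v, ok := c.entries[key]; ok {
		c.mu.RUnlock()
		return v
	}
	c.mu.RUnlock()

	// Slow path: consult the backing store outside any lock.
	v, ok := c.backing(key)
	if !ok {
		v = "" // cache the miss so the backing store is not hit again
	}
	c.mu.Lock()
	c.entries[key] = v
	c.mu.Unlock()
	return v
}

func main() {
	calls := 0
	c := &substituteCache{
		entries: make(map[string]string),
		backing: func(key string) (string, bool) {
			calls++
			return "checksum-a", key == "id-a"
		},
	}
	fmt.Println(c.lookup("id-a")) // checksum-a, from the backing store
	fmt.Println(c.lookup("id-a")) // checksum-a, cached
	c.lookup("id-b")              // miss, cached as ""
	c.lookup("id-b")
	fmt.Println(calls) // 2
}
```

The tradeoff mirrored here: releasing the read lock before the slow path allows two goroutines to race on the same key, which is harmless for an idempotent store and avoids holding a lock across I/O.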
// Done returns a channel that is closed when the ongoing cure of an [Artifact] // Done returns a channel that is closed when the ongoing cure of an [Artifact]
// referred to by the specified identifier completes. Done may return nil if // referred to by the specified identifier completes. Done may return nil if
// no ongoing cure of the specified identifier exists. // no ongoing cure of the specified identifier exists.
@@ -1420,8 +1493,8 @@ func (c *Cache) Cure(a Artifact) (
// CureError wraps a non-nil error returned attempting to cure an [Artifact]. // CureError wraps a non-nil error returned attempting to cure an [Artifact].
type CureError struct { type CureError struct {
Ident unique.Handle[ID] A   Artifact
Err error Err error
} }
// Unwrap returns the underlying error. // Unwrap returns the underlying error.
@@ -1434,40 +1507,63 @@ func (e *CureError) Error() string { return e.Err.Error() }
type DependencyCureError []*CureError type DependencyCureError []*CureError
// unwrapM recursively expands underlying errors into a caller-supplied map. // unwrapM recursively expands underlying errors into a caller-supplied map.
func (e *DependencyCureError) unwrapM(me map[unique.Handle[ID]]*CureError) { func (e *DependencyCureError) unwrapM(
ctx context.Context,
ir *IRCache,
me map[unique.Handle[ID]]*CureError,
) {
for _, err := range *e { for _, err := range *e {
if _, ok := me[err.Ident]; ok { if ctx.Err() != nil {
break
}
id := ir.Ident(err.A)
if _, ok := me[id]; ok {
continue continue
} }
if _e, ok := err.Err.(*DependencyCureError); ok { if _e, ok := err.Err.(*DependencyCureError); ok {
_e.unwrapM(me) _e.unwrapM(ctx, ir, me)
continue continue
} }
me[err.Ident] = err me[id] = err
} }
} }
// unwrap recursively expands and deduplicates underlying errors. // unwrap recursively expands and deduplicates underlying errors.
func (e *DependencyCureError) unwrap() DependencyCureError { func (e *DependencyCureError) unwrap(
ctx context.Context,
ir *IRCache,
) DependencyCureError {
me := make(map[unique.Handle[ID]]*CureError) me := make(map[unique.Handle[ID]]*CureError)
e.unwrapM(me) e.unwrapM(ctx, ir, me)
errs := slices.AppendSeq( type ent struct {
make(DependencyCureError, 0, len(me)), id unique.Handle[ID]
maps.Values(me), err *CureError
) }
errs := make([]*ent, 0, len(me))
for id, err := range me {
errs = append(errs, &ent{id, err})
}
var identBuf [2]ID var identBuf [2]ID
slices.SortFunc(errs, func(a, b *CureError) int { slices.SortFunc(errs, func(a, b *ent) int {
identBuf[0], identBuf[1] = a.Ident.Value(), b.Ident.Value() identBuf[0], identBuf[1] = a.id.Value(), b.id.Value()
return slices.Compare(identBuf[0][:], identBuf[1][:]) return slices.Compare(identBuf[0][:], identBuf[1][:])
}) })
return errs _errs := make(DependencyCureError, len(errs))
for i, v := range errs {
_errs[i] = v.err
}
return _errs
} }
// Unwrap returns a deduplicated slice of underlying errors. // Unwrap returns a deduplicated slice of underlying errors.
func (e *DependencyCureError) Unwrap() []error { func (e *DependencyCureError) Unwrap() []error {
errs := e.unwrap() ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
defer cancel()
errs := e.unwrap(ctx, NewIR())
_errs := make([]error, len(errs)) _errs := make([]error, len(errs))
for i, err := range errs { for i, err := range errs {
_errs[i] = err _errs[i] = err
@@ -1477,14 +1573,23 @@ func (e *DependencyCureError) Unwrap() []error {
// Error returns a user-facing multiline error message. // Error returns a user-facing multiline error message.
func (e *DependencyCureError) Error() string { func (e *DependencyCureError) Error() string {
errs := e.unwrap() ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
defer cancel()
ir := NewIR()
errs := e.unwrap(ctx, ir)
if len(errs) == 0 { if len(errs) == 0 {
return "invalid dependency cure outcome" return "invalid dependency cure outcome"
} }
var buf strings.Builder var buf strings.Builder
buf.WriteString("errors curing dependencies:") buf.WriteString("errors curing dependencies:")
for _, err := range errs { for _, err := range errs {
buf.WriteString("\n\t" + Encode(err.Ident.Value()) + ": " + err.Error()) buf.WriteString("\n\t" +
reportName(err.A, ir.Ident(err.A)) + ": " +
err.Error())
}
if ctx.Err() != nil {
buf.WriteString("\nerror resolution cancelled")
} }
return buf.String() return buf.String()
} }
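unwrap collects errors into a map keyed by identifier to deduplicate them, then sorts by the identifier's byte value so the multiline report is deterministic. The same dedup-then-sort shape, reduced to a toy ID type; all names in this sketch are illustrative:

```go
package main

import (
	"fmt"
	"slices"
)

type id [4]byte

// dedupSort returns messages ordered by their identifier bytes; the map
// input has already collapsed duplicate identifiers.
func dedupSort(errs map[id]string) []string {
	type ent struct {
		id  id
		msg string
	}
	ents := make([]*ent, 0, len(errs))
	for k, v := range errs {
		ents = append(ents, &ent{k, v})
	}
	// Byte-wise comparison yields a stable order independent of map
	// iteration order.
	slices.SortFunc(ents, func(a, b *ent) int {
		return slices.Compare(a.id[:], b.id[:])
	})
	out := make([]string, len(ents))
	for i, e := range ents {
		out[i] = e.msg
	}
	return out
}

func main() {
	fmt.Println(dedupSort(map[id]string{
		{0xff}: "late",
		{0x01}: "early",
	})) // [early late]
}
```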
@@ -1654,16 +1759,44 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
return return
} }
var checksums string var (
checksums string
substitute unique.Handle[ID]
alternative *check.Absolute
)
defer func() { defer func() {
if err == nil && checksums != "" { if err == nil && checksums != "" {
linkname := checksumLinknamePrefix + checksums
err = os.Symlink( err = os.Symlink(
checksumLinknamePrefix+checksums, linkname,
pathname.String(), pathname.String(),
) )
if err == nil { if err == nil {
err = zeroTimes(pathname.String()) err = zeroTimes(pathname.String())
} }
if err == nil && alternative != nil {
c.substituteMu.Lock()
err = os.Symlink(
linkname,
alternative.String(),
)
if errors.Is(err, os.ErrExist) {
c.msg.Verbosef(
"creating alternative over %s for artifact %s",
Encode(substitute.Value()), ids,
)
err = nil
}
if err == nil {
err = zeroTimes(alternative.String())
}
if err == nil && checksum != zeroChecksum {
c.substitute[substitute] = checksum
}
c.substituteMu.Unlock()
}
} }
}() }()
@@ -1860,11 +1993,64 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
f.deps[deps[i]] = p f.deps[deps[i]] = p
} }
sh := sha512.New384()
err = c.encode(sh, a, f.deps)
if err != nil {
return
}
buf := c.getIdentBuf()
sh.Sum(buf[wordSize:wordSize])
substitute = unique.Make(ID(buf[wordSize:]))
substitutes := Encode(substitute.Value())
c.putIdentBuf(buf)
alternative = c.base.Append(
dirSubstitute,
substitutes,
)
if c.flags&CIgnoreSubstitutes == 0 {
var substituteChecksum unique.Handle[Checksum]
substituteChecksum, err = c.loadSubstitute(substitute)
if err != nil {
return
}
if substituteChecksum != zeroChecksum {
checksum = substituteChecksum
checksums = Encode(checksum.Value())
checksumPathname = c.base.Append(
dirChecksum,
checksums,
)
if _, err = os.Lstat(c.base.Append(
dirStatus,
substitutes,
).String()); err == nil {
err = os.Symlink(substitutes, c.base.Append(
dirStatus,
ids,
).String())
} else if errors.Is(err, os.ErrNotExist) {
err = nil
}
return
}
}
defer f.destroy(&err) defer f.destroy(&err)
if err = c.enterCure(a, curesExempt); err != nil { if err = c.enterCure(a, curesExempt); err != nil {
return return
} }
err = ca.Cure(&f) err = ca.Cure(&f)
if err == nil && f.status != nil {
err = os.Link(c.base.Append(
dirStatus,
ids,
).String(), c.base.Append(
dirStatus,
substitutes,
).String())
}
c.exitCure(a, curesExempt) c.exitCure(a, curesExempt)
if err != nil { if err != nil {
return return
@@ -1957,7 +2143,7 @@ func (pending *pendingArtifactDep) cure(c *Cache) {
} }
pending.errsMu.Lock() pending.errsMu.Lock()
*pending.errs = append(*pending.errs, &CureError{c.Ident(pending.a), err}) *pending.errs = append(*pending.errs, &CureError{pending.a, err})
pending.errsMu.Unlock() pending.errsMu.Unlock()
} }
@@ -1973,6 +2159,52 @@ func (c *Cache) OpenStatus(a Artifact) (r io.ReadSeekCloser, err error) {
return return
} }
// Fault holds the pathname and termination time of an [Artifact] fault entry.
type Fault struct {
*check.Absolute
t uint64
}
// Time returns the instant at which the fault occurred.
func (f Fault) Time() time.Time { return time.Unix(0, int64(f.t)) }
// Open opens the underlying entry for reading.
func (f Fault) Open() (io.ReadCloser, error) { return os.Open(f.Absolute.String()) }
// Destroy removes the underlying fault entry.
func (f Fault) Destroy() error { return os.Remove(f.Absolute.String()) }
// ReadFaults returns fault entries for an [Artifact].
func (c *Cache) ReadFaults(a Artifact) (faults []Fault, err error) {
prefix := Encode(c.Ident(a).Value()) + "."
var dents []os.DirEntry
if dents, err = os.ReadDir(c.base.Append(dirFault).String()); err != nil {
return
}
for _, dent := range dents {
name := dent.Name()
if !strings.HasPrefix(name, prefix) {
continue
}
var t uint64
t, err = strconv.ParseUint(name[len(prefix):], 10, 64)
if err != nil {
return
}
faults = append(faults, Fault{c.base.Append(
dirFault,
name,
), t})
}
slices.SortFunc(faults, func(a, b Fault) int {
return cmp.Compare(a.t, b.t)
})
return
}
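ReadFaults relies on fault entries being named `<encoded identifier>.<unix nanoseconds>`, matching how destroy renames a faulted status file. A small sketch of parsing that naming scheme back into a timestamp; `parseFaultName` and the stub prefix are invented for illustration:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseFaultName splits a fault entry name of the form "<prefix><nanos>"
// and recovers the termination time, rejecting foreign or malformed names.
func parseFaultName(name, prefix string) (time.Time, bool) {
	if !strings.HasPrefix(name, prefix) {
		return time.Time{}, false
	}
	t, err := strconv.ParseUint(name[len(prefix):], 10, 64)
	if err != nil {
		return time.Time{}, false
	}
	return time.Unix(0, int64(t)), true
}

func main() {
	// Round-trip: format a nanosecond timestamp into a name, then parse it.
	nanos := time.Unix(1700000000, 42).UnixNano()
	name := "stub-ident." + strconv.FormatUint(uint64(nanos), 10)
	if when, ok := parseFaultName(name, "stub-ident."); ok {
		fmt.Println(when.UnixNano()) // 1700000000000000042
	}
}
```

Decimal nanosecond suffixes sort lexicographically only within equal widths, which is why ReadFaults sorts the parsed values with cmp.Compare instead of relying on directory order.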
// Abort cancels all pending cures and waits for them to clean up, but does not // Abort cancels all pending cures and waits for them to clean up, but does not
// close the cache. // close the cache.
func (c *Cache) Abort() { func (c *Cache) Abort() {
@@ -2072,9 +2304,11 @@ func open(
} }
for _, name := range []string{ for _, name := range []string{
dirSubstitute,
dirIdentifier, dirIdentifier,
dirChecksum, dirChecksum,
dirStatus, dirStatus,
dirFault,
dirWork, dirWork,
} { } {
if err := os.MkdirAll( if err := os.MkdirAll(
@@ -2097,6 +2331,7 @@ func open(
irCache: zeroIRCache(), irCache: zeroIRCache(),
substitute: make(map[unique.Handle[ID]]unique.Handle[Checksum]),
ident: make(map[unique.Handle[ID]]unique.Handle[Checksum]), ident: make(map[unique.Handle[ID]]unique.Handle[Checksum]),
identErr: make(map[unique.Handle[ID]]error), identErr: make(map[unique.Handle[ID]]error),
identPending: make(map[unique.Handle[ID]]*pendingCure), identPending: make(map[unique.Handle[ID]]*pendingCure),

View File

@@ -21,6 +21,7 @@ import (
"syscall" "syscall"
"testing" "testing"
"testing/fstest" "testing/fstest"
"time"
"unique" "unique"
"unsafe" "unsafe"
@@ -244,6 +245,41 @@ func newDestroyArtifactFunc(a pkg.Artifact) func(
} }
} }
// destroyStatus counts non-substitution status entries and destroys them.
func destroyStatus(t *testing.T, base *check.Absolute, c, s int) {
dents, err := os.ReadDir(base.Append("status").String())
if err != nil {
t.Fatal(err)
}
var gotC, gotS int
for _, dent := range dents {
if err = os.Remove(base.Append(
"status",
dent.Name(),
).String()); err != nil {
t.Fatal(err)
}
if dent.Type().IsRegular() {
gotC++
continue
}
if dent.Type()&fs.ModeSymlink == fs.ModeSymlink {
gotS++
continue
}
t.Errorf("%s: %s", dent.Name(), dent.Type())
}
if gotC != c {
t.Errorf("status: c = %d, want %d", gotC, c)
}
if gotS != s {
t.Errorf("status: s = %d, want %d", gotS, s)
}
}
func TestIdent(t *testing.T) { func TestIdent(t *testing.T) {
t.Parallel() t.Parallel()
@@ -392,6 +428,12 @@ type cacheTestCase struct {
want expectsFS want expectsFS
} }
const (
// checkDestroySubstitutes arranges for substitutes to be destroyed before
// measurement during checkWithCache.
checkDestroySubstitutes = 1 << (iota + 32)
)
// checkWithCache runs a slice of cacheTestCase. // checkWithCache runs a slice of cacheTestCase.
func checkWithCache(t *testing.T, testCases []cacheTestCase) { func checkWithCache(t *testing.T, testCases []cacheTestCase) {
t.Helper() t.Helper()
@@ -472,10 +514,27 @@ func checkWithCache(t *testing.T, testCases []cacheTestCase) {
} }
} }
// destroy non-deterministic status files // destroy non-deterministic substitutes
if err := os.RemoveAll(base.Append("status").String()); err != nil { if tc.flags&checkDestroySubstitutes != 0 {
substitute := base.Append("substitute")
if err := os.RemoveAll(substitute.String()); err != nil {
t.Fatal(err)
} else if err = os.Mkdir(substitute.String(), 0700); err != nil {
t.Fatal(err)
}
}
// destroy empty status directory
if err := syscall.Rmdir(base.Append("status").String()); err != nil {
t.Error(expectsFrom(base.Append("status").String()))
t.Fatal(err) t.Fatal(err)
} }
// destroy empty fault directory
if err := os.Remove(base.Append("fault").String()); err != nil {
t.Fatal(err)
}
want := tc.want.hash() want := tc.want.hash()
var checksum pkg.Checksum var checksum pkg.Checksum
@@ -539,6 +598,21 @@ func cureMany(t *testing.T, c *pkg.Cache, steps []cureStep) {
for _, step := range steps { for _, step := range steps {
t.Log("cure step:", step.name) t.Log("cure step:", step.name)
if pathname, checksum, err := c.Cure(step.a); !reflect.DeepEqual(err, step.err) { if pathname, checksum, err := c.Cure(step.a); !reflect.DeepEqual(err, step.err) {
faults, _err := c.ReadFaults(step.a)
if _err != nil {
t.Errorf("ReadFaults: error = %v", _err)
}
var p []byte
for _, fault := range faults {
p, _err = os.ReadFile(fault.String())
if _err != nil {
t.Error(_err)
continue
}
t.Log(string(p))
t.Logf("faulting cure terminated %s ago", time.Since(fault.Time()))
}
t.Fatalf("Cure: error = %v, want %v", err, step.err) t.Fatalf("Cure: error = %v, want %v", err, step.err)
} else if step.pathname != ignorePathname && !pathname.Is(step.pathname) { } else if step.pathname != ignorePathname && !pathname.Is(step.pathname) {
t.Fatalf("Cure: pathname = %q, want %q", pathname, step.pathname) t.Fatalf("Cure: pathname = %q, want %q", pathname, step.pathname)
@@ -635,6 +709,15 @@ func TestCache(t *testing.T) {
"identifier", "identifier",
"cafebabecafebabecafebabecafebabecafebabecafebabecafebabecafebabe", "cafebabecafebabecafebabecafebabecafebabecafebabecafebabecafebabe",
) )
failingFile := newStubFile(
pkg.KindHTTPGet,
pkg.ID{0xff, 3},
nil,
nil, struct {
_ []byte
stub.UniqueError
}{UniqueError: 0xbad},
)
cureMany(t, c, []cureStep{ cureMany(t, c, []cureStep{
{"initial file", newStubFile( {"initial file", newStubFile(
@@ -716,22 +799,14 @@ func TestCache(t *testing.T) {
{"noncomparable error", &stubArtifactF{ {"noncomparable error", &stubArtifactF{
kind: pkg.KindExec, kind: pkg.KindExec,
params: []byte("artifact with dependency returning noncomparable error"), params: []byte("artifact with dependency returning noncomparable error"),
deps: []pkg.Artifact{newStubFile( deps: []pkg.Artifact{failingFile},
pkg.KindHTTPGet,
pkg.ID{0xff, 3},
nil,
nil, struct {
_ []byte
stub.UniqueError
}{UniqueError: 0xbad},
)},
cure: func(f *pkg.FContext) error { cure: func(f *pkg.FContext) error {
panic("attempting to cure impossible artifact") panic("attempting to cure impossible artifact")
}, },
}, nil, nil, &pkg.DependencyCureError{ }, nil, nil, &pkg.DependencyCureError{
{ {
Ident: unique.Make(pkg.ID{0xff, 3}), A: failingFile,
Err: struct { Err: struct {
_ []byte _ []byte
stub.UniqueError stub.UniqueError
@@ -788,6 +863,8 @@ func TestCache(t *testing.T) {
"identifier/cafebabecafebabecafebabecafebabecafebabecafebabecafebabecafebabe": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/0bSFPu5Tnd-2Jj0Mv6co23PW2t3BmHc7eLFj9TgY3eIBg8zislo7xZYNBqovVLcq")}, "identifier/cafebabecafebabecafebabecafebabecafebabecafebabecafebabecafebabe": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/0bSFPu5Tnd-2Jj0Mv6co23PW2t3BmHc7eLFj9TgY3eIBg8zislo7xZYNBqovVLcq")},
"identifier/deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/0bSFPu5Tnd-2Jj0Mv6co23PW2t3BmHc7eLFj9TgY3eIBg8zislo7xZYNBqovVLcq")}, "identifier/deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/0bSFPu5Tnd-2Jj0Mv6co23PW2t3BmHc7eLFj9TgY3eIBg8zislo7xZYNBqovVLcq")},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -972,11 +1049,87 @@ func TestCache(t *testing.T) {
}}, nil, nil, pkg.InvalidFileModeError( }}, nil, nil, pkg.InvalidFileModeError(
fs.ModeSymlink | 0777, fs.ModeSymlink | 0777,
)}, )},
{"alternative", &stubArtifactF{
kind: pkg.KindExec,
params: []byte("substitutable artifact"),
deps: []pkg.Artifact{newStubFile(
pkg.KindHTTPGet,
pkg.ID{0xff, 8},
nil,
[]byte("substitutable dependency"),
nil,
)},
cure: func(f *pkg.FContext) error {
return makeSample(&f.TContext)
},
}, base.Append(
"identifier",
"xMDWovje7OfyIaDy_2VnjpKxRqSOQ_LoeD946t-3WsS2V2SeMJ7nDGrNfpa4Pbc-",
), want, nil},
{"substitutable", &stubArtifactF{
kind: pkg.KindExec,
params: []byte("substitutable artifact"),
deps: []pkg.Artifact{newStubFile(
pkg.KindHTTPGet,
pkg.ID{0xff, 10},
nil,
[]byte("substitutable dependency"),
nil,
)},
cure: func(f *pkg.FContext) error {
panic("substitution missed")
},
}, base.Append(
"identifier",
"k2ilgG5KQ9NXnMoT2oB6NdwOnSPRn_H24oXQc4l6qOYIxIG9XfuEczeyrR8UEv_f",
), want, nil},
}) })
if c0, err := unsafeOpen(
t.Context(),
message.New(nil),
0, 0, 0, base, false,
); err != nil {
t.Fatalf("open: error = %v", err)
} else {
t.Cleanup(c0.Close)
cureMany(t, c0, []cureStep{
{"substitutable", &stubArtifactF{
kind: pkg.KindExec,
params: []byte("substitutable artifact"),
deps: []pkg.Artifact{newStubFile(
pkg.KindHTTPGet,
pkg.ID{0xff, 0xff, 0xfd, 0xfd},
nil,
[]byte("substitutable dependency"),
nil,
)},
cure: func(f *pkg.FContext) error {
panic("substitution missed")
},
}, base.Append(
"identifier",
"_EmV5nsYZ2UWHgRmLDMU8i-rJWDx-kv5_1pFrzQI7vMMCM5mAXivO8UZtVfOqMR_",
), want, nil},
})
}
if dents, err := os.ReadDir(base.Append("status").String()); err != nil {
t.Fatal(err)
} else if len(dents) > 0 {
t.Errorf("ReadDir: %v", dents)
}
}, expectsFS{ }, expectsFS{
".": {Mode: fs.ModeDir | 0700}, ".": {Mode: fs.ModeDir | 0700},
"checksum": {Mode: fs.ModeDir | 0700}, "checksum": {Mode: fs.ModeDir | 0700},
"checksum/7lfQ4QwSpV8nw7IDh0JiQ_jqUPrPv3_Vfie034RxsSy-cy4vO8DVvxgpx2LW08oO": {Mode: 0400, Data: []byte("substitutable dependency")},
"checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b": {Mode: fs.ModeDir | 0500}, "checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b": {Mode: fs.ModeDir | 0500},
"checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b/check": {Mode: 0400, Data: []byte{0, 0}}, "checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b/check": {Mode: 0400, Data: []byte{0, 0}},
"checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b/lib": {Mode: fs.ModeDir | 0700}, "checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b/lib": {Mode: fs.ModeDir | 0700},
@@ -986,6 +1139,15 @@ func TestCache(t *testing.T) {
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"identifier/HnySzeLQvSBZuTUcvfmLEX_OmH4yJWWH788NxuLuv7kVn8_uPM6Ks4rqFWM2NZJY": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")}, "identifier/HnySzeLQvSBZuTUcvfmLEX_OmH4yJWWH788NxuLuv7kVn8_uPM6Ks4rqFWM2NZJY": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")},
"identifier/Zx5ZG9BAwegNT3zQwCySuI2ktCXxNgxirkGLFjW4FW06PtojYVaCdtEw8yuntPLa": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")}, "identifier/Zx5ZG9BAwegNT3zQwCySuI2ktCXxNgxirkGLFjW4FW06PtojYVaCdtEw8yuntPLa": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")},
"identifier/_EmV5nsYZ2UWHgRmLDMU8i-rJWDx-kv5_1pFrzQI7vMMCM5mAXivO8UZtVfOqMR_": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")},
"identifier/___9_QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/7lfQ4QwSpV8nw7IDh0JiQ_jqUPrPv3_Vfie034RxsSy-cy4vO8DVvxgpx2LW08oO")},
"identifier/_wgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/7lfQ4QwSpV8nw7IDh0JiQ_jqUPrPv3_Vfie034RxsSy-cy4vO8DVvxgpx2LW08oO")},
"identifier/_woAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/7lfQ4QwSpV8nw7IDh0JiQ_jqUPrPv3_Vfie034RxsSy-cy4vO8DVvxgpx2LW08oO")},
"identifier/k2ilgG5KQ9NXnMoT2oB6NdwOnSPRn_H24oXQc4l6qOYIxIG9XfuEczeyrR8UEv_f": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")},
"identifier/xMDWovje7OfyIaDy_2VnjpKxRqSOQ_LoeD946t-3WsS2V2SeMJ7nDGrNfpa4Pbc-": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")},
"substitute": {Mode: fs.ModeDir | 0700},
"substitute/OyBGorh72Z9kVw35JUa8FbqDbpR4DqT-MX1jic0uKN5PdYmUBiAF38BRsIRnBigf": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/qRN6in76LndiiOZJheHkwyW8UT1N5-f-bXvHfDvwrMw2fSkOoZdh8pWE1qhLk65b")},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -1061,6 +1223,7 @@ func TestCache(t *testing.T) {
".": {Mode: fs.ModeDir | 0700}, ".": {Mode: fs.ModeDir | 0700},
"checksum": {Mode: fs.ModeDir | 0700}, "checksum": {Mode: fs.ModeDir | 0700},
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -1119,6 +1282,7 @@ func TestCache(t *testing.T) {
".": {Mode: fs.ModeDir | 0700}, ".": {Mode: fs.ModeDir | 0700},
"checksum": {Mode: fs.ModeDir | 0700}, "checksum": {Mode: fs.ModeDir | 0700},
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -1180,6 +1344,8 @@ func TestCache(t *testing.T) {
"identifier/_wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/Aubi5EG4_Y8DhL9bQ3Q4HFBhLRF7X5gt9D3CNCQfT-TeBtlRXc7Zi_JYZEMoCC7M")}, "identifier/_wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/Aubi5EG4_Y8DhL9bQ3Q4HFBhLRF7X5gt9D3CNCQfT-TeBtlRXc7Zi_JYZEMoCC7M")},
"identifier/_wEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/Aubi5EG4_Y8DhL9bQ3Q4HFBhLRF7X5gt9D3CNCQfT-TeBtlRXc7Zi_JYZEMoCC7M")}, "identifier/_wEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/Aubi5EG4_Y8DhL9bQ3Q4HFBhLRF7X5gt9D3CNCQfT-TeBtlRXc7Zi_JYZEMoCC7M")},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -1230,6 +1396,7 @@ func TestCache(t *testing.T) {
".": {Mode: fs.ModeDir | 0700}, ".": {Mode: fs.ModeDir | 0700},
"checksum": {Mode: fs.ModeDir | 0700}, "checksum": {Mode: fs.ModeDir | 0700},
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
} }
@@ -1409,6 +1576,14 @@ errors during scrub:
func TestDependencyCureError(t *testing.T) {
t.Parallel()
makeIdent := func(ident ...byte) pkg.Artifact {
var a overrideIdent
copy(a.id[:], ident)
// does not compare equal
a.TrivialArtifact = new(stubArtifact)
return a
}
testCases := []struct {
name string
err pkg.DependencyCureError
@@ -1416,51 +1591,51 @@ func TestDependencyCureError(t *testing.T) {
unwrap []error
}{
{"simple", pkg.DependencyCureError{
-{Ident: unique.Make(pkg.ID{0xff, 9}), Err: stub.UniqueError(0xbad09)},
+{A: makeIdent(0xff, 9), Err: stub.UniqueError(0xbad09)},
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
-{Ident: unique.Make(pkg.ID{0xff, 0xf}), Err: stub.UniqueError(0xbad0f)},
+{A: makeIdent(0xff, 0xf), Err: stub.UniqueError(0xbad0f)},
-{Ident: unique.Make(pkg.ID{0xff, 1}), Err: stub.UniqueError(0xbad01)},
+{A: makeIdent(0xff, 1), Err: stub.UniqueError(0xbad01)},
}, `errors curing dependencies:
_wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765184 injected by the test suite
_wEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765185 injected by the test suite
_wkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765193 injected by the test suite
_w8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765199 injected by the test suite`, []error{
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+&pkg.CureError{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 1}), Err: stub.UniqueError(0xbad01)},
+&pkg.CureError{A: makeIdent(0xff, 1), Err: stub.UniqueError(0xbad01)},
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 9}), Err: stub.UniqueError(0xbad09)},
+&pkg.CureError{A: makeIdent(0xff, 9), Err: stub.UniqueError(0xbad09)},
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 0xf}), Err: stub.UniqueError(0xbad0f)},
+&pkg.CureError{A: makeIdent(0xff, 0xf), Err: stub.UniqueError(0xbad0f)},
}},
{"dedup", pkg.DependencyCureError{
-{Ident: unique.Make(pkg.ID{0xff, 9}), Err: stub.UniqueError(0xbad09)},
+{A: makeIdent(0xff, 9), Err: stub.UniqueError(0xbad09)},
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
-{Ident: unique.Make(pkg.ID{0xff, 0xfd}), Err: &pkg.DependencyCureError{
+{A: makeIdent(0xff, 0xfd), Err: &pkg.DependencyCureError{
-{Ident: unique.Make(pkg.ID{0xff, 9}), Err: stub.UniqueError(0xbad09)},
+{A: makeIdent(0xff, 9), Err: stub.UniqueError(0xbad09)},
-{Ident: unique.Make(pkg.ID{0xff, 0xc}), Err: &pkg.DependencyCureError{
+{A: makeIdent(0xff, 0xc), Err: &pkg.DependencyCureError{
-{Ident: unique.Make(pkg.ID{0xff, 0xf}), Err: stub.UniqueError(0xbad0f)},
+{A: makeIdent(0xff, 0xf), Err: stub.UniqueError(0xbad0f)},
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
}},
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
}},
-{Ident: unique.Make(pkg.ID{0xff, 0xff}), Err: &pkg.DependencyCureError{
+{A: makeIdent(0xff, 0xff), Err: &pkg.DependencyCureError{
-{Ident: unique.Make(pkg.ID{0xff, 9}), Err: stub.UniqueError(0xbad09)},
+{A: makeIdent(0xff, 9), Err: stub.UniqueError(0xbad09)},
-{Ident: unique.Make(pkg.ID{0xff, 0xc}), Err: &pkg.DependencyCureError{
+{A: makeIdent(0xff, 0xc), Err: &pkg.DependencyCureError{
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
}},
-{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
}},
-{Ident: unique.Make(pkg.ID{0xff, 0xf}), Err: stub.UniqueError(0xbad0f)},
+{A: makeIdent(0xff, 0xf), Err: stub.UniqueError(0xbad0f)},
-{Ident: unique.Make(pkg.ID{0xff, 1}), Err: stub.UniqueError(0xbad01)},
+{A: makeIdent(0xff, 1), Err: stub.UniqueError(0xbad01)},
}, `errors curing dependencies:
_wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765184 injected by the test suite
_wEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765185 injected by the test suite
_wkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765193 injected by the test suite
_w8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: unique error 765199 injected by the test suite`, []error{
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 0}), Err: stub.UniqueError(0xbad00)},
+&pkg.CureError{A: makeIdent(0xff, 0), Err: stub.UniqueError(0xbad00)},
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 1}), Err: stub.UniqueError(0xbad01)},
+&pkg.CureError{A: makeIdent(0xff, 1), Err: stub.UniqueError(0xbad01)},
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 9}), Err: stub.UniqueError(0xbad09)},
+&pkg.CureError{A: makeIdent(0xff, 9), Err: stub.UniqueError(0xbad09)},
-&pkg.CureError{Ident: unique.Make(pkg.ID{0xff, 0xf}), Err: stub.UniqueError(0xbad0f)},
+&pkg.CureError{A: makeIdent(0xff, 0xf), Err: stub.UniqueError(0xbad0f)},
}},
}
for _, tc := range testCases {
@@ -1501,6 +1676,25 @@ func (a earlyFailureF) Cure(*pkg.FContext) error {
return stub.UniqueError(0xcafe)
}
func BenchmarkEarlyDCE(b *testing.B) {
msg := message.New(log.New(os.Stderr, "dce: ", 0))
msg.SwapVerbose(testing.Verbose())
c, err := pkg.Open(b.Context(), msg, 0, 0, 0, check.MustAbs(b.TempDir()))
if err != nil {
b.Fatal(err)
}
_, _, err = c.Cure(earlyFailureF(8))
if !errors.Is(err, stub.UniqueError(0xcafe)) {
b.Fatalf("Cure: error = %v", err)
}
c.Close()
dce := err.(*pkg.DependencyCureError)
for b.Loop() {
dce.Unwrap()
}
}
func TestDependencyCureErrorEarly(t *testing.T) {
t.Parallel()
@@ -1514,6 +1708,7 @@ func TestDependencyCureErrorEarly(t *testing.T) {
".": {Mode: fs.ModeDir | 0700}, ".": {Mode: fs.ModeDir | 0700},
"checksum": {Mode: fs.ModeDir | 0700}, "checksum": {Mode: fs.ModeDir | 0700},
"identifier": {Mode: fs.ModeDir | 0700}, "identifier": {Mode: fs.ModeDir | 0700},
"substitute": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
}) })

View File

@@ -83,6 +83,8 @@ func TestTar(t *testing.T) {
"identifier/W5S65DEhawz_WKaok5NjUKLmnD9dNl5RPauNJjcOVcB3VM4eGhSaLGmXbL8vZpiw": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantEncode)}, "identifier/W5S65DEhawz_WKaok5NjUKLmnD9dNl5RPauNJjcOVcB3VM4eGhSaLGmXbL8vZpiw": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantEncode)},
"identifier/rg7F1D5hwv6o4xctjD5zDq4i5MD0mArTsUIWfhUbik8xC6Bsyt3mjXXOm3goojTz": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantEncode)}, "identifier/rg7F1D5hwv6o4xctjD5zDq4i5MD0mArTsUIWfhUbik8xC6Bsyt3mjXXOm3goojTz": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantEncode)},
"substitute": {Mode: fs.ModeDir | 0700},
"temp": {Mode: fs.ModeDir | 0700}, "temp": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},
@@ -105,6 +107,8 @@ func TestTar(t *testing.T) {
"identifier/W5S65DEhawz_WKaok5NjUKLmnD9dNl5RPauNJjcOVcB3VM4eGhSaLGmXbL8vZpiw": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantExpandEncode)}, "identifier/W5S65DEhawz_WKaok5NjUKLmnD9dNl5RPauNJjcOVcB3VM4eGhSaLGmXbL8vZpiw": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantExpandEncode)},
"identifier/_v1blm2h-_KA-dVaawdpLas6MjHc6rbhhFS8JWwx8iJxZGUu8EBbRrhr5AaZ9PJL": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantExpandEncode)}, "identifier/_v1blm2h-_KA-dVaawdpLas6MjHc6rbhhFS8JWwx8iJxZGUu8EBbRrhr5AaZ9PJL": {Mode: fs.ModeSymlink | 0777, Data: []byte("../checksum/" + wantExpandEncode)},
"substitute": {Mode: fs.ModeDir | 0700},
"temp": {Mode: fs.ModeDir | 0700}, "temp": {Mode: fs.ModeDir | 0700},
"work": {Mode: fs.ModeDir | 0700}, "work": {Mode: fs.ModeDir | 0700},
}}, }},

View File

@@ -56,6 +56,8 @@ const (
Fakeroot
Findutils
Flex
FontUtil
Freetype
Fuse
GMP
GLib
@@ -77,7 +79,10 @@ const (
LIT
LibX11
LibXau
LibXdmcp
LibXext
LibXfixes
LibXfont2
LibXrandr
LibXrender
LibXxf86vm
@@ -87,9 +92,11 @@ const (
Libconfig
LibdisplayInfo
Libdrm
Libepoxy
Libev
Libexpat
Libffi
Libfontenc
Libgd
Libglvnd
Libiconv
@@ -101,17 +108,28 @@ const (
Libpsl
Libseccomp
Libtasn1
Libtirpc
Libtool
Libucontext
Libunistring
-Libxshmfence
+Libva
LibxcbRenderUtil
LibxcbUtil
LibxcbUtilImage
LibxcbUtilKeysyms
LibxcbUtilWM
Libxcvt
Libxkbfile
Libxml2
Libxshmfence
Libxslt
Libxtrans
LMSensors
M4
MPC
MPFR
Make
Mesa
Meson
Mksh
MuslFts
@@ -133,10 +151,12 @@ const (
PerlPodParser
PerlSGMLS
PerlTermReadKey
PerlTestCmd
PerlTextCharWidth
PerlTextWrapI18N
PerlUnicodeLineBreak
PerlYAMLTiny
Pixman
PkgConfig
Procps
Python
@@ -174,14 +194,17 @@ const (
toyboxEarly
Unzip
UtilLinux
VIM
Wayland
WaylandProtocols
XCB
XCBProto
XCBUtilKeysyms
XDGDBusProxy
XZ
Xkbcomp
XkeyboardConfig
XorgProto
Xserver
Zlib
Zstd

View File

@@ -104,7 +104,7 @@ func newBusyboxBin() pkg.Artifact {
}
return pkg.NewExec(
-"busybox-bin-"+version, arch, nil, pkg.ExecTimeoutMax, false,
+"busybox-bin-"+version, arch, nil, pkg.ExecTimeoutMax, false, false,
fhs.AbsRoot, []string{
"PATH=/system/bin",
},

View File

@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
func (t Toolchain) newCurl() (pkg.Artifact, string) {
const (
-version = "8.19.0"
+version = "8.20.0"
-checksum = "YHuVLVVp8q_Y7-JWpID5ReNjq2Zk6t7ArHB6ngQXilp_R5l3cubdxu3UKo-xDByv"
+checksum = "xyHXwrngIRGMasuzhn-I5MSCOhktwINbsWt1f_LuR-5jRVvyx_g6U1EQfDLEbr9r"
)
return t.NewPackage("curl", version, newTar(
"https://curl.se/download/curl-"+version+".tar.bz2",

internal/rosa/freetype.go Normal file
View File

@@ -0,0 +1,27 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newFreetype() (pkg.Artifact, string) {
const (
version = "2.14.3"
checksum = "-WfLv8fVJNyCHpP_lriiDzOcVbBL9ajdQ3tl8AzIIUa9-8sVpU9irxOmSMgRHWYz"
)
return t.NewPackage("freetype", version, newTar(
"https://download.savannah.gnu.org/releases/freetype/"+
"freetype-"+version+".tar.gz",
checksum,
pkg.TarGzip,
), nil, (*MakeHelper)(nil)), version
}
func init() {
artifactsM[Freetype] = Metadata{
f: Toolchain.newFreetype,
Name: "freetype",
Description: "a freely available software library to render fonts",
Website: "http://www.freetype.org/",
ID: 854,
}
}

View File

@@ -58,6 +58,11 @@ disable_test t2200-add-update
disable_test t0027-auto-crlf
disable_test t7513-interpret-trailers
disable_test t7703-repack-geometric
disable_test t7002-mv-sparse-checkout
disable_test t1451-fsck-buffer
disable_test t4104-apply-boundary
disable_test t4200-rerere
disable_test t5515-fetch-merge-logic
`,
Check: []string{
"-C t",
@@ -108,7 +113,7 @@ func (t Toolchain) NewViaGit(
return t.New(strings.TrimSuffix(
path.Base(url),
".git",
-)+"-src-"+path.Base(rev), 0, t.AppendPresets(nil,
+)+"-src-"+path.Base(rev), THostNet, t.AppendPresets(nil,
NSSCACert,
Git,
), &checksum, nil, `

View File

@@ -91,8 +91,8 @@ func init() {
func (t Toolchain) newGlslang() (pkg.Artifact, string) {
const (
-version = "16.2.0"
+version = "16.3.0"
-checksum = "6_UuF9reLRDaVkgO-9IfB3kMwme3lQZM8LL8YsJwPdUFkrjzxJtf2A9X3w9nFxj2"
+checksum = "xyqDf8k3-D0_BXHGi0uLgMglnJ05Rf3j73QgbDs3sGtKNdBIQhY8JfqX1NcNoJQN"
)
return t.NewPackage("glslang", version, newFromGitHub(
"KhronosGroup/glslang",
@@ -189,9 +189,18 @@ func (t Toolchain) newSPIRVLLVMTranslator() (pkg.Artifact, string) {
), &PackageAttr{
Patches: []KV{
{"remove-early-prefix", `diff --git a/CMakeLists.txt b/CMakeLists.txt
-index c000a77e..86f79b03 100644
+index c000a77e..f18f3fde 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -164,7 +164,7 @@ install(
${LLVM_SPIRV_INCLUDE_DIRS}/LLVMSPIRVOpts.h
${LLVM_SPIRV_INCLUDE_DIRS}/LLVMSPIRVExtensions.inc
DESTINATION
- ${CMAKE_INSTALL_PREFIX}/include/LLVMSPIRVLib
+ include/LLVMSPIRVLib
)
configure_file(LLVMSPIRVLib.pc.in ${CMAKE_BINARY_DIR}/LLVMSPIRVLib.pc @ONLY)
@@ -172,5 +172,5 @@ install(
FILES
${CMAKE_BINARY_DIR}/LLVMSPIRVLib.pc
@@ -199,7 +208,7 @@ index c000a77e..86f79b03 100644
- ${CMAKE_INSTALL_PREFIX}/lib${LLVM_LIBDIR_SUFFIX}/pkgconfig
+ lib${LLVM_LIBDIR_SUFFIX}/pkgconfig
)
-`},
+;`},
},
// litArgs emits shell syntax

View File

@@ -1142,8 +1142,8 @@ func init() {
func (t Toolchain) newGCC() (pkg.Artifact, string) {
const (
-version = "15.2.0"
+version = "16.1.0"
-checksum = "TXJ5WrbXlGLzy1swghQTr4qxgDCyIZFgJry51XEPTBZ8QYbVmFeB4lZbSMtPJ-a1"
+checksum = "4ASoWbxaA2FW7PAB0zzHDPC5XnNhyaAyjtDPpGzceSLeYnEIXsNYZR3PA_Zu5P0K"
)
var configureExtra []KV

View File

@@ -7,8 +7,8 @@ import (
func (t Toolchain) newGLib() (pkg.Artifact, string) {
const (
-version = "2.88.0"
+version = "2.88.1"
-checksum = "T79Cg4z6j-sDZ2yIwvbY4ccRv2-fbwbqgcw59F5NQ6qJT6z4v261vbYp3dHO6Ma3"
+checksum = "Rkszn6W4RHjyspyqfXdVAVawdwDJCuS0Zu0f7qot7tbJhnw2fUDoUUJB40m-1MCX"
)
return t.NewPackage("glib", version, t.newTagRemote(
"https://gitlab.gnome.org/GNOME/glib.git",

View File

@@ -96,9 +96,13 @@ mkdir -p /work/system/bin/
}
artifactsM[HakureiDist] = Metadata{
f: func(t Toolchain) (pkg.Artifact, string) {
name := "all"
if presetOpts&OptSkipCheck != 0 {
name = "make"
}
return t.newHakurei("-dist", `
export HAKUREI_VERSION
-DESTDIR=/work /usr/src/hakurei/all.sh
+DESTDIR=/work /usr/src/hakurei/`+name+`.sh
`, true), hakureiVersion
},

View File

@@ -4,13 +4,13 @@ package rosa
import "hakurei.app/internal/pkg" import "hakurei.app/internal/pkg"
const hakureiVersion = "0.4.1" const hakureiVersion = "0.4.2"
// hakureiSource is the source code of a hakurei release. // hakureiSource is the source code of a hakurei release.
var hakureiSource = newTar( var hakureiSource = newTar(
"https://git.gensokyo.uk/rosa/hakurei/archive/"+ "https://git.gensokyo.uk/rosa/hakurei/archive/"+
"v"+hakureiVersion+".tar.gz", "v"+hakureiVersion+".tar.gz",
"8bHvZcjUQOXUPbKL-qq99pHFTPnn-h7j1fkJudbGs8waLm3OmkI6eHfQev5bug2y", "jadgaOrxv5ABGvzQ_Rk0aPGz7U8K-427TbMhQNQ32scSizEnlR44Pu7NoWYWVZWq",
pkg.TarGzip, pkg.TarGzip,
) )

View File

@@ -2,12 +2,12 @@ package rosa
import "hakurei.app/internal/pkg" import "hakurei.app/internal/pkg"
const kernelVersion = "6.12.84" const kernelVersion = "6.12.87"
var kernelSource = newTar( var kernelSource = newTar(
"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/"+ "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/"+
"snapshot/linux-"+kernelVersion+".tar.gz", "snapshot/linux-"+kernelVersion+".tar.gz",
"GJLUEu68r3DpLYoTcMl4wA_ThMBs_Zwc0gZsp82ii_3AOfcVxpI639IKfq2jAAY2", "QTl5teIy0K5KsOLYGHQ3FbnPCZNRH2bySXVzghiOoHDdM3zAcSPUkmdly85lMzHx",
pkg.TarGzip, pkg.TarGzip,
) )
@@ -29,8 +29,22 @@ func init() {
}
func (t Toolchain) newKernelHeaders() (pkg.Artifact, string) {
const checksum = "lCmBNcMeUmXifg0vecKOPy3GAaFcJSmOPnf3wit9xYTDSTsFADPt1xxUFfmTn1fD"
return t.NewPackage("kernel-headers", kernelVersion, kernelSource, &PackageAttr{
Flag: TEarly,
KnownChecksum: new(mustDecode(checksum)),
Paths: []pkg.ExecPath{
// updated manually for API changes
pkg.Path(AbsUsrSrc.Append("version.h"), false, pkg.NewFile(
"version.h", []byte(`#define LINUX_VERSION_CODE 396372
#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + ((c) > 255 ? 255 : (c)))
#define LINUX_VERSION_MAJOR 6
#define LINUX_VERSION_PATCHLEVEL 12
#define LINUX_VERSION_SUBLEVEL 84
`),
)),
},
}, &MakeHelper{
SkipConfigure: true,
@@ -43,7 +57,11 @@ func (t Toolchain) newKernelHeaders() (pkg.Artifact, string) {
"INSTALL_HDR_PATH=/work/system", "INSTALL_HDR_PATH=/work/system",
"headers_install", "headers_install",
}, },
Install: "# headers installed during make", Install: `
cat \
/usr/src/version.h > \
/work/system/include/linux/version.h
`,
}, },
Rsync, Rsync,
), kernelVersion ), kernelVersion

View File

@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
func (t Toolchain) newLibmd() (pkg.Artifact, string) {
const (
-version = "1.1.0"
+version = "1.2.0"
-checksum = "9apYqPPZm0j5HQT8sCsVIhnVIqRD7XgN7kPIaTwTqnTuUq5waUAMq4M7ev8CODJ1"
+checksum = "1rJ6joAO0wwMZvSfnRNkc1MOhywyAq7SM8VmF92NvDtv7Qdl1LRbjm5fg_DFFtGj"
)
return t.NewPackage("libmd", version, t.newTagRemote(
"https://git.hadrons.org/git/libmd.git",

internal/rosa/libepoxy.go Normal file
View File

@@ -0,0 +1,33 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newLibepoxy() (pkg.Artifact, string) {
const (
version = "1.5.10"
checksum = "OHI8wshrlGw6BMGrmSyejJtwzM2gPhyFJrTsKxULyKMmYrfgcOe7Iw2ibVoUND_Q"
)
return t.NewPackage("libepoxy", version, newFromGitHub(
"anholt/libepoxy",
version,
checksum,
), nil, &MesonHelper{
Setup: []KV{
{"Dglx", "no"},
{"Degl", "no"},
},
},
LibX11,
), version
}
func init() {
artifactsM[Libepoxy] = Metadata{
f: Toolchain.newLibepoxy,
Name: "libepoxy",
Description: "a library for handling OpenGL function pointer management",
Website: "https://github.com/anholt/libepoxy",
ID: 6090,
}
}

View File

@@ -8,8 +8,8 @@ import (
func (t Toolchain) newLibexpat() (pkg.Artifact, string) {
const (
-version = "2.8.0"
+version = "2.8.1"
-checksum = "pnwZ_JSif-OfoWIwk2JYXWHagOWMA3Sh-Ea0p-4Rz9U9mDEeAebhyvnfD7OYOMCk"
+checksum = "iMEtbOJhQfGof2GxSlxffQSI1va_NDDQ9VIuqcPbNZ0291Dr8wttD5QecYyjIQap"
)
return t.NewPackage("libexpat", version, newFromGitHubRelease(
"libexpat/libexpat",

internal/rosa/libtirpc.go Normal file
View File

@@ -0,0 +1,44 @@
package rosa
import (
"strings"
"hakurei.app/internal/pkg"
)
func (t Toolchain) newLibtirpc() (pkg.Artifact, string) {
const (
version = "1.3.7"
checksum = "nzFfu7LNvnSNiNAryD1vtnNWnU-Xqee8KqfXUKoBf5yjb5-dkeRkYuRijdCoYLof"
)
return t.NewPackage("libtirpc", version, t.newTagRemote(
"git://linux-nfs.org/~steved/libtirpc",
"libtirpc-"+
strings.Join(strings.SplitN(version, ".", 3), "-"),
checksum,
), nil, &MakeHelper{
Generate: "sh -e ./bootstrap",
Configure: []KV{
{"CFLAGS", `"$(pkg-config --cflags libbsd-overlay) ${CFLAGS:-}"`},
{"disable-gssapi"},
},
},
Automake,
Libtool,
PkgConfig,
Libbsd,
KernelHeaders,
), version
}
func init() {
artifactsM[Libtirpc] = Metadata{
f: Toolchain.newLibtirpc,
Name: "libtirpc",
Description: "a port of Suns Transport-Independent RPC library to Linux",
Website: "https://sourceforge.net/projects/libtirpc/",
ID: 1740,
}
}

View File

@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
func (t Toolchain) newLibucontext() (pkg.Artifact, string) {
const (
-version = "1.5"
+version = "1.5.1"
-checksum = "Ggk7FMmDNBdCx1Z9PcNWWW6LSpjGYssn2vU0GK5BLXJYw7ZxZbA2m_eSgT9TFnIG"
+checksum = "mUgeyJknjMxT-5fORzz-rqhZfP3Y7EZGBhOwvhuX7MsF4Pk9wkuwtrLf5IML-jWu"
)
return t.NewPackage("libucontext", version, newFromGitHub(
"kaniini/libucontext",

View File

@@ -274,7 +274,9 @@ func (t Toolchain) newLLVM() (pkg.Artifact, string) {
}
version := t.Version(llvmSource)
-return t.NewPackage("llvm", version, t.Load(llvmSource), nil, &CMakeHelper{
+return t.NewPackage("llvm", version, t.Load(llvmSource), &PackageAttr{
Flag: TExclusive,
}, &CMakeHelper{
Append: []string{"llvm"},
Cache: cache,
@@ -318,8 +320,8 @@ ninja ` + jobsFlagE + ` check-all
}
func init() {
const (
-version = "22.1.4"
+version = "22.1.5"
-checksum = "Bk3t-tV5sD5T0bqefFMcLeFuAwXnhFipywZmqst5hAZs97QQWGKB_5XyAFjj5tDB"
+checksum = "32gOaLPHcUlo3hkdk5RbFumTE01XKeCAYZcpvn8IDHF95egXVfDFSl6eZL3ChMen"
)
artifactsM[llvmSource] = Metadata{

View File

@@ -0,0 +1,59 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newLMSensors() (pkg.Artifact, string) {
const (
version = "3-6-2"
checksum = "7JYNutrihe-FP6r3ftf96uFZJJWPfxnBHL0ALSMA-vovaXVRr-sAjlLitw7WWpCI"
)
return t.NewPackage("lm_sensors", version, newFromGitHub(
"lm-sensors/lm-sensors",
"V"+version,
checksum,
), &PackageAttr{
Writable: true,
Chmod: true,
EnterSource: true,
ScriptEarly: `
ln -s \
../../system/bin/perl \
/usr/bin/
`,
}, &MakeHelper{
InPlace: true,
SkipConfigure: true,
Make: []string{
"CC=cc",
"ETCDIR=/system/etc",
"PREFIX=/system",
},
Check: []string{
"CC=cc",
"check",
},
Install: "make DESTDIR=/work PREFIX=/system install",
},
Perl,
PerlTestCmd,
M4,
Bison,
Flex,
), version
}
func init() {
artifactsM[LMSensors] = Metadata{
f: Toolchain.newLMSensors,
Name: "lm_sensors",
Description: "user-space support for hardware monitoring drivers",
Website: "https://hwmon.wiki.kernel.org/lm_sensors",
ID: 1831,
}
}

View File

@@ -1,6 +1,10 @@
package rosa
-import "hakurei.app/internal/pkg"
+import (
"strings"
"hakurei.app/internal/pkg"
)
func (t Toolchain) newLibglvnd() (pkg.Artifact, string) {
const (
@@ -12,8 +16,22 @@ func (t Toolchain) newLibglvnd() (pkg.Artifact, string) {
"glvnd/libglvnd", "glvnd/libglvnd",
"v"+version, "v"+version,
checksum, checksum,
), nil, (*MesonHelper)(nil), ), nil, &MesonHelper{
Setup: []KV{
{"Dx11", "enabled"},
{"Dglx", "enabled"},
},
ScriptCompiled: `
export DISPLAY=':0'
Xvfb &
XVFB_PID="$!"
trap 'kill $XVFB_PID && wait $XVFB_PID' EXIT
`,
},
Binutils, // symbols check fail with llvm nm
Xserver, // test suite wants X server
LibXext,
), version
}
func init() {
@@ -24,6 +42,10 @@ func init() {
Description: "The GL Vendor-Neutral Dispatch library", Description: "The GL Vendor-Neutral Dispatch library",
Website: "https://gitlab.freedesktop.org/glvnd/libglvnd", Website: "https://gitlab.freedesktop.org/glvnd/libglvnd",
Dependencies: P{
LibXext,
},
ID: 12098, ID: 12098,
} }
} }
@@ -64,3 +86,180 @@ func init() {
ID: 1596,
}
}
func (t Toolchain) newLibva() (pkg.Artifact, string) {
const (
version = "2.23.0"
checksum = "UmF5tPyWIG_w5kiR3KFpoYbF7UUcaak5tyc-RhOheNTwQlLkPlifreFYCM9FQxbq"
)
return t.NewPackage("libva", version, newFromGitHub(
"intel/libva",
version,
checksum,
), nil, &MesonHelper{
Setup: []KV{
{"Dwith_x11", "yes"},
{"Dwith_glx", "yes"},
{"Dwith_wayland", "yes"},
},
},
Libdrm,
LibXfixes,
Libglvnd,
Wayland,
KernelHeaders,
), version
}
func init() {
artifactsM[Libva] = Metadata{
f: Toolchain.newLibva,
Name: "libva",
Description: "an implementation for VA-API (Video Acceleration API)",
Website: "https://01.org/vaapi",
Dependencies: P{
Libdrm,
LibXfixes,
Libglvnd,
Wayland,
},
ID: 1752,
}
}
func (t Toolchain) newMesa() (pkg.Artifact, string) {
const (
version = "26.1.0"
checksum = "zU0fjqevySBaoi_5SLW3e2UffmGeBdxOuHmAHTH68n2hV-sjYoqg30koLqFXuk5y"
)
return t.NewPackage("mesa", version, newFromGitLab(
"gitlab.freedesktop.org",
"mesa/mesa",
"mesa-"+version,
checksum,
), nil, &MesonHelper{
Setup: []KV{
{"Dplatforms", "x11,wayland"},
{"Dvideo-codecs", "all"},
{"Dglvnd", "enabled"},
{"Dgbm", "enabled"},
{"Dgallium-drivers", strings.Join([]string{
"asahi", // Apple AGX
"crocus", // Intel legacy
"etnaviv", // Vivante GPU designs (mostly NXP/Marvell SoCs)
"freedreno", // Qualcomm Adreno (all Qualcomm SoCs)
"i915", // Intel extra legacy
"iris", // new Intel (Broadwell+)
"lima", // ARM Mali 4xx
"llvmpipe", // software renderer
"nouveau", // Nvidia
"panfrost", // ARM Mali Midgard and up (T/G series)
"r300", // very old AMD
"r600", // less old AMD
"radeonsi", // new AMD (GCN+)
"softpipe", // older software renderer
"svga", // VMWare virtualized GPU
"tegra", // Nvidia Tegra SoCs
"v3d", // Broadcom VC5 (Raspberry Pi 4)
"vc4", // Broadcom VC4 (Raspberry Pi 0-3)
"virgl", // QEMU virtualized GPU (aka VirGL)
"zink", // generic OpenGL over Vulkan, experimental
// d3d12: WSL emulated GPU (aka Dozen)
// ethosu: accelerator
// rocket: accelerator
}, ",")},
{"Dvulkan-drivers", strings.Join([]string{
"amd", // AMD (aka RADV)
"broadcom", // Broadcom VC5 (Raspberry Pi 4, aka V3D)
"freedreno", // Qualcomm Adreno (all Qualcomm SoCs)
"intel", // new Intel (aka ANV)
"intel_hasvk", // Intel Haswell/Broadwell, "legacy" Vulkan driver (https://www.phoronix.com/news/Intel-HasVK-Drop-Dead-Code)
"panfrost", // ARM Mali Midgard and up (T/G series)
"swrast", // software renderer (aka Lavapipe)
"virtio", // QEMU virtualized GPU (aka VirGL)
"imagination", // PowerVR Rogue
"asahi", // Apple AGX
"gfxstream", // Android virtualized GPU
// nouveau: Nouveau (aka NVK), requires rust
// microsoft-experimental: WSL virtualized GPU (aka DZN/Dozen)
// kosmickrisp: macOS-specific
}, ",")},
{"Dvulkan-layers", strings.Join([]string{
"device-select",
"intel-nullhw",
"overlay",
"screenshot",
"anti-lag",
"vram-report-limit",
}, ",")},
{"Dfreedreno-kmds", "msm,virtio"},
{"Damdgpu-virtio", "true"},
},
},
M4,
PythonPackaging,
PythonMako,
PythonPyYAML,
PythonPycparser,
Glslang,
SPIRVLLVMTranslator,
Zlib,
Zstd,
Gzip,
Ncurses,
Libglvnd,
Libexpat,
Libva,
Libdrm,
Elfutils,
Bison,
Flex,
LMSensors,
Libconfig,
LibdisplayInfo,
Wayland,
WaylandProtocols,
Libxshmfence,
LibXxf86vm,
LibXrandr,
LibxcbUtilKeysyms,
Libpng,
Libarchive,
KernelHeaders,
), version
}
func init() {
artifactsM[Mesa] = Metadata{
f: Toolchain.newMesa,
Name: "mesa",
Description: "open source implementations of OpenGL, OpenGL ES, Vulkan, OpenCL, and more",
Website: "https://mesa3d.org",
Dependencies: P{
Libdrm,
Elfutils,
LMSensors,
LibdisplayInfo,
Wayland,
Libxshmfence,
LibXxf86vm,
LibXrandr,
LibxcbUtilKeysyms,
Libpng,
},
ID: 1970,
latest: (*Versions).getStable,
}
}

View File

@@ -382,3 +382,27 @@ func init() {
ID: 3549,
}
}
func (t Toolchain) newPerlTestCmd() (pkg.Artifact, string) {
const (
version = "1.09"
checksum = "gpGUwyC9IozDiYSgW_kXARNfXsTPFa6cTowJmmCBbPqcs2-pONZca_SB06FGy-7H"
)
return t.newViaPerlMakeMaker("Test::Cmd", version, newFromCPAN(
"NEILB",
"Test-Cmd",
version,
checksum,
), nil), version
}
func init() {
artifactsM[PerlTestCmd] = Metadata{
f: Toolchain.newPerlTestCmd,
Name: "perl-Test::Cmd",
Description: "portable testing of commands and scripts",
Website: "https://metacpan.org/release/Test-Cmd",
ID: 6014,
}
}

internal/rosa/pixman.go Normal file
View File

@@ -0,0 +1,31 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newPixman() (pkg.Artifact, string) {
const (
version = "0.46.4"
checksum = "iECDxLG9SxUrvGHqeDoaBa-b3uqdT5DC4zudjtrwb8Wodq82pyacmFNEAo4SDsiE"
)
return t.NewPackage("pixman", version, newFromGitLab(
"gitlab.freedesktop.org",
"pixman/pixman",
"pixman-"+version,
checksum,
), nil, &MesonHelper{
Setup: []KV{
{"Dtests", "enabled"},
},
}), version
}
func init() {
artifactsM[Pixman] = Metadata{
f: Toolchain.newPixman,
Name: "pixman",
Description: "a low-level software library for pixel manipulation",
Website: "https://pixman.org/",
ID: 3648,
}
}

View File

@@ -9,8 +9,8 @@ import (
 func (t Toolchain) newPython() (pkg.Artifact, string) {
 	const (
-		version = "3.14.4"
-		checksum = "X0VRAAGOlCVldh4J9tRAE-YrJtDvqfQTJaqxKPXNX6YTPlwpR9GwA5WRIZDO-63s"
+		version = "3.14.5"
+		checksum = "zYIpDlk2ftZ-UVGCQS1rthle2OHoyXV653ztWiopKV1NhmIJf1K2hHbkwM4DozQ9"
 	)
 	return t.NewPackage("python", version, newTar(
 		"https://www.python.org/ftp/python/"+version+
@@ -395,8 +395,8 @@ func init() {
 func init() {
 	const (
-		version = "2026.4.28.13"
-		checksum = "Z3MbmMXtmWHCM3-EvJehb9MzDqX7Ce_Xg86D5g5nxFRWMKqwHwnQ8R-AlKf-32HU"
+		version = "2026.5.7.17"
+		checksum = "1Fcps0gK9P4ofwGL8MISN9k1Q40-quxX7NDpIna50TmziBNrZy-0Vz0I9yIeHCoP"
 	)
 	artifactsM[PythonTroveClassifiers] = newPythonPackage(
 		"trove-classifiers", 88298,


@@ -220,6 +220,8 @@ const (
 	TEarly
 	// TNoToolchain excludes the LLVM toolchain.
 	TNoToolchain
+	// THostNet arranges for a [pkg.KindExecNet] to be created.
+	THostNet
 )
 var (
@@ -325,7 +327,9 @@ mkdir -vp /work/system/bin
 	}
 	return pkg.NewExec(
-		name, arch, knownChecksum, pkg.ExecTimeoutMax, flag&TExclusive != 0,
+		name, arch, knownChecksum, pkg.ExecTimeoutMax,
+		flag&THostNet != 0,
+		flag&TExclusive != 0,
 		fhs.AbsRoot, env,
 		AbsSystem.Append("bin", "sh"),
 		[]string{"sh", absCureScript.String()},
@@ -408,6 +412,9 @@ type Helper interface {
 // PackageAttr holds build-system-agnostic attributes.
 type PackageAttr struct {
+	// Measure output if populated. Required by [THostNet].
+	KnownChecksum *pkg.Checksum
 	// Mount the source tree writable.
 	Writable bool
 	// Do not pass through [Toolchain.NewPatchedSource].
@@ -545,7 +552,7 @@ cd '/usr/src/` + name + `/'
 		name+"-"+version,
 		attr.Flag,
 		extraRes,
-		nil,
+		attr.KnownChecksum,
 		attr.Env,
 		scriptEarly+helper.script(name),
 		slices.Concat(attr.Paths, []pkg.ExecPath{


@@ -4,8 +4,8 @@ import "hakurei.app/internal/pkg"
 func (t Toolchain) newTamaGo() (pkg.Artifact, string) {
 	const (
-		version = "1.26.2"
-		checksum = "5xlhWq2NGhYCjt0y73QkydJ386lxg6-HkiO84ne6ByQSJBDat7-HSVzNA6jy7Laz"
+		version = "1.26.3"
+		checksum = "-nH3MjAzDDLTeJ2hRKYJcJwo5-Ikci4zOHfB8j1vKn7zrF9TS6zYaoLi8qohGwAE"
 	)
 	return t.New("tamago-go"+version, 0, t.AppendPresets(nil,
 		Bash,

internal/rosa/vim.go (new file)

@@ -0,0 +1,45 @@
package rosa

import "hakurei.app/internal/pkg"

func (t Toolchain) newVIM() (pkg.Artifact, string) {
const (
version = "9.2.0461"
checksum = "18Rr_5oIf_PkKuqVkN4CMZIGkZEgpN1vamlrsvPLBjn4mN98CRuoJmhzRZ7MoVYM"
)
return t.NewPackage("vim", version, newFromGitHub(
"vim/vim",
"v"+version,
checksum,
), &PackageAttr{
Chmod: true,
Writable: true,
EnterSource: true,
}, &MakeHelper{
InPlace: true,
Configure: []KV{
{"with-tlib", "ncursesw"},
},
Check: []string{"test"},
// very expensive
SkipCheck: true,
},
Ncurses,
), version
}
func init() {
artifactsM[VIM] = Metadata{
f: Toolchain.newVIM,
Name: "vim",
Description: "a greatly improved version of the good old UNIX editor Vi",
Website: "https://www.vim.org",
Dependencies: P{
Ncurses,
},
ID: 5092,
}
}


@@ -186,12 +186,12 @@ func init() {
 	}
 }
-func (t Toolchain) newXCBUtilKeysyms() (pkg.Artifact, string) {
+func (t Toolchain) newLibxcbUtilKeysyms() (pkg.Artifact, string) {
 	const (
 		version = "0.4.1"
 		checksum = "-EEje12UEjtFBuIjb6Fy4cxEghV20BXwQ1BLvhtvSuVcrFkp_X-ZHRM48wAspXZ4"
 	)
-	return t.NewPackage("xcb-util-keysyms", version, newTar(
+	return t.NewPackage("libxcb-util-keysyms", version, newTar(
 		"https://xcb.freedesktop.org/dist/xcb-util-keysyms-"+version+".tar.gz",
 		checksum,
 		pkg.TarGzip,
@@ -202,10 +202,10 @@ func (t Toolchain) newXCBUtilKeysyms() (pkg.Artifact, string) {
 	), version
 }
 func init() {
-	artifactsM[XCBUtilKeysyms] = Metadata{
-		f: Toolchain.newXCBUtilKeysyms,
-		Name: "xcb-util-keysyms",
+	artifactsM[LibxcbUtilKeysyms] = Metadata{
+		f: Toolchain.newLibxcbUtilKeysyms,
+		Name: "libxcb-util-keysyms",
 		Description: "standard X key constants and conversion to/from keycodes",
 		Website: "https://gitlab.freedesktop.org/xorg/lib/libxcb-keysyms",
@@ -217,6 +217,135 @@ func init() {
 	}
 }
func (t Toolchain) newLibxcbUtilImage() (pkg.Artifact, string) {
const (
version = "0.4.1"
checksum = "47pvXmFwaUBZIrJ4CE9xjTQIFyxeqoNAL-DshlB11GZ_jjI3G1a6KF0K7mtBQ1E7"
)
return t.NewPackage("libxcb-util-image", version, newTar(
"https://xcb.freedesktop.org/dist/xcb-util-image-"+version+".tar.gz",
checksum,
pkg.TarGzip,
), nil, (*MakeHelper)(nil),
PkgConfig,
LibxcbUtil,
), version
}
func init() {
artifactsM[LibxcbUtilImage] = Metadata{
f: Toolchain.newLibxcbUtilImage,
Name: "libxcb-util-image",
Description: "XCB port of Xlib's XImage and XShmImage functions",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxcb-image",
Dependencies: P{
LibxcbUtil,
},
ID: 5168,
}
}
func (t Toolchain) newLibxcbUtilWM() (pkg.Artifact, string) {
const (
version = "0.4.2"
checksum = "g0VZgMU9hcIgyXb3XxBR9xqsvUMBd9qt_Dbmwoj2h5y24pODr_S_D0DhRsuXUNjF"
)
return t.NewPackage("libxcb-util-wm", version, newTar(
"https://xcb.freedesktop.org/dist/xcb-util-wm-"+version+".tar.gz",
checksum,
pkg.TarGzip,
), nil, (*MakeHelper)(nil),
M4,
PkgConfig,
XCB,
), version
}
func init() {
artifactsM[LibxcbUtilWM] = Metadata{
f: Toolchain.newLibxcbUtilWM,
Name: "libxcb-util-wm",
Description: "XCB client and window-manager helpers for ICCCM & EWMH",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxcb-wm",
Dependencies: P{
XCB,
},
ID: 5170,
}
}
func (t Toolchain) newLibxcbUtil() (pkg.Artifact, string) {
const (
version = "0.4.1"
checksum = "YMXGQUQbF6PoEAGflvYnJYsLWjti6sL_ifY47wIXTNGVM3tQ8u41nkBYN4K1D5CD"
)
return t.NewPackage("libxcb-util", version, newTar(
"https://www.x.org/releases/individual/xcb/"+
"xcb-util-"+version+".tar.gz",
checksum,
pkg.TarGzip,
), nil, (*MakeHelper)(nil),
PkgConfig,
utilMacros,
XCB,
), version
}
func init() {
artifactsM[LibxcbUtil] = Metadata{
f: Toolchain.newLibxcbUtil,
Name: "libxcb-util",
Description: "XCB utility libraries",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxcb-util",
Dependencies: P{
XCB,
},
ID: 5165,
}
}
func (t Toolchain) newLibxcbRenderUtil() (pkg.Artifact, string) {
const (
version = "0.3.10"
checksum = "n08L8PyCoOd7v2vb6fSVq5Pq6JtteXVh9K2wrQMTNwGMf_Fjpi6i3HWF-TMFSVTI"
)
return t.NewPackage("libxcb-render-util", version, newTar(
"https://www.x.org/releases/individual/xcb/"+
"xcb-util-renderutil-"+version+".tar.gz",
checksum,
pkg.TarGzip,
), nil, (*MakeHelper)(nil),
PkgConfig,
utilMacros,
XCB,
), version
}
func init() {
artifactsM[LibxcbRenderUtil] = Metadata{
f: Toolchain.newLibxcbRenderUtil,
Name: "libxcb-render-util",
Description: "XCB convenience functions for the Render extension",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxcb-render-util",
Dependencies: P{
XCB,
},
ID: 5169,
}
}
 func (t Toolchain) newLibX11() (pkg.Artifact, string) {
 	const (
 		version = "1.8.13"
@@ -298,6 +427,44 @@ func init() {
 	}
 }
func (t Toolchain) newLibXfixes() (pkg.Artifact, string) {
const (
version = "6.0.2"
checksum = "_-kJfKZ7cE3NNeMr6NLSXCmsyP7MVEHVPLNfxatz2qBy3_fZJvPMQwZNOC9y6V5L"
)
return t.NewPackage("libXfixes", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/lib/libxfixes",
"libXfixes-"+version,
checksum,
), nil, &MakeHelper{
Generate: "NOCONFIGURE=1 ./autogen.sh",
},
Automake,
Libtool,
PkgConfig,
utilMacros,
XorgProto,
LibX11,
), version
}
func init() {
artifactsM[LibXfixes] = Metadata{
f: Toolchain.newLibXfixes,
Name: "libXfixes",
Description: "Xlib-based library for the XFIXES Extension",
Website: "https://www.freedesktop.org/wiki/Software/FixesExt/",
Dependencies: P{
LibX11,
},
ID: 1775,
}
}
 func (t Toolchain) newLibXrender() (pkg.Artifact, string) {
 	const (
 		version = "0.9.12"
@@ -446,6 +613,264 @@ func init() {
 	}
 }
func (t Toolchain) newFontUtil() (pkg.Artifact, string) {
const (
version = "1.4.2"
checksum = "YWiaIxkq-N2yNdbGa_RF1S0UkQq6xsgoRT73WZP2DOmyH_CJ0TAkpQjId657MQmh"
)
return t.NewPackage("font-util", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/font/util",
"font-util-"+version,
checksum,
), nil, &MakeHelper{
Generate: "NOCONFIGURE=1 ./autogen.sh",
},
Automake,
Libtool,
PkgConfig,
utilMacros,
), version
}
func init() {
artifactsM[FontUtil] = Metadata{
f: Toolchain.newFontUtil,
Name: "font-util",
Description: "X.Org font package creation/installation utilities",
Website: "https://gitlab.freedesktop.org/xorg/font/util",
ID: 15055,
}
}
func (t Toolchain) newLibfontenc() (pkg.Artifact, string) {
const (
version = "1.1.9"
checksum = "XqosXfbVwaoYzG9vVyRCl3eatwjASoJdLZsxQ37NN8S_jTyqNmbxpRSJGImJj7RS"
)
return t.NewPackage("libfontenc", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/lib/libfontenc",
"libfontenc-"+version,
checksum,
), nil, &MakeHelper{
Generate: "NOCONFIGURE=1 ./autogen.sh",
},
Automake,
Libtool,
PkgConfig,
utilMacros,
FontUtil,
XorgProto,
), version
}
func init() {
artifactsM[Libfontenc] = Metadata{
f: Toolchain.newLibfontenc,
Name: "libfontenc",
Description: "X font encoding library",
Website: "https://gitlab.freedesktop.org/xorg/lib/libfontenc",
ID: 1613,
}
}
func (t Toolchain) newLibxkbfile() (pkg.Artifact, string) {
const (
version = "1.2.0"
checksum = "WUtph1ab0AyATahlwljchBxZJcpjYrjyhCK9DW2VO0uXEXaN22GWmUaibcA83i_B"
)
return t.NewPackage("libxkbfile", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/lib/libxkbfile",
"libxkbfile-"+version,
checksum,
), nil, (*MesonHelper)(nil),
LibX11,
), version
}
func init() {
artifactsM[Libxkbfile] = Metadata{
f: Toolchain.newLibxkbfile,
Name: "libxkbfile",
Description: "XKB file handling routines",
Website: "http://www.x.org/wiki/XKB",
Dependencies: P{
LibX11,
},
ID: 1781,
}
}
func (t Toolchain) newXkbcomp() (pkg.Artifact, string) {
const (
version = "1.5.0"
checksum = "ttICW8ZPbljI-nw2kknvxFhwFoDK40iAMBeZDLAHYsHf3B6UPO_zc9TpzZYRRyZH"
)
return t.NewPackage("xkbcomp", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/app/xkbcomp",
"xkbcomp-"+version,
checksum,
), nil, &MakeHelper{
Generate: "NOCONFIGURE=1 ./autogen.sh",
},
Automake,
Libtool,
PkgConfig,
Bison,
utilMacros,
Libxkbfile,
), version
}
func init() {
artifactsM[Xkbcomp] = Metadata{
f: Toolchain.newXkbcomp,
Name: "xkbcomp",
Description: "XKB keyboard description compiler",
Website: "http://www.x.org/wiki/XKB",
Dependencies: P{
Libxkbfile,
},
ID: 15018,
}
}
func (t Toolchain) newLibXfont2() (pkg.Artifact, string) {
const (
version = "2.0.7"
checksum = "jv9BZNA02493KB8j1lfAErF5SA3ZFcAhm3_UVJ--Bp1maz-vNprl_wXpkHApBi9M"
)
return t.NewPackage("libXfont2", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/lib/libxfont",
"libXfont2-"+version,
checksum,
), nil, &MakeHelper{
Generate: "NOCONFIGURE=1 ./autogen.sh",
},
Automake,
Libtool,
PkgConfig,
utilMacros,
Freetype,
XorgProto,
Libxtrans,
Libfontenc,
), version
}
func init() {
artifactsM[LibXfont2] = Metadata{
f: Toolchain.newLibXfont2,
Name: "libXfont2",
Description: "X font handling library for server & utilities",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxfont",
Dependencies: P{
Freetype,
Libfontenc,
},
ID: 17165,
}
}
func (t Toolchain) newLibxcvt() (pkg.Artifact, string) {
const (
version = "0.1.3"
checksum = "IfIA7SxlHMWh681e1AgYmZcRAfkZd5LlzmqcMRifNY5nNVRrUx_wnoaidAv0Yu03"
)
return t.NewPackage("libxcvt", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/lib/libxcvt",
"libxcvt-"+version,
checksum,
), nil, (*MesonHelper)(nil)), version
}
func init() {
artifactsM[Libxcvt] = Metadata{
f: Toolchain.newLibxcvt,
Name: "libxcvt",
Description: "VESA CVT standard timing modeline generation library & utility",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxcvt",
ID: 235147,
}
}
func (t Toolchain) newLibXdmcp() (pkg.Artifact, string) {
const (
version = "1.1.5"
checksum = "N6AJSv9pmeBedFn8KuSIOUGvTken4rkypNWVE2KfPlliwkfIbhfXrt5YHZkBMUHp"
)
return t.NewPackage("libXdmcp", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/lib/libxdmcp",
"libXdmcp-"+version,
checksum,
), nil, &MakeHelper{
Generate: "NOCONFIGURE=1 ./autogen.sh",
},
Automake,
Libtool,
PkgConfig,
utilMacros,
XorgProto,
), version
}
func init() {
artifactsM[LibXdmcp] = Metadata{
f: Toolchain.newLibXdmcp,
Name: "libXdmcp",
Description: "X Display Manager Control Protocol library",
Website: "https://gitlab.freedesktop.org/xorg/lib/libxdmcp",
ID: 1772,
}
}
func (t Toolchain) newXkeyboardConfig() (pkg.Artifact, string) {
const (
version = "2.47"
checksum = "E03PsPIaRrxPAuKgDGSQyPiJB49wXtyyvdV0lVx3_G-pelMMlaFLkoTDHTHG_qgA"
)
return t.NewPackage("xkeyboard-config", version, newFromGitLab(
"gitlab.freedesktop.org",
"xkeyboard-config/xkeyboard-config",
"xkeyboard-config-"+version,
checksum,
), nil, (*MesonHelper)(nil),
Perl,
), version
}
func init() {
artifactsM[XkeyboardConfig] = Metadata{
f: Toolchain.newXkeyboardConfig,
Name: "xkeyboard-config",
Description: "the non-arch keyboard configuration database for X Window",
Website: "https://www.freedesktop.org/wiki/Software/XKeyboardConfig/",
ID: 5191,
}
}
 func (t Toolchain) newLibpciaccess() (pkg.Artifact, string) {
 	const (
 		version = "0.19"
@@ -477,3 +902,87 @@ func init() {
 		ID: 1703,
 	}
 }
func (t Toolchain) newXserver() (pkg.Artifact, string) {
const (
version = "21.1.22"
checksum = "prLT2wKecBu5m9w1ThgIt0GvenNpjKXoOyvTiMA1oQTlP0QHh6QiWsdvH3OmUwNo"
)
return t.NewPackage("xserver", version, newFromGitLab(
"gitlab.freedesktop.org",
"xorg/xserver",
"xorg-server-"+version,
checksum,
), nil, &MesonHelper{
Setup: []KV{
{"Dxorg", "true"},
{"Dxephyr", "true"},
{"Dxnest", "true"},
{"Dipv6", "false"},
{"Dudev", "false"},
{"Dudev_kms", "false"},
{"Dglx", "false"},
// ../../usr/src/xserver/glamor/glamor_glx.c:24:10: fatal error: 'epoxy/glx.h' file not found
{"Dglamor", "false"},
},
},
Gawk,
XorgProto,
Libxtrans,
Libxshmfence,
Pixman,
Libbsd,
Xkbcomp,
XkeyboardConfig,
LibXfont2,
DBus,
FontUtil,
Libxcvt,
LibXext,
Libmd,
LibXdmcp,
Libtirpc,
Libepoxy,
LibxcbUtil,
LibxcbUtilImage,
LibxcbUtilWM,
LibxcbUtilKeysyms,
LibxcbRenderUtil,
Libpciaccess,
KernelHeaders,
), version
}
func init() {
artifactsM[Xserver] = Metadata{
f: Toolchain.newXserver,
Name: "xserver",
Description: "X server",
Website: "https://gitlab.freedesktop.org/xorg/xserver",
Dependencies: P{
Xkbcomp,
XkeyboardConfig,
XCB,
Pixman,
Libmd,
Libbsd,
Libtirpc,
Libxcvt,
LibXdmcp,
LibXfont2,
Libpciaccess,
// Xephyr
LibxcbUtilImage,
LibxcbUtilWM,
LibxcbUtilKeysyms,
LibxcbRenderUtil,
},
ID: 5250,
}
}