73 Commits

Author SHA1 Message Date
992139c75d internal/rosa/python: extra script after install
All checks were successful
Test / Create distribution (push) Successful in 1m5s
Test / Sandbox (push) Successful in 2m49s
Test / Hakurei (push) Successful in 3m47s
Test / ShareFS (push) Successful in 3m52s
Test / Sandbox (race detector) (push) Successful in 5m20s
Test / Hakurei (race detector) (push) Successful in 6m24s
Test / Flake checks (push) Successful in 1m21s
This is generally for the test suite, due to the lack of a standard or widely agreed-upon convention.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:35:24 +09:00
57c69b533e internal/rosa/meson: migrate to helper
This also migrates to sourcing from the Microsoft GitHub release.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:16:22 +09:00
6f0c2a80f2 internal/rosa/python: migrate setuptools to helper
This is much cleaner, and should be functionally equivalent.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:04:19 +09:00
08dfefb28d internal/rosa/python: pip helper
Binary pip releases are not considered acceptable; this more generic helper is required for building from source.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-19 00:03:36 +09:00
b081629662 internal/rosa/libxml2: 2.15.2 to 2.15.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:05:49 +09:00
fba541f301 internal/rosa/nss: 3.122.1 to 3.123
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:05:23 +09:00
5f0da3d5c2 internal/rosa/gnu: mpc 1.4.0 to 1.4.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:04:33 +09:00
4d5841dd62 internal/rosa: elfutils 0.194 to 0.195
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 09:03:49 +09:00
9e752b588a internal/pkg: drop cached error on cancel
This avoids disabling the artifact when using the individual cancel method. Unfortunately this makes the method blocking.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 03:24:48 +09:00
27b1aaae38 internal/pkg: pending error alongside done channel
This significantly simplifies synchronisation of access to identErr.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 03:10:37 +09:00
9e18de1dc2 internal/pkg: flush cached errors on abort
This avoids disabling the artifact until the cache is reopened. The same has to be implemented for Cancel in a future change.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-18 02:59:44 +09:00
b80ea91a42 cmd/mbf: abort remote cures
This command arranges for all pending cures to be aborted. It does not wait for cures to complete.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 22:47:02 +09:00
30a9dfa4b8 internal/pkg: abort all pending cures
This cancels all current pending cures without closing the cache.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 22:40:35 +09:00
8d657b6fdf cmd/mbf: cancel remote cure
This exposes the new fine-grained cancel API in cmd/mbf.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 22:00:04 +09:00
ae9b9adfd2 internal/rosa: retry in SIGSEGV test
Munmap is not always immediate.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 20:45:19 +09:00
dd6a480a21 cmd/mbf: handle flags in serve
This enables easier expansion of the protocol.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 20:14:09 +09:00
3942272c30 internal/pkg: fine-grained cancellation
This enables a specific artifact to be targeted for cancellation.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 19:33:21 +09:00
9036986156 cmd/mbf: optionally ignore reply
An acknowledgement is not always required in this use case. This change also adds 64 bits of connection configuration for future expansion.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 16:46:49 +09:00
a394971dd7 cmd/mbf: do not abort cache acquisition during testing
This can sometimes fire during testing due to how short the test is.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 02:06:51 +09:00
9daba60809 cmd/mbf: daemon command
This services internal/pkg artifact IR, with Rosa OS extensions, for requests originating from another process.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 02:05:59 +09:00
bcd79a22ff cmd/mbf: do not open cache for IR encoding
This can now be allocated independently.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 01:04:39 +09:00
0ff7ab915b internal/pkg: move IR primitives out of cache
These are memory management and caching primitives. Having them as part of Cache is cumbersome and requires a temporary directory that is never used. This change isolates them from Cache to enable independent use.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-17 01:02:13 +09:00
823575acac cmd/mbf: move info command
This is cleaner with less shared state.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-16 17:43:52 +09:00
136bc0917b cmd/mbf: optionally open cache
Some commands do not require the cache. This change also makes acquisition of a locked cache cancelable.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-16 15:59:34 +09:00
d6b082dd0b internal/rosa/ninja: bootstrap with verbose output
This otherwise outputs nothing, and appears to hang until the (fully single-threaded) bootstrap completes.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:19:05 +09:00
89d6d9576b internal/rosa/make: optionally format value as is
This enables correct formatting for awkward configure scripts.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:17:58 +09:00
fafce04a5d internal/rosa/kernel: firmware 20260309 to 20260410
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:16:47 +09:00
5d760a1db9 internal/rosa/kernel: 6.12.80 to 6.12.81
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:16:30 +09:00
d197e40b2a internal/rosa/python: mako 1.3.10 to 1.3.11
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:21:54 +09:00
2008902247 internal/rosa/python: packaging 26.0 to 26.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:15:18 +09:00
30ac985fd2 internal/rosa/meson: 1.10.2 to 1.11.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:14:52 +09:00
e9fec368f8 internal/rosa/nss: 3.122 to 3.122.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:13:45 +09:00
46add42f58 internal/rosa/openssl: disable building docs
These take very long and are never used in the Rosa OS environment.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:13:18 +09:00
377b61e342 internal/rosa/openssl: do not double test job count
The test suite is racy; this reduces flakiness.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 22:12:36 +09:00
520c36db6d internal/rosa: respect preferred job count
This discontinues the use of nproc, and also overrides the detection behaviour in ninja.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 18:49:36 +09:00
3352bb975b internal/pkg: job count in container environment
This exposes preferred job count to the container initial process.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 15:49:21 +09:00
f7f48d57e9 internal/pkg: pass impure job count
This is cleaner than checking the CPU count during cure; impurity is impossible to avoid in either situation, but this way it is configurable.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-15 15:36:44 +09:00
5c2345128e internal/rosa/llvm: autodetect stage0 target
This is fine, now that stages beyond stage0 have an explicit target.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 03:10:26 +09:00
78f9676b1f internal/rosa/llvm: centralise llvm source
This avoids having to sidestep the NewPackage name formatting machinery to take the cache fast path.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 03:03:06 +09:00
5b5b676132 internal/rosa/cmake: remove variant
This has no effect outside the formatting of the name and is a remnant of the old llvm helpers.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 02:57:47 +09:00
78383fb6e8 internal/rosa/llvm: migrate libclc
This eliminates newLLVMVariant.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 02:40:13 +09:00
e97f6a393f internal/rosa/llvm: migrate runtimes and clang
This eliminates most newLLVM family of functions.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 02:07:13 +09:00
eeffefd22b internal/rosa/llvm: migrate compiler-rt helper
This also removes unused dependencies.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 01:12:56 +09:00
ac825640ab internal/rosa/llvm: migrate musl
This removes the pointless special treatment given to musl.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 00:35:42 +09:00
a7f7ce1795 internal/rosa/llvm: migrate compiler-rt
The newLLVM family of functions predate the package system. This change migrates compiler-rt without changing any resulting artifacts.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 00:19:33 +09:00
38c639e35c internal/rosa/llvm: remove project/runtime helper
More remnants from the early days; these are not reusable at all, but that was not known at the time.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-14 00:03:23 +09:00
b2cb13e94c internal/rosa/llvm: centralise patches
This enables easier reuse of the patchset.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 23:52:44 +09:00
46f98d12d6 internal/rosa/llvm: remove conditional flags in helper
The llvm helper is a remnant from very early days, and ended up not being very useful, but was never removed. This change begins its removal, without changing the resulting artifacts for now.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 23:38:11 +09:00
503c7f953c internal/rosa/x: libpciaccess artifact
Required by userspace GPU drivers.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 19:04:38 +09:00
15c9f6545d internal/rosa/perl: populate anitya identifiers
These are also tracked by Anitya.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 18:44:43 +09:00
83b0e32c55 internal/rosa: helpers for common url formats
This cleans up the call sites of NewPackage.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 18:02:57 +09:00
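Such helpers might look like the following sketch; the function names and the exact URL layouts are hypothetical illustrations of the idea, not the helpers the commit adds:

```go
package main

import "fmt"

// gnuURL formats a source URL in the conventional GNU mirror layout.
func gnuURL(name, version string) string {
	return fmt.Sprintf("https://ftp.gnu.org/gnu/%s/%s-%s.tar.xz", name, name, version)
}

// xorgURL formats a source URL in the freedesktop.org individual-release layout.
func xorgURL(dir, name, version string) string {
	return fmt.Sprintf("https://www.x.org/releases/individual/%s/%s-%s.tar.xz", dir, name, version)
}

func main() {
	fmt.Println(gnuURL("mpc", "1.4.1"))
	fmt.Println(xorgURL("lib", "libpciaccess", "0.18"))
}
```

A call site then only names the project and version, instead of repeating the full URL template for every package.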
eeaf26e7a2 internal/rosa: wrapper around git helper
This results in a much cleaner call site for the majority of use cases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 15:20:51 +09:00
b587caf2e8 internal/rosa: assume file source is xz-compressed
XZ happens to be the only widely-used format that is awful to deal with; everything else is natively supported.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 15:07:30 +09:00
f1c2ca4928 internal/rosa/mesa: libdrm artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:27:09 +09:00
0ca301219f internal/rosa/python: pyyaml artifact
Mesa unfortunately requires this horrible format.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:18:47 +09:00
e2199e1276 internal/rosa/python: mako artifact
This unfortunately pulls in a platform-specific package.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:11:38 +09:00
86eacb3208 cmd/mbf: checksum command
This computes and encodes the SHA-384 checksum of data streamed from standard input.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 03:09:21 +09:00
8541bdd858 internal/rosa: wrap per-arch values
This is cleaner syntax in some specific cases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:59:55 +09:00
46be0b0dc8 internal/rosa/nss: buildcatrust 0.4.0 to 0.5.1
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:18:21 +09:00
cbe37e87e7 internal/rosa/python: pytest 9.0.2 to 9.0.3
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:18:02 +09:00
66d741fb07 internal/rosa/python: pygments 2.19.2 to 2.20.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:13:04 +09:00
0d449011f6 internal/rosa/python: use predictable URLs
This is much cleaner and more maintainable than specifying the URL prefix manually. This change also populates Anitya project identifiers.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 02:08:22 +09:00
46428ed85d internal/rosa/python: url pip wheel helper
This enables a cleaner higher-level helper.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-13 01:59:28 +09:00
081d6b463c internal/rosa/llvm: libclc artifact
This is built independently of the llvm build system to avoid having to build llvm again.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:57:04 +09:00
11b3171180 internal/rosa/glslang: glslang artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:34:17 +09:00
adbb84c3dd internal/rosa/glslang: spirv-tools artifact
Required by glslang.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:27:49 +09:00
1084e31d95 internal/rosa/glslang: spirv-headers artifact
Required by spirv-tools.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 22:16:29 +09:00
27a1b8fe0a internal/rosa/mesa: libglvnd artifact
Required by mesa.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 21:27:30 +09:00
b2141a41d7 internal/rosa/dbus: xdg-dbus-proxy artifact
This is currently a hakurei runtime dependency, but will eventually be removed.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 19:41:49 +09:00
c0dff5bc87 internal/rosa/gnu: gcc set with-multilib-list as needed
This breaks riscv64.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 18:03:45 +09:00
04513c0510 internal/rosa/gnu: gmp explicit CC
The configure script is hard-coded to use gcc without fallback on riscv64.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-12 17:25:15 +09:00
28ebf973d6 nix: add sharefs supplementary group
This works around a vfs inode file attribute race.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-11 23:28:18 +09:00
41aeb404ec internal/rosa/hakurei: 0.3.7 to 0.4.0
Signed-off-by: Ophestra <cat@gensokyo.uk>
2026-04-11 10:53:29 +09:00
91 changed files with 2940 additions and 1492 deletions

cmd/mbf/cache.go (new file, 94 lines)

@@ -0,0 +1,94 @@
package main
import (
"context"
"os"
"path/filepath"
"testing"
"hakurei.app/check"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
// cache refers to an instance of [pkg.Cache] that might be open.
type cache struct {
ctx context.Context
msg message.Msg
// Should generally not be used directly.
c *pkg.Cache
cures, jobs int
hostAbstract, idle bool
base string
}
// open opens the underlying [pkg.Cache].
func (cache *cache) open() (err error) {
if cache.c != nil {
return os.ErrInvalid
}
if cache.base == "" {
cache.base = "cache"
}
var base *check.Absolute
if cache.base, err = filepath.Abs(cache.base); err != nil {
return
} else if base, err = check.NewAbs(cache.base); err != nil {
return
}
var flags int
if cache.idle {
flags |= pkg.CSchedIdle
}
if cache.hostAbstract {
flags |= pkg.CHostAbstract
}
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-cache.ctx.Done():
if testing.Testing() {
return
}
os.Exit(2)
case <-done:
return
}
}()
cache.msg.Verbosef("opening cache at %s", base)
cache.c, err = pkg.Open(
cache.ctx,
cache.msg,
flags,
cache.cures,
cache.jobs,
base,
)
return
}
// Close closes the underlying [pkg.Cache] if it is open.
func (cache *cache) Close() {
if cache.c != nil {
cache.c.Close()
}
}
// Do calls f on the underlying cache and returns its error value.
func (cache *cache) Do(f func(cache *pkg.Cache) error) error {
if cache.c == nil {
if err := cache.open(); err != nil {
return err
}
}
return f(cache.c)
}

cmd/mbf/cache_test.go (new file, 37 lines)

@@ -0,0 +1,37 @@
package main
import (
"log"
"os"
"testing"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
func TestCache(t *testing.T) {
t.Parallel()
cm := cache{
ctx: t.Context(),
msg: message.New(log.New(os.Stderr, "check: ", 0)),
base: t.TempDir(),
hostAbstract: true, idle: true,
}
defer cm.Close()
cm.Close()
if err := cm.open(); err != nil {
t.Fatalf("open: error = %v", err)
}
if err := cm.open(); err != os.ErrInvalid {
t.Errorf("(duplicate) open: error = %v", err)
}
if err := cm.Do(func(cache *pkg.Cache) error {
return cache.Scrub(0)
}); err != nil {
t.Errorf("Scrub: error = %v", err)
}
}

cmd/mbf/daemon.go (new file, 343 lines)

@@ -0,0 +1,343 @@
package main
import (
"context"
"encoding/binary"
"errors"
"io"
"log"
"math"
"net"
"os"
"sync"
"syscall"
"testing"
"time"
"unique"
"hakurei.app/check"
"hakurei.app/internal/pkg"
)
// daemonTimeout is the maximum amount of time cureFromIR will wait on I/O.
const daemonTimeout = 30 * time.Second
// daemonDeadline returns the deadline corresponding to daemonTimeout, or the
// zero value when running in a test.
func daemonDeadline() time.Time {
if testing.Testing() {
return time.Time{}
}
return time.Now().Add(daemonTimeout)
}
const (
// remoteNoReply notifies that the client will not receive a cure reply.
remoteNoReply = 1 << iota
)
// cureFromIR services an IR curing request.
func cureFromIR(
cache *pkg.Cache,
conn net.Conn,
flags uint64,
) (pkg.Artifact, error) {
a, decodeErr := cache.NewDecoder(conn).Decode()
if decodeErr != nil {
_, err := conn.Write([]byte("\x00" + decodeErr.Error()))
return nil, errors.Join(decodeErr, err, conn.Close())
}
pathname, _, cureErr := cache.Cure(a)
if flags&remoteNoReply != 0 {
return a, errors.Join(cureErr, conn.Close())
}
if err := conn.SetWriteDeadline(daemonDeadline()); err != nil {
return a, errors.Join(cureErr, err, conn.Close())
}
if cureErr != nil {
_, err := conn.Write([]byte("\x00" + cureErr.Error()))
return a, errors.Join(cureErr, err, conn.Close())
}
_, err := conn.Write([]byte(pathname.String()))
if testing.Testing() && errors.Is(err, io.ErrClosedPipe) {
return a, nil
}
return a, errors.Join(err, conn.Close())
}
const (
// specialCancel is a message consisting of a single identifier referring
// to a curing artifact to be cancelled.
specialCancel = iota
// specialAbort requests for all pending cures to be aborted. It has no
// message body.
specialAbort
// remoteSpecial denotes a special message with custom layout.
remoteSpecial = math.MaxUint64
)
// writeSpecialHeader writes the header of a remoteSpecial message.
func writeSpecialHeader(conn net.Conn, kind uint64) error {
var sh [16]byte
binary.LittleEndian.PutUint64(sh[:], remoteSpecial)
binary.LittleEndian.PutUint64(sh[8:], kind)
if n, err := conn.Write(sh[:]); err != nil {
return err
} else if n != len(sh) {
return io.ErrShortWrite
}
return nil
}
// cancelIdent reads an identifier from conn and cancels the corresponding cure.
func cancelIdent(
cache *pkg.Cache,
conn net.Conn,
) (*pkg.ID, bool, error) {
var ident pkg.ID
if _, err := io.ReadFull(conn, ident[:]); err != nil {
return nil, false, errors.Join(err, conn.Close())
} else if err = conn.Close(); err != nil {
return nil, false, err
}
return &ident, cache.Cancel(unique.Make(ident)), nil
}
// serve services connections from a [net.UnixListener].
func serve(
ctx context.Context,
log *log.Logger,
cm *cache,
ul *net.UnixListener,
) error {
ul.SetUnlinkOnClose(true)
if cm.c == nil {
if err := cm.open(); err != nil {
return errors.Join(err, ul.Close())
}
}
var wg sync.WaitGroup
defer wg.Wait()
wg.Go(func() {
for {
if ctx.Err() != nil {
break
}
conn, err := ul.AcceptUnix()
if err != nil {
if !errors.Is(err, os.ErrDeadlineExceeded) {
log.Println(err)
}
continue
}
wg.Go(func() {
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-ctx.Done():
_ = conn.SetDeadline(time.Now())
case <-done:
return
}
}()
if _err := conn.SetReadDeadline(daemonDeadline()); _err != nil {
log.Println(_err)
if _err = conn.Close(); _err != nil {
log.Println(_err)
}
return
}
var word [8]byte
if _, _err := io.ReadFull(conn, word[:]); _err != nil {
log.Println(_err)
if _err = conn.Close(); _err != nil {
log.Println(_err)
}
return
}
flags := binary.LittleEndian.Uint64(word[:])
if flags == remoteSpecial {
if _, _err := io.ReadFull(conn, word[:]); _err != nil {
log.Println(_err)
if _err = conn.Close(); _err != nil {
log.Println(_err)
}
return
}
switch special := binary.LittleEndian.Uint64(word[:]); special {
default:
log.Printf("invalid special %d", special)
case specialCancel:
if id, ok, _err := cancelIdent(cm.c, conn); _err != nil {
log.Println(_err)
} else if !ok {
log.Println(
"attempting to cancel invalid artifact",
pkg.Encode(*id),
)
} else {
log.Println(
"cancelled artifact",
pkg.Encode(*id),
)
}
case specialAbort:
if _err := conn.Close(); _err != nil {
log.Println(_err)
}
log.Println("aborting all pending cures")
cm.c.Abort()
}
return
}
if a, _err := cureFromIR(cm.c, conn, flags); _err != nil {
log.Println(_err)
} else {
log.Printf(
"fulfilled artifact %s",
pkg.Encode(cm.c.Ident(a).Value()),
)
}
})
}
})
<-ctx.Done()
if err := ul.SetDeadline(time.Now()); err != nil {
return errors.Join(err, ul.Close())
}
wg.Wait()
return ul.Close()
}
// dial wraps [net.DialUnix] with a context.
func dial(ctx context.Context, addr *net.UnixAddr) (
done chan<- struct{},
conn *net.UnixConn,
err error,
) {
conn, err = net.DialUnix("unix", nil, addr)
if err != nil {
return
}
d := make(chan struct{})
done = d
go func() {
select {
case <-ctx.Done():
_ = conn.SetDeadline(time.Now())
case <-d:
return
}
}()
return
}
// cureRemote cures a [pkg.Artifact] on a daemon.
func cureRemote(
ctx context.Context,
addr *net.UnixAddr,
a pkg.Artifact,
flags uint64,
) (*check.Absolute, error) {
if flags == remoteSpecial {
return nil, syscall.EINVAL
}
done, conn, err := dial(ctx, addr)
if err != nil {
return nil, err
}
defer close(done)
if n, flagErr := conn.Write(binary.LittleEndian.AppendUint64(nil, flags)); flagErr != nil {
return nil, errors.Join(flagErr, conn.Close())
} else if n != 8 {
return nil, errors.Join(io.ErrShortWrite, conn.Close())
}
if err = pkg.NewIR().EncodeAll(conn, a); err != nil {
return nil, errors.Join(err, conn.Close())
} else if err = conn.CloseWrite(); err != nil {
return nil, errors.Join(err, conn.Close())
}
if flags&remoteNoReply != 0 {
return nil, conn.Close()
}
payload, recvErr := io.ReadAll(conn)
if err = errors.Join(recvErr, conn.Close()); err != nil {
if errors.Is(err, os.ErrDeadlineExceeded) {
if cancelErr := ctx.Err(); cancelErr != nil {
err = cancelErr
}
}
return nil, err
}
if len(payload) > 0 && payload[0] == 0 {
return nil, errors.New(string(payload[1:]))
}
var p *check.Absolute
p, err = check.NewAbs(string(payload))
return p, err
}
// cancelRemote cancels a [pkg.Artifact] curing on a daemon.
func cancelRemote(
ctx context.Context,
addr *net.UnixAddr,
a pkg.Artifact,
) error {
done, conn, err := dial(ctx, addr)
if err != nil {
return err
}
defer close(done)
if err = writeSpecialHeader(conn, specialCancel); err != nil {
return errors.Join(err, conn.Close())
}
var n int
id := pkg.NewIR().Ident(a).Value()
if n, err = conn.Write(id[:]); err != nil {
return errors.Join(err, conn.Close())
} else if n != len(id) {
return errors.Join(io.ErrShortWrite, conn.Close())
}
return conn.Close()
}
// abortRemote aborts all [pkg.Artifact] curing on a daemon.
func abortRemote(
ctx context.Context,
addr *net.UnixAddr,
) error {
done, conn, err := dial(ctx, addr)
if err != nil {
return err
}
defer close(done)
err = writeSpecialHeader(conn, specialAbort)
return errors.Join(err, conn.Close())
}

cmd/mbf/daemon_test.go (new file, 146 lines)

@@ -0,0 +1,146 @@
package main
import (
"bytes"
"context"
"errors"
"io"
"log"
"net"
"os"
"path/filepath"
"slices"
"strings"
"testing"
"time"
"hakurei.app/check"
"hakurei.app/internal/pkg"
"hakurei.app/message"
)
func TestNoReply(t *testing.T) {
t.Parallel()
if !daemonDeadline().IsZero() {
t.Fatal("daemonDeadline did not return the zero value")
}
c, err := pkg.Open(
t.Context(),
message.New(log.New(os.Stderr, "cir: ", 0)),
0, 0, 0,
check.MustAbs(t.TempDir()),
)
if err != nil {
t.Fatalf("Open: error = %v", err)
}
defer c.Close()
client, server := net.Pipe()
done := make(chan struct{})
go func() {
defer close(done)
go func() {
<-t.Context().Done()
if _err := client.SetDeadline(time.Now()); _err != nil && !errors.Is(_err, io.ErrClosedPipe) {
panic(_err)
}
}()
if _err := c.EncodeAll(
client,
pkg.NewFile("check", []byte{0}),
); _err != nil {
panic(_err)
} else if _err = client.Close(); _err != nil {
panic(_err)
}
}()
a, cureErr := cureFromIR(c, server, remoteNoReply)
if cureErr != nil {
t.Fatalf("cureFromIR: error = %v", cureErr)
}
<-done
wantIdent := pkg.MustDecode("fiZf-ZY_Yq6qxJNrHbMiIPYCsGkUiKCRsZrcSELXTqZWtCnESlHmzV5ThhWWGGYG")
if gotIdent := c.Ident(a).Value(); gotIdent != wantIdent {
t.Errorf(
"cureFromIR: %s, want %s",
pkg.Encode(gotIdent), pkg.Encode(wantIdent),
)
}
}
func TestDaemon(t *testing.T) {
t.Parallel()
var buf bytes.Buffer
logger := log.New(&buf, "daemon: ", 0)
addr := net.UnixAddr{
Name: filepath.Join(t.TempDir(), "daemon"),
Net: "unix",
}
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
cm := cache{
ctx: ctx,
msg: message.New(logger),
base: t.TempDir(),
}
defer cm.Close()
ul, err := net.ListenUnix("unix", &addr)
if err != nil {
t.Fatalf("ListenUnix: error = %v", err)
}
done := make(chan struct{})
go func() {
defer close(done)
if _err := serve(ctx, logger, &cm, ul); _err != nil {
panic(_err)
}
}()
if err = cancelRemote(ctx, &addr, pkg.NewFile("nonexistent", nil)); err != nil {
t.Fatalf("cancelRemote: error = %v", err)
}
if err = abortRemote(ctx, &addr); err != nil {
t.Fatalf("abortRemote: error = %v", err)
}
// keep this last for synchronisation
var p *check.Absolute
p, err = cureRemote(ctx, &addr, pkg.NewFile("check", []byte{0}), 0)
if err != nil {
t.Fatalf("cureRemote: error = %v", err)
}
cancel()
<-done
const want = "fiZf-ZY_Yq6qxJNrHbMiIPYCsGkUiKCRsZrcSELXTqZWtCnESlHmzV5ThhWWGGYG"
if got := filepath.Base(p.String()); got != want {
t.Errorf("cureRemote: %s, want %s", got, want)
}
wantLog := []string{
"",
"daemon: aborting all pending cures",
"daemon: attempting to cancel invalid artifact kQm9fmnCmXST1-MMmxzcau2oKZCXXrlZydo4PkeV5hO_2PKfeC8t98hrbV_ZZx_j",
"daemon: fulfilled artifact fiZf-ZY_Yq6qxJNrHbMiIPYCsGkUiKCRsZrcSELXTqZWtCnESlHmzV5ThhWWGGYG",
}
gotLog := strings.Split(buf.String(), "\n")
slices.Sort(gotLog)
if !slices.Equal(gotLog, wantLog) {
t.Errorf(
"serve: logged\n%s\nwant\n%s",
strings.Join(gotLog, "\n"), strings.Join(wantLog, "\n"),
)
}
}

cmd/mbf/info.go (new file, 127 lines)

@@ -0,0 +1,127 @@
package main
import (
"errors"
"fmt"
"io"
"os"
"strings"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
)
// commandInfo implements the info subcommand.
func commandInfo(
cm *cache,
args []string,
w io.Writer,
writeStatus bool,
reportPath string,
) (err error) {
if len(args) == 0 {
return errors.New("info requires at least 1 argument")
}
var r *rosa.Report
if reportPath != "" {
if r, err = rosa.OpenReport(reportPath); err != nil {
return err
}
defer func() {
if closeErr := r.Close(); err == nil {
err = closeErr
}
}()
defer r.HandleAccess(&err)()
}
// recovered by HandleAccess
mustPrintln := func(a ...any) {
if _, _err := fmt.Fprintln(w, a...); _err != nil {
panic(_err)
}
}
mustPrint := func(a ...any) {
if _, _err := fmt.Fprint(w, a...); _err != nil {
panic(_err)
}
}
for i, name := range args {
if p, ok := rosa.ResolveName(name); !ok {
return fmt.Errorf("unknown artifact %q", name)
} else {
var suffix string
if version := rosa.Std.Version(p); version != rosa.Unversioned {
suffix += "-" + version
}
mustPrintln("name : " + name + suffix)
meta := rosa.GetMetadata(p)
mustPrintln("description : " + meta.Description)
if meta.Website != "" {
mustPrintln("website : " +
strings.TrimSuffix(meta.Website, "/"))
}
if len(meta.Dependencies) > 0 {
mustPrint("depends on :")
for _, d := range meta.Dependencies {
s := rosa.GetMetadata(d).Name
if version := rosa.Std.Version(d); version != rosa.Unversioned {
s += "-" + version
}
mustPrint(" " + s)
}
mustPrintln()
}
const statusPrefix = "status : "
if writeStatus {
if r == nil {
var f io.ReadSeekCloser
err = cm.Do(func(cache *pkg.Cache) (err error) {
f, err = cache.OpenStatus(rosa.Std.Load(p))
return
})
if err != nil {
if errors.Is(err, os.ErrNotExist) {
mustPrintln(
statusPrefix + "not yet cured",
)
} else {
return
}
} else {
mustPrint(statusPrefix)
_, err = io.Copy(w, f)
if err = errors.Join(err, f.Close()); err != nil {
return
}
}
} else if err = cm.Do(func(cache *pkg.Cache) (err error) {
status, n := r.ArtifactOf(cache.Ident(rosa.Std.Load(p)))
if status == nil {
mustPrintln(
statusPrefix + "not in report",
)
} else {
mustPrintln("size :", n)
mustPrint(statusPrefix)
if _, err = w.Write(status); err != nil {
return
}
}
return
}); err != nil {
return
}
}
if i != len(args)-1 {
mustPrintln()
}
}
}
return nil
}

cmd/mbf/info_test.go (new file, 170 lines)

@@ -0,0 +1,170 @@
package main
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"reflect"
"strings"
"syscall"
"testing"
"unsafe"
"hakurei.app/internal/pkg"
"hakurei.app/internal/rosa"
"hakurei.app/message"
)
func TestInfo(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
args []string
status map[string]string
report string
want string
wantErr any
}{
{"qemu", []string{"qemu"}, nil, "", `
name : qemu-` + rosa.Std.Version(rosa.QEMU) + `
description : a generic and open source machine emulator and virtualizer
website : https://www.qemu.org
depends on : glib-` + rosa.Std.Version(rosa.GLib) + ` zstd-` + rosa.Std.Version(rosa.Zstd) + `
`, nil},
{"multi", []string{"hakurei", "hakurei-dist"}, nil, "", `
name : hakurei-` + rosa.Std.Version(rosa.Hakurei) + `
description : low-level userspace tooling for Rosa OS
website : https://hakurei.app
name : hakurei-dist-` + rosa.Std.Version(rosa.HakureiDist) + `
description : low-level userspace tooling for Rosa OS (distribution tarball)
website : https://hakurei.app
`, nil},
{"nonexistent", []string{"zlib", "\x00"}, nil, "", `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
`, fmt.Errorf("unknown artifact %q", "\x00")},
{"status cache", []string{"zlib", "zstd"}, map[string]string{
"zstd": "internal/pkg (amd64) on satori\n",
"hakurei": "internal/pkg (amd64) on satori\n\n",
}, "", `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
status : not yet cured
name : zstd-` + rosa.Std.Version(rosa.Zstd) + `
description : a fast compression algorithm
website : https://facebook.github.io/zstd
status : internal/pkg (amd64) on satori
`, nil},
{"status cache perm", []string{"zlib"}, map[string]string{
"zlib": "\x00",
}, "", `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
`, func(cm *cache) error {
return &os.PathError{
Op: "open",
Path: filepath.Join(cm.base, "status", pkg.Encode(cm.c.Ident(rosa.Std.Load(rosa.Zlib)).Value())),
Err: syscall.EACCES,
}
}},
{"status report", []string{"zlib"}, nil, strings.Repeat("\x00", len(pkg.Checksum{})+8), `
name : zlib-` + rosa.Std.Version(rosa.Zlib) + `
description : lossless data-compression library
website : https://zlib.net
status : not in report
`, nil},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var (
cm *cache
buf strings.Builder
rp string
)
if tc.status != nil || tc.report != "" {
cm = &cache{
ctx: context.Background(),
msg: message.New(log.New(os.Stderr, "info: ", 0)),
base: t.TempDir(),
}
defer cm.Close()
}
if tc.report != "" {
rp = filepath.Join(t.TempDir(), "report")
if err := os.WriteFile(
rp,
unsafe.Slice(unsafe.StringData(tc.report), len(tc.report)),
0400,
); err != nil {
t.Fatal(err)
}
}
if tc.status != nil {
for name, status := range tc.status {
p, ok := rosa.ResolveName(name)
if !ok {
t.Fatalf("invalid name %q", name)
}
perm := os.FileMode(0400)
if status == "\x00" {
perm = 0
}
if err := cm.Do(func(cache *pkg.Cache) error {
return os.WriteFile(filepath.Join(
cm.base,
"status",
pkg.Encode(cache.Ident(rosa.Std.Load(p)).Value()),
), unsafe.Slice(unsafe.StringData(status), len(status)), perm)
}); err != nil {
t.Fatalf("Do: error = %v", err)
}
}
}
var wantErr error
switch c := tc.wantErr.(type) {
case error:
wantErr = c
case func(cm *cache) error:
wantErr = c(cm)
default:
if tc.wantErr != nil {
t.Fatalf("invalid wantErr %#v", tc.wantErr)
}
}
if err := commandInfo(
cm,
tc.args,
&buf,
cm != nil,
rp,
); !reflect.DeepEqual(err, wantErr) {
t.Fatalf("commandInfo: error = %v, want %v", err, wantErr)
}
if got := buf.String(); got != strings.TrimPrefix(tc.want, "\n") {
t.Errorf("commandInfo:\n%s\nwant\n%s", got, tc.want)
}
})
}
}


@@ -14,16 +14,17 @@ package main
 import (
 "context"
+"crypto/sha512"
 "errors"
 "fmt"
 "io"
 "log"
+"net"
 "os"
 "os/signal"
 "path/filepath"
 "runtime"
 "strconv"
-"strings"
 "sync"
 "sync/atomic"
 "syscall"
@@ -53,14 +54,13 @@
 log.Fatal("this program must not run as root")
 }
-var cache *pkg.Cache
 ctx, stop := signal.NotifyContext(context.Background(),
 syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP)
 defer stop()
+var cm cache
 defer func() {
-if cache != nil {
-cache.Close()
-}
+cm.Close()
 if r := recover(); r != nil {
 fmt.Println(r)
@@ -70,60 +70,65 @@
 var (
 flagQuiet bool
-flagCures int
-flagBase string
-flagIdle bool
-flagHostAbstract bool
+addr net.UnixAddr
 )
-c := command.New(os.Stderr, log.Printf, "mbf", func([]string) (err error) {
+c := command.New(os.Stderr, log.Printf, "mbf", func([]string) error {
 msg.SwapVerbose(!flagQuiet)
-flagBase = os.ExpandEnv(flagBase)
-if flagBase == "" {
-flagBase = "cache"
+cm.ctx, cm.msg = ctx, msg
+cm.base = os.ExpandEnv(cm.base)
+addr.Net = "unix"
+addr.Name = os.ExpandEnv(addr.Name)
+if addr.Name == "" {
+addr.Name = "daemon"
 }
-var base *check.Absolute
-if flagBase, err = filepath.Abs(flagBase); err != nil {
-return
-} else if base, err = check.NewAbs(flagBase); err != nil {
-return
-}
-var flags int
-if flagIdle {
-flags |= pkg.CSchedIdle
-}
-if flagHostAbstract {
-flags |= pkg.CHostAbstract
-}
-cache, err = pkg.Open(ctx, msg, flags, flagCures, base)
-return
+return nil
 }).Flag(
 &flagQuiet,
 "q", command.BoolFlag(false),
 "Do not print cure messages",
 ).Flag(
-&flagCures,
+&cm.cures,
 "cures", command.IntFlag(0),
 "Maximum number of dependencies to cure at any given time",
 ).Flag(
-&flagBase,
+&cm.jobs,
+"jobs", command.IntFlag(0),
+"Preferred number of jobs to run, when applicable",
+).Flag(
+&cm.base,
 "d", command.StringFlag("$MBF_CACHE_DIR"),
 "Directory to store cured artifacts",
 ).Flag(
-&flagIdle,
+&cm.idle,
 "sched-idle", command.BoolFlag(false),
 "Set SCHED_IDLE scheduling policy",
 ).Flag(
-&flagHostAbstract,
+&cm.hostAbstract,
 "host-abstract", command.BoolFlag(
 os.Getenv("MBF_HOST_ABSTRACT") != "",
 ),
 "Do not restrict networked cure containers from connecting to host "+
 "abstract UNIX sockets",
+).Flag(
+&addr.Name,
+"socket", command.StringFlag("$MBF_DAEMON_SOCKET"),
+"Pathname of socket to bind to",
 )
+c.NewCommand(
+"checksum", "Compute checksum of data read from standard input",
+func([]string) error {
+go func() { <-ctx.Done(); os.Exit(1) }()
+h := sha512.New384()
+if _, err := io.Copy(h, os.Stdin); err != nil {
+return err
+}
+log.Println(pkg.Encode(pkg.Checksum(h.Sum(nil))))
+return nil
+},
+)
 {
@@ -137,7 +142,9 @@
 if flagShifts < 0 || flagShifts > 31 {
 flagShifts = 12
 }
-return cache.Scrub(runtime.NumCPU() << flagShifts)
+return cm.Do(func(cache *pkg.Cache) error {
+return cache.Scrub(runtime.NumCPU() << flagShifts)
+})
 },
 ).Flag(
 &flagShifts,
@@ -155,105 +162,17 @@
 "info",
 "Display out-of-band metadata of an artifact",
 func(args []string) (err error) {
-if len(args) == 0 {
-return errors.New("info requires at least 1 argument")
-}
-var r *rosa.Report
-if flagReport != "" {
-if r, err = rosa.OpenReport(flagReport); err != nil {
-return err
-}
-defer func() {
-if closeErr := r.Close(); err == nil {
-err = closeErr
-}
-}()
-defer r.HandleAccess(&err)()
-}
-for i, name := range args {
-if p, ok := rosa.ResolveName(name); !ok {
-return fmt.Errorf("unknown artifact %q", name)
-} else {
-var suffix string
-if version := rosa.Std.Version(p); version != rosa.Unversioned {
-suffix += "-" + version
-}
-fmt.Println("name : " + name + suffix)
-meta := rosa.GetMetadata(p)
-fmt.Println("description : " + meta.Description)
-if meta.Website != "" {
-fmt.Println("website : " +
-strings.TrimSuffix(meta.Website, "/"))
-}
-if len(meta.Dependencies) > 0 {
-fmt.Print("depends on :")
-for _, d := range meta.Dependencies {
-s := rosa.GetMetadata(d).Name
-if version := rosa.Std.Version(d); version != rosa.Unversioned {
-s += "-" + version
-}
-fmt.Print(" " + s)
-}
-fmt.Println()
-}
-const statusPrefix = "status : "
-if flagStatus {
-if r == nil {
-var f io.ReadSeekCloser
-f, err = cache.OpenStatus(rosa.Std.Load(p))
-if err != nil {
-if errors.Is(err, os.ErrNotExist) {
-fmt.Println(
-statusPrefix + "not yet cured",
-)
-} else {
-return
-}
-} else {
-fmt.Print(statusPrefix)
-_, err = io.Copy(os.Stdout, f)
-if err = errors.Join(err, f.Close()); err != nil {
-return
-}
-}
-} else {
-status, n := r.ArtifactOf(cache.Ident(rosa.Std.Load(p)))
-if status == nil {
-fmt.Println(
-statusPrefix + "not in report",
-)
-} else {
-fmt.Println("size :", n)
-fmt.Print(statusPrefix)
-if _, err = os.Stdout.Write(status); err != nil {
-return
-}
-}
-}
-}
-if i != len(args)-1 {
-fmt.Println()
-}
-}
-}
-return nil
+return commandInfo(&cm, args, os.Stdout, flagStatus, flagReport)
 },
-).
-Flag(
-&flagStatus,
-"status", command.BoolFlag(false),
-"Display cure status if available",
-).
-Flag(
-&flagReport,
-"report", command.StringFlag(""),
-"Load cure status from this report file instead of cache",
-)
+).Flag(
+&flagStatus,
+"status", command.BoolFlag(false),
+"Display cure status if available",
+).Flag(
+&flagReport,
+"report", command.StringFlag(""),
+"Load cure status from this report file instead of cache",
+)
 }
 c.NewCommand(
@@ -287,7 +206,9 @@ func main() {
 			if ext.Isatty(int(w.Fd())) {
 				return errors.New("output appears to be a terminal")
 			}
-			return rosa.WriteReport(msg, w, cache)
+			return cm.Do(func(cache *pkg.Cache) error {
+				return rosa.WriteReport(msg, w, cache)
+			})
 		},
 	)
@@ -350,14 +271,26 @@ func main() {
 					" package(s) are out of date"))
 			}
 			return errors.Join(errs...)
-		}).
-			Flag(
-				&flagJobs,
-				"j", command.IntFlag(32),
-				"Maximum number of simultaneous connections",
-			)
+		}).Flag(
+			&flagJobs,
+			"j", command.IntFlag(32),
+			"Maximum number of simultaneous connections",
+		)
 	}
+	c.NewCommand(
+		"daemon",
+		"Service artifact IR with Rosa OS extensions",
+		func(args []string) error {
+			ul, err := net.ListenUnix("unix", &addr)
+			if err != nil {
+				return err
+			}
+			log.Printf("listening on pathname socket at %s", addr.Name)
+			return serve(ctx, log.Default(), &cm, ul)
+		},
+	)
 	{
 		var (
 			flagGentoo string
@@ -382,25 +315,37 @@ func main() {
 					rosa.SetGentooStage3(flagGentoo, checksum)
 				}
-				_, _, _, stage1 := (t - 2).NewLLVM()
-				_, _, _, stage2 := (t - 1).NewLLVM()
-				_, _, _, stage3 := t.NewLLVM()
 				var (
 					pathname *check.Absolute
 					checksum [2]unique.Handle[pkg.Checksum]
 				)
-				if pathname, _, err = cache.Cure(stage1); err != nil {
-					return err
+				if err = cm.Do(func(cache *pkg.Cache) (err error) {
+					pathname, _, err = cache.Cure(
+						(t - 2).Load(rosa.Clang),
+					)
+					return
+				}); err != nil {
+					return
 				}
 				log.Println("stage1:", pathname)
-				if pathname, checksum[0], err = cache.Cure(stage2); err != nil {
-					return err
+				if err = cm.Do(func(cache *pkg.Cache) (err error) {
+					pathname, checksum[0], err = cache.Cure(
+						(t - 1).Load(rosa.Clang),
+					)
+					return
+				}); err != nil {
+					return
 				}
 				log.Println("stage2:", pathname)
-				if pathname, checksum[1], err = cache.Cure(stage3); err != nil {
-					return err
+				if err = cm.Do(func(cache *pkg.Cache) (err error) {
+					pathname, checksum[1], err = cache.Cure(
+						t.Load(rosa.Clang),
+					)
+					return
+				}); err != nil {
+					return
 				}
 				log.Println("stage3:", pathname)
@@ -417,39 +362,41 @@ func main() {
 				}
 				if flagStage0 {
-					if pathname, _, err = cache.Cure(
-						t.Load(rosa.Stage0),
-					); err != nil {
-						return err
+					if err = cm.Do(func(cache *pkg.Cache) (err error) {
+						pathname, _, err = cache.Cure(
+							t.Load(rosa.Stage0),
+						)
+						return
+					}); err != nil {
+						return
 					}
 					log.Println(pathname)
 				}
 				return
 			},
-		).
-			Flag(
-				&flagGentoo,
-				"gentoo", command.StringFlag(""),
-				"Bootstrap from a Gentoo stage3 tarball",
-			).
-			Flag(
-				&flagChecksum,
-				"checksum", command.StringFlag(""),
-				"Checksum of Gentoo stage3 tarball",
-			).
-			Flag(
-				&flagStage0,
-				"stage0", command.BoolFlag(false),
-				"Create bootstrap stage0 tarball",
-			)
+		).Flag(
+			&flagGentoo,
+			"gentoo", command.StringFlag(""),
+			"Bootstrap from a Gentoo stage3 tarball",
+		).Flag(
+			&flagChecksum,
+			"checksum", command.StringFlag(""),
+			"Checksum of Gentoo stage3 tarball",
+		).Flag(
+			&flagStage0,
+			"stage0", command.BoolFlag(false),
+			"Create bootstrap stage0 tarball",
+		)
 	}
 	{
 		var (
 			flagDump string
 			flagEnter bool
 			flagExport string
+			flagRemote bool
+			flagNoReply bool
 		)
 		c.NewCommand(
 			"cure",
@@ -465,7 +412,11 @@ func main() {
 				switch {
 				default:
-					pathname, _, err := cache.Cure(rosa.Std.Load(p))
+					var pathname *check.Absolute
+					err := cm.Do(func(cache *pkg.Cache) (err error) {
+						pathname, _, err = cache.Cure(rosa.Std.Load(p))
+						return
+					})
 					if err != nil {
 						return err
 					}
@@ -505,7 +456,7 @@ func main() {
 						return err
 					}
-					if err = cache.EncodeAll(f, rosa.Std.Load(p)); err != nil {
+					if err = pkg.NewIR().EncodeAll(f, rosa.Std.Load(p)); err != nil {
 						_ = f.Close()
 						return err
 					}
@@ -513,33 +464,68 @@ func main() {
 					return f.Close()
 				case flagEnter:
-					return cache.EnterExec(
-						ctx,
-						rosa.Std.Load(p),
-						true, os.Stdin, os.Stdout, os.Stderr,
-						rosa.AbsSystem.Append("bin", "mksh"),
-						"sh",
-					)
+					return cm.Do(func(cache *pkg.Cache) error {
+						return cache.EnterExec(
+							ctx,
+							rosa.Std.Load(p),
+							true, os.Stdin, os.Stdout, os.Stderr,
+							rosa.AbsSystem.Append("bin", "mksh"),
+							"sh",
+						)
+					})
+				case flagRemote:
+					var flags uint64
+					if flagNoReply {
+						flags |= remoteNoReply
+					}
+					a := rosa.Std.Load(p)
+					pathname, err := cureRemote(ctx, &addr, a, flags)
+					if !flagNoReply && err == nil {
+						log.Println(pathname)
+					}
+					if errors.Is(err, context.Canceled) {
+						cc, cancel := context.WithDeadline(context.Background(), daemonDeadline())
+						defer cancel()
+						if _err := cancelRemote(cc, &addr, a); _err != nil {
+							log.Println(err)
+						}
+					}
+					return err
 				}
 			},
-		).
-			Flag(
-				&flagDump,
-				"dump", command.StringFlag(""),
-				"Write IR to specified pathname and terminate",
-			).
-			Flag(
-				&flagExport,
-				"export", command.StringFlag(""),
-				"Export cured artifact to specified pathname",
-			).
-			Flag(
-				&flagEnter,
-				"enter", command.BoolFlag(false),
-				"Enter cure container with an interactive shell",
-			)
+		).Flag(
+			&flagDump,
+			"dump", command.StringFlag(""),
+			"Write IR to specified pathname and terminate",
+		).Flag(
+			&flagExport,
+			"export", command.StringFlag(""),
+			"Export cured artifact to specified pathname",
+		).Flag(
+			&flagEnter,
+			"enter", command.BoolFlag(false),
+			"Enter cure container with an interactive shell",
+		).Flag(
+			&flagRemote,
+			"daemon", command.BoolFlag(false),
+			"Cure artifact on the daemon",
+		).Flag(
+			&flagNoReply,
+			"no-reply", command.BoolFlag(false),
+			"Do not receive a reply from the daemon",
+		)
 	}
+	c.NewCommand(
+		"abort",
+		"Abort all pending cures on the daemon",
+		func([]string) error { return abortRemote(ctx, &addr) },
+	)
 	{
 		var (
 			flagNet bool
@@ -551,7 +537,7 @@ func main() {
 			"shell",
 			"Interactive shell in the specified Rosa OS environment",
 			func(args []string) error {
-				presets := make([]rosa.PArtifact, len(args))
+				presets := make([]rosa.PArtifact, len(args)+3)
 				for i, arg := range args {
 					p, ok := rosa.ResolveName(arg)
 					if !ok {
@@ -559,21 +545,24 @@ func main() {
 					}
 					presets[i] = p
 				}
+				base := rosa.Clang
+				if !flagWithToolchain {
+					base = rosa.Musl
+				}
+				presets = append(presets,
+					base,
+					rosa.Mksh,
+					rosa.Toybox,
+				)
 				root := make(pkg.Collect, 0, 6+len(args))
 				root = rosa.Std.AppendPresets(root, presets...)
-				if flagWithToolchain {
-					musl, compilerRT, runtimes, clang := (rosa.Std - 1).NewLLVM()
-					root = append(root, musl, compilerRT, runtimes, clang)
-				} else {
-					root = append(root, rosa.Std.Load(rosa.Musl))
-				}
-				root = append(root,
-					rosa.Std.Load(rosa.Mksh),
-					rosa.Std.Load(rosa.Toybox),
-				)
-				if _, _, err := cache.Cure(&root); err == nil {
+				if err := cm.Do(func(cache *pkg.Cache) error {
+					_, _, err := cache.Cure(&root)
+					return err
+				}); err == nil {
 					return errors.New("unreachable")
 				} else if !pkg.IsCollected(err) {
 					return err
@@ -585,11 +574,22 @@ func main() {
 				}
 				cured := make(map[pkg.Artifact]cureRes)
 				for _, a := range root {
-					pathname, checksum, err := cache.Cure(a)
-					if err != nil {
+					if err := cm.Do(func(cache *pkg.Cache) error {
+						pathname, checksum, err := cache.Cure(a)
+						if err == nil {
+							cured[a] = cureRes{pathname, checksum}
+						}
+						return err
+					}); err != nil {
+						return err
+					}
+				}
+				// explicitly open for direct error-free use from this point
+				if cm.c == nil {
+					if err := cm.open(); err != nil {
 						return err
 					}
-					cured[a] = cureRes{pathname, checksum}
 				}
 				layers := pkg.PromoteLayers(root, func(a pkg.Artifact) (
@@ -599,7 +599,7 @@ func main() {
 					res := cured[a]
 					return res.pathname, res.checksum
 				}, func(i int, d pkg.Artifact) {
-					r := pkg.Encode(cache.Ident(d).Value())
+					r := pkg.Encode(cm.c.Ident(d).Value())
 					if s, ok := d.(fmt.Stringer); ok {
 						if name := s.String(); name != "" {
 							r += "-" + name
@@ -663,22 +663,19 @@ func main() {
 				}
 				return z.Wait()
 			},
-		).
-			Flag(
-				&flagNet,
-				"net", command.BoolFlag(false),
-				"Share host net namespace",
-			).
-			Flag(
-				&flagSession,
-				"session", command.BoolFlag(true),
-				"Retain session",
-			).
-			Flag(
-				&flagWithToolchain,
-				"with-toolchain", command.BoolFlag(false),
-				"Include the stage2 LLVM toolchain",
-			)
+		).Flag(
+			&flagNet,
+			"net", command.BoolFlag(false),
+			"Share host net namespace",
+		).Flag(
+			&flagSession,
+			"session", command.BoolFlag(true),
+			"Retain session",
+		).Flag(
+			&flagWithToolchain,
+			"with-toolchain", command.BoolFlag(false),
+			"Include the stage2 LLVM toolchain",
+		)
 	}
@@ -689,9 +686,7 @@ func main() {
 	)
 	c.MustParse(os.Args[1:], func(err error) {
-		if cache != nil {
-			cache.Close()
-		}
+		cm.Close()
 		if w, ok := err.(interface{ Unwrap() []error }); !ok {
 			log.Fatal(err)
 		} else {

View File
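The repeated `cm.Do(func(cache *pkg.Cache) error { ... })` calls introduced throughout the diff above suggest a manager that lazily opens a shared cache on first use and serialises access to it. A minimal sketch of that shape, assuming hypothetical `manager`/`resource`/`open` names (the real `cm` in this repository may differ):

```go
package main

import (
	"fmt"
	"sync"
)

// resource stands in for pkg.Cache; hypothetical, for illustration only.
type resource struct{ opens int }

// manager lazily opens a shared resource on first use.
type manager struct {
	mu sync.Mutex
	c  *resource
}

// open initialises the underlying resource; called at most once by Do.
func (m *manager) open() error {
	m.c = &resource{}
	m.c.opens++
	return nil
}

// Do runs f against the lazily opened resource while holding the lock,
// mirroring the cm.Do(func(cache *pkg.Cache) error { ... }) call shape.
func (m *manager) Do(f func(c *resource) error) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.c == nil {
		if err := m.open(); err != nil {
			return err
		}
	}
	return f(m.c)
}

func main() {
	var m manager
	for i := 0; i < 3; i++ {
		_ = m.Do(func(c *resource) error { return nil })
	}
	fmt.Println(m.c.opens) // opened once despite three Do calls
}
```

This keeps commands like `info` from paying the cost of opening (and locking) the on-filesystem cache unless they actually touch it.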

@@ -27,6 +27,11 @@ import (
 // AbsWork is the container pathname [TContext.GetWorkDir] is mounted on.
 var AbsWork = fhs.AbsRoot.Append("work/")
 
+// EnvJobs is the name of the environment variable holding a decimal
+// representation of the preferred job count. Its value must not affect cure
+// outcome.
+const EnvJobs = "CURE_JOBS"
+
 // ExecPath is a slice of [Artifact] and the [check.Absolute] pathname to make
 // it available at under in the container.
 type ExecPath struct {
@@ -397,7 +402,7 @@ const SeccompPresets = std.PresetStrict &
 func (a *execArtifact) makeContainer(
 	ctx context.Context,
 	msg message.Msg,
-	flags int,
+	flags, jobs int,
 	hostNet bool,
 	temp, work *check.Absolute,
 	getArtifact GetArtifactFunc,
@@ -432,8 +437,8 @@ func (a *execArtifact) makeContainer(
 		z.Hostname = "cure-net"
 	}
 	z.Uid, z.Gid = (1<<10)-1, (1<<10)-1
-	z.Dir, z.Env, z.Path, z.Args = a.dir, a.env, a.path, a.args
+	z.Dir, z.Path, z.Args = a.dir, a.path, a.args
+	z.Env = slices.Concat(a.env, []string{EnvJobs + "=" + strconv.Itoa(jobs)})
 
 	z.Grow(len(a.paths) + 4)
 	for i, b := range a.paths {
@@ -563,6 +568,7 @@ func (c *Cache) EnterExec(
 	z, err = e.makeContainer(
 		ctx, c.msg,
 		c.flags,
+		c.jobs,
 		hostNet,
 		temp, work,
 		func(a Artifact) (*check.Absolute, unique.Handle[Checksum]) {
@@ -602,7 +608,7 @@ func (a *execArtifact) cure(f *FContext, hostNet bool) (err error) {
 	msg := f.GetMessage()
 	var z *container.Container
 	if z, err = a.makeContainer(
-		ctx, msg, f.cache.flags, hostNet,
+		ctx, msg, f.cache.flags, f.GetJobs(), hostNet,
 		f.GetTempDir(), f.GetWorkDir(),
 		f.GetArtifact,
 		f.cache.Ident,

View File

@@ -3,7 +3,6 @@ package pkg
 import (
 	"bufio"
 	"bytes"
-	"context"
 	"crypto/sha512"
 	"encoding/binary"
 	"errors"
@@ -11,6 +10,7 @@ import (
 	"io"
 	"slices"
 	"strconv"
+	"sync"
 	"syscall"
 	"unique"
 	"unsafe"
@@ -39,22 +39,45 @@ func panicToError(errP *error) {
 	}
 }
 
+// irCache implements [IRCache].
+type irCache struct {
+	// Artifact to [unique.Handle] of identifier cache.
+	artifact sync.Map
+	// Identifier free list, must not be accessed directly.
+	identPool sync.Pool
+}
+
+// zeroIRCache returns the initialised value of irCache.
+func zeroIRCache() irCache {
+	return irCache{
+		identPool: sync.Pool{New: func() any { return new(extIdent) }},
+	}
+}
+
+// IRCache provides memory management and caching primitives for IR and
+// identifier operations against [Artifact] implementations.
+//
+// The zero value is not safe for use.
+type IRCache struct{ irCache }
+
+// NewIR returns the address of a new [IRCache].
+func NewIR() *IRCache {
+	return &IRCache{zeroIRCache()}
+}
+
 // IContext is passed to [Artifact.Params] and provides methods for writing
 // values to the IR writer. It does not expose the underlying [io.Writer].
 //
 // IContext is valid until [Artifact.Params] returns.
 type IContext struct {
-	// Address of underlying [Cache], should be zeroed or made unusable after
+	// Address of underlying irCache, should be zeroed or made unusable after
 	// [Artifact.Params] returns and must not be exposed directly.
-	cache *Cache
+	ic *irCache
 	// Written to by various methods, should be zeroed after [Artifact.Params]
 	// returns and must not be exposed directly.
 	w io.Writer
 }
 
-// Unwrap returns the underlying [context.Context].
-func (i *IContext) Unwrap() context.Context { return i.cache.ctx }
-
 // irZero is a zero IR word.
 var irZero [wordSize]byte
@@ -136,11 +159,11 @@ func (i *IContext) mustWrite(p []byte) {
 // WriteIdent is not defined for an [Artifact] not part of the slice returned by
 // [Artifact.Dependencies].
 func (i *IContext) WriteIdent(a Artifact) {
-	buf := i.cache.getIdentBuf()
-	defer i.cache.putIdentBuf(buf)
+	buf := i.ic.getIdentBuf()
+	defer i.ic.putIdentBuf(buf)
 
 	IRKindIdent.encodeHeader(0).put(buf[:])
-	*(*ID)(buf[wordSize:]) = i.cache.Ident(a).Value()
+	*(*ID)(buf[wordSize:]) = i.ic.Ident(a).Value()
 	i.mustWrite(buf[:])
 }
@@ -183,19 +206,19 @@ func (i *IContext) WriteString(s string) {
 
 // Encode writes a deterministic, efficient representation of a to w and returns
 // the first non-nil error encountered while writing to w.
-func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
+func (ic *irCache) Encode(w io.Writer, a Artifact) (err error) {
 	deps := a.Dependencies()
 	idents := make([]*extIdent, len(deps))
 	for i, d := range deps {
-		dbuf, did := c.unsafeIdent(d, true)
+		dbuf, did := ic.unsafeIdent(d, true)
 		if dbuf == nil {
-			dbuf = c.getIdentBuf()
+			dbuf = ic.getIdentBuf()
 			binary.LittleEndian.PutUint64(dbuf[:], uint64(d.Kind()))
 			*(*ID)(dbuf[wordSize:]) = did.Value()
 		} else {
-			c.storeIdent(d, dbuf)
+			ic.storeIdent(d, dbuf)
 		}
-		defer c.putIdentBuf(dbuf)
+		defer ic.putIdentBuf(dbuf)
 		idents[i] = dbuf
 	}
 	slices.SortFunc(idents, func(a, b *extIdent) int {
@@ -221,10 +244,10 @@ func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
 	}
 
 	func() {
-		i := IContext{c, w}
+		i := IContext{ic, w}
 		defer panicToError(&err)
-		defer func() { i.cache, i.w = nil, nil }()
+		defer func() { i.ic, i.w = nil, nil }()
 		a.Params(&i)
 	}()
@@ -233,7 +256,7 @@ func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
 	}
 
 	var f IREndFlag
-	kcBuf := c.getIdentBuf()
+	kcBuf := ic.getIdentBuf()
 	sz := wordSize
 	if kc, ok := a.(KnownChecksum); ok {
 		f |= IREndKnownChecksum
@@ -243,13 +266,13 @@ func (c *Cache) Encode(w io.Writer, a Artifact) (err error) {
 	IRKindEnd.encodeHeader(uint32(f)).put(kcBuf[:])
 
 	_, err = w.Write(kcBuf[:sz])
-	c.putIdentBuf(kcBuf)
+	ic.putIdentBuf(kcBuf)
 	return
 }
 
 // encodeAll implements EncodeAll by recursively encoding dependencies and
 // performs deduplication by value via the encoded map.
-func (c *Cache) encodeAll(
+func (ic *irCache) encodeAll(
 	w io.Writer,
 	a Artifact,
 	encoded map[Artifact]struct{},
@@ -259,13 +282,13 @@ func (c *Cache) encodeAll(
 	}
 
 	for _, d := range a.Dependencies() {
-		if err = c.encodeAll(w, d, encoded); err != nil {
+		if err = ic.encodeAll(w, d, encoded); err != nil {
 			return
 		}
 	}
 
 	encoded[a] = struct{}{}
-	return c.Encode(w, a)
+	return ic.Encode(w, a)
 }
 
 // EncodeAll writes a self-describing IR stream of a to w and returns the first
@@ -283,8 +306,8 @@ func (c *Cache) encodeAll(
 // the ident cache, nor does it contribute identifiers it computes back to the
 // ident cache. Because of this, multiple invocations of EncodeAll will have
 // similar cost and does not amortise when combined with a call to Cure.
-func (c *Cache) EncodeAll(w io.Writer, a Artifact) error {
-	return c.encodeAll(w, a, make(map[Artifact]struct{}))
+func (ic *irCache) EncodeAll(w io.Writer, a Artifact) error {
+	return ic.encodeAll(w, a, make(map[Artifact]struct{}))
 }
 
 // ErrRemainingIR is returned for a [IRReadFunc] that failed to call
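Elsewhere in this file, identifiers are derived by hashing the deterministic encoded form with `sha512.New384`. A toy sketch of that idea, with a hypothetical `encode` standing in for `irCache.Encode` (the real wire format is the IR stream above, not this simplified one):

```go
package main

import (
	"crypto/sha512"
	"encoding/base64"
	"fmt"
	"io"
)

// encode writes a deterministic byte representation of an artifact to w.
// Hypothetical stand-in for irCache.Encode, for illustration only.
func encode(w io.Writer, name string, deps []string) {
	for _, d := range deps {
		io.WriteString(w, d+"\x00")
	}
	io.WriteString(w, name)
}

// ident hashes the encoded form with SHA-384, the primitive this file uses
// (sha512.New384) to derive content-addressed identifiers.
func ident(name string, deps []string) string {
	h := sha512.New384()
	encode(h, name, deps)
	return base64.RawURLEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	a := ident("musl", nil)
	b := ident("musl", nil)
	fmt.Println(a == b) // identical input encodes to the identical identifier
}
```

Because the encoding is deterministic, equal artifacts always hash to the same identifier, which is what makes the `sync.Map` ident cache and on-disk content addressing agree.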

View File

@@ -8,7 +8,6 @@ import (
 	"testing"
 	"testing/fstest"
 	"unique"
-	"unsafe"
 
 	"hakurei.app/check"
 	"hakurei.app/internal/pkg"
@@ -33,20 +32,14 @@ func TestHTTPGet(t *testing.T) {
 	checkWithCache(t, []cacheTestCase{
 		{"direct", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
-			var r pkg.RContext
-			rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
-			reflect.NewAt(
-				rCacheVal.Type(),
-				unsafe.Pointer(rCacheVal.UnsafeAddr()),
-			).Elem().Set(reflect.ValueOf(c))
+			r := newRContext(t, c)
 
 			f := pkg.NewHTTPGet(
 				&client,
 				"file:///testdata",
 				testdataChecksum.Value(),
 			)
 			var got []byte
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)
@@ -65,7 +58,7 @@ func TestHTTPGet(t *testing.T) {
 			wantErrMismatch := &pkg.ChecksumMismatchError{
 				Got: testdataChecksum.Value(),
 			}
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)
@@ -76,7 +69,7 @@ func TestHTTPGet(t *testing.T) {
 			}
 
 			// check fallback validation
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if err = rc.Close(); !reflect.DeepEqual(err, wantErrMismatch) {
 				t.Fatalf("Close: error = %#v, want %#v", err, wantErrMismatch)
@@ -89,18 +82,13 @@ func TestHTTPGet(t *testing.T) {
 				pkg.Checksum{},
 			)
 			wantErrNotFound := pkg.ResponseStatusError(http.StatusNotFound)
-			if _, err := f.Cure(&r); !reflect.DeepEqual(err, wantErrNotFound) {
+			if _, err := f.Cure(r); !reflect.DeepEqual(err, wantErrNotFound) {
 				t.Fatalf("Cure: error = %#v, want %#v", err, wantErrNotFound)
 			}
 		}, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
 		{"cure", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
-			var r pkg.RContext
-			rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
-			reflect.NewAt(
-				rCacheVal.Type(),
-				unsafe.Pointer(rCacheVal.UnsafeAddr()),
-			).Elem().Set(reflect.ValueOf(c))
+			r := newRContext(t, c)
 
 			f := pkg.NewHTTPGet(
 				&client,
@@ -120,7 +108,7 @@ func TestHTTPGet(t *testing.T) {
 			}
 
 			var got []byte
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)
@@ -136,7 +124,7 @@ func TestHTTPGet(t *testing.T) {
 				"file:///testdata",
 				testdataChecksum.Value(),
 			)
-			if rc, err := f.Cure(&r); err != nil {
+			if rc, err := f.Cure(r); err != nil {
 				t.Fatalf("Cure: error = %v", err)
 			} else if got, err = io.ReadAll(rc); err != nil {
 				t.Fatalf("ReadAll: error = %v", err)

View File

@@ -72,6 +72,10 @@ func MustDecode(s string) (checksum Checksum) {
 // common holds elements and receives methods shared between different contexts.
 type common struct {
+	// Context specific to this [Artifact]. The toplevel context in [Cache] must
+	// not be exposed directly.
+	ctx context.Context
+
 	// Address of underlying [Cache], should be zeroed or made unusable after
 	// Cure returns and must not be exposed directly.
 	cache *Cache
@@ -183,11 +187,15 @@ func (t *TContext) destroy(errP *error) {
 }
 
 // Unwrap returns the underlying [context.Context].
-func (c *common) Unwrap() context.Context { return c.cache.ctx }
+func (c *common) Unwrap() context.Context { return c.ctx }
 
 // GetMessage returns [message.Msg] held by the underlying [Cache].
 func (c *common) GetMessage() message.Msg { return c.cache.msg }
 
+// GetJobs returns the preferred number of jobs to run, when applicable. Its
+// value must not affect cure outcome.
+func (c *common) GetJobs() int { return c.cache.jobs }
+
 // GetWorkDir returns a pathname to a directory which [Artifact] is expected to
 // write its output to. This is not the final resting place of the [Artifact]
 // and this pathname should not be directly referred to in the final contents.
@@ -207,11 +215,11 @@ func (t *TContext) GetTempDir() *check.Absolute { return t.temp }
 // [ChecksumMismatchError], or the underlying implementation may block on Close.
 func (c *common) Open(a Artifact) (r io.ReadCloser, err error) {
 	if f, ok := a.(FileArtifact); ok {
-		return c.cache.openFile(f)
+		return c.cache.openFile(c.ctx, f)
 	}
 
 	var pathname *check.Absolute
-	if pathname, _, err = c.cache.Cure(a); err != nil {
+	if pathname, _, err = c.cache.cure(a, true); err != nil {
 		return
 	}
@@ -372,6 +380,9 @@ type KnownChecksum interface {
 }
 
 // FileArtifact refers to an [Artifact] backed by a single file.
+//
+// FileArtifact does not support fine-grained cancellation. Its context is
+// inherited from the first [TrivialArtifact] or [FloodArtifact] that opens it.
 type FileArtifact interface {
 	// Cure returns [io.ReadCloser] of the full contents of [FileArtifact]. If
 	// [FileArtifact] implements [KnownChecksum], Cure is responsible for
@@ -531,6 +542,30 @@ const (
 	CHostAbstract
 )
 
+// toplevel holds [context.WithCancel] over caller-supplied context, where all
+// [Artifact] context are derived from.
+type toplevel struct {
+	ctx    context.Context
+	cancel context.CancelFunc
+}
+
+// newToplevel returns the address of a new toplevel via ctx.
+func newToplevel(ctx context.Context) *toplevel {
+	var t toplevel
+	t.ctx, t.cancel = context.WithCancel(ctx)
+	return &t
+}
+
+// pendingCure provides synchronisation and cancellation for pending cures.
+type pendingCure struct {
+	// Closed on cure completion.
+	done <-chan struct{}
+	// Error outcome, safe to access after done is closed.
+	err error
+	// Cancels the corresponding cure.
+	cancel context.CancelFunc
+}
+
 // Cache is a support layer that implementations of [Artifact] can use to store
 // cured [Artifact] data in a content addressed fashion.
 type Cache struct {
@@ -538,11 +573,10 @@ type Cache struct {
 	// implementation and receives an equal amount of elements after.
 	cures chan struct{}
-	// [context.WithCancel] over caller-supplied context, used by [Artifact] and
-	// all dependency curing goroutines.
-	ctx context.Context
-	// Cancels ctx.
-	cancel context.CancelFunc
+	// Parent context which toplevel was derived from.
+	parent context.Context
+	// For deriving curing context, must not be accessed directly.
+	toplevel atomic.Pointer[toplevel]
 	// For waiting on dependency curing goroutines.
 	wg sync.WaitGroup
 	// Reports new cures and passed to [Artifact].
@@ -552,11 +586,11 @@ type Cache struct {
 	base *check.Absolute
 	// Immutable cure options set by [Open].
 	flags int
+	// Immutable job count, when applicable.
+	jobs int
 
-	// Artifact to [unique.Handle] of identifier cache.
-	artifact sync.Map
-	// Identifier free list, must not be accessed directly.
-	identPool sync.Pool
+	// Must not be exposed directly.
+	irCache
 
 	// Synchronises access to dirChecksum.
 	checksumMu sync.RWMutex
@@ -566,9 +600,11 @@ type Cache struct {
 	// Identifier to error pair for unrecoverably faulted [Artifact].
 	identErr map[unique.Handle[ID]]error
 	// Pending identifiers, accessed through Cure for entries not in ident.
-	identPending map[unique.Handle[ID]]<-chan struct{}
+	identPending map[unique.Handle[ID]]*pendingCure
 	// Synchronises access to ident and corresponding filesystem entries.
 	identMu sync.RWMutex
+	// Synchronises entry into Abort and Cure.
+	abortMu sync.RWMutex
 
 	// Synchronises entry into exclusive artifacts for the cure method.
 	exclMu sync.Mutex
@@ -577,8 +613,10 @@ type Cache struct {
 	// Unlocks the on-filesystem cache. Must only be called from Close.
 	unlock func()
-	// Synchronises calls to Close.
-	closeOnce sync.Once
+	// Whether [Cache] is considered closed.
+	closed bool
+	// Synchronises calls to Abort and Close.
+	closeMu sync.Mutex
 
 	// Whether EnterExec has not yet returned.
 	inExec atomic.Bool
@@ -588,24 +626,24 @@ type Cache struct {
 type extIdent [wordSize + len(ID{})]byte
 
 // getIdentBuf returns the address of an extIdent for Ident.
-func (c *Cache) getIdentBuf() *extIdent { return c.identPool.Get().(*extIdent) }
+func (ic *irCache) getIdentBuf() *extIdent { return ic.identPool.Get().(*extIdent) }
 
 // putIdentBuf adds buf to identPool.
-func (c *Cache) putIdentBuf(buf *extIdent) { c.identPool.Put(buf) }
+func (ic *irCache) putIdentBuf(buf *extIdent) { ic.identPool.Put(buf) }
 
 // storeIdent adds an [Artifact] to the artifact cache.
-func (c *Cache) storeIdent(a Artifact, buf *extIdent) unique.Handle[ID] {
+func (ic *irCache) storeIdent(a Artifact, buf *extIdent) unique.Handle[ID] {
 	idu := unique.Make(ID(buf[wordSize:]))
-	c.artifact.Store(a, idu)
+	ic.artifact.Store(a, idu)
 	return idu
 }
 
 // Ident returns the identifier of an [Artifact].
-func (c *Cache) Ident(a Artifact) unique.Handle[ID] {
-	buf, idu := c.unsafeIdent(a, false)
+func (ic *irCache) Ident(a Artifact) unique.Handle[ID] {
+	buf, idu := ic.unsafeIdent(a, false)
 	if buf != nil {
-		idu = c.storeIdent(a, buf)
-		c.putIdentBuf(buf)
+		idu = ic.storeIdent(a, buf)
+		ic.putIdentBuf(buf)
 	}
 	return idu
 }
@@ -613,17 +651,17 @@ func (c *Cache) Ident(a Artifact) unique.Handle[ID] {
 // unsafeIdent implements Ident but returns the underlying buffer for a newly
 // computed identifier. Callers must return this buffer to identPool. encodeKind
 // is only a hint, kind may still be encoded in the buffer.
-func (c *Cache) unsafeIdent(a Artifact, encodeKind bool) (
+func (ic *irCache) unsafeIdent(a Artifact, encodeKind bool) (
 	buf *extIdent,
 	idu unique.Handle[ID],
 ) {
-	if id, ok := c.artifact.Load(a); ok {
+	if id, ok := ic.artifact.Load(a); ok {
 		idu = id.(unique.Handle[ID])
 		return
 	}

 	if ki, ok := a.(KnownIdent); ok {
-		buf = c.getIdentBuf()
+		buf = ic.getIdentBuf()
 		if encodeKind {
 			binary.LittleEndian.PutUint64(buf[:], uint64(a.Kind()))
 		}
@@ -631,9 +669,9 @@ func (c *Cache) unsafeIdent(a Artifact, encodeKind bool) (
 		return
 	}

-	buf = c.getIdentBuf()
+	buf = ic.getIdentBuf()
 	h := sha512.New384()
-	if err := c.Encode(h, a); err != nil {
+	if err := ic.Encode(h, a); err != nil {
 		// unreachable
 		panic(err)
 	}
@@ -1002,7 +1040,11 @@ func (c *Cache) Scrub(checks int) error {
 // loadOrStoreIdent attempts to load a cached [Artifact] by its identifier or
 // wait for a pending [Artifact] to cure. If neither is possible, the current
 // identifier is stored in identPending and a non-nil channel is returned.
+//
+// Since identErr is treated as grow-only, loadOrStoreIdent must not be entered
+// without holding a read lock on abortMu.
 func (c *Cache) loadOrStoreIdent(id unique.Handle[ID]) (
+	ctx context.Context,
 	done chan<- struct{},
 	checksum unique.Handle[Checksum],
 	err error,
@@ -1019,20 +1061,23 @@ func (c *Cache) loadOrStoreIdent(id unique.Handle[ID]) (
 		return
 	}

-	var notify <-chan struct{}
-	if notify, ok = c.identPending[id]; ok {
+	var pending *pendingCure
+	if pending, ok = c.identPending[id]; ok {
 		c.identMu.Unlock()
-		<-notify
+		<-pending.done
 		c.identMu.RLock()
 		if checksum, ok = c.ident[id]; !ok {
-			err = c.identErr[id]
+			err = pending.err
 		}
 		c.identMu.RUnlock()
 		return
 	}

 	d := make(chan struct{})
-	c.identPending[id] = d
+	pending = &pendingCure{done: d}
+	ctx, pending.cancel = context.WithCancel(c.toplevel.Load().ctx)
+	c.wg.Add(1)
+	c.identPending[id] = pending
 	c.identMu.Unlock()
 	done = d
 	return
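The loadOrStoreIdent hunk above follows a single-flight shape: the first caller for an identifier stores a pending entry and receives a non-nil done channel, making it responsible for curing; later callers block on that channel and then read the stored result. A minimal standalone sketch of that shape, with illustrative names (`group`, `get`, `put` are not part of this package):

```go
package main

import (
	"fmt"
	"sync"
)

// group coalesces concurrent requests for the same key: the first caller
// becomes the owner and receives a non-nil done channel; later callers
// block on that channel until the owner finishes.
type group struct {
	mu      sync.Mutex
	pending map[string]chan struct{}
	results map[string]int
}

func newGroup() *group {
	return &group{
		pending: make(map[string]chan struct{}),
		results: make(map[string]int),
	}
}

// get returns the cached value, or a non-nil done channel that makes the
// caller responsible for computing the value and calling put.
func (g *group) get(key string) (v int, done chan struct{}) {
	g.mu.Lock()
	if v, ok := g.results[key]; ok {
		g.mu.Unlock()
		return v, nil
	}
	if ch, ok := g.pending[key]; ok {
		g.mu.Unlock()
		<-ch // wait for the owner to finish
		g.mu.Lock()
		v = g.results[key]
		g.mu.Unlock()
		return v, nil
	}
	done = make(chan struct{})
	g.pending[key] = done
	g.mu.Unlock()
	return 0, done
}

// put stores the computed value and releases all waiters.
func (g *group) put(key string, v int, done chan struct{}) {
	g.mu.Lock()
	g.results[key] = v
	delete(g.pending, key)
	g.mu.Unlock()
	close(done)
}

func main() {
	g := newGroup()
	_, done := g.get("id") // first caller owns the computation
	g.put("id", 42, done)  // store and release waiters
	v, d := g.get("id")    // later callers see the cached value
	fmt.Println(v, d == nil)
}
```

The real code additionally threads a per-entry context and a WaitGroup through this path, which is what makes Cancel and Abort possible.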
@@ -1048,21 +1093,62 @@ func (c *Cache) finaliseIdent(
 ) {
 	c.identMu.Lock()
 	if err != nil {
+		c.identPending[id].err = err
 		c.identErr[id] = err
 	} else {
 		c.ident[id] = checksum
 	}
 	delete(c.identPending, id)
 	c.identMu.Unlock()
+	c.wg.Done()
 	close(done)
 }

+// Done returns a channel that is closed when the ongoing cure of an [Artifact]
+// referred to by the specified identifier completes. Done may return nil if
+// no ongoing cure of the specified identifier exists.
+func (c *Cache) Done(id unique.Handle[ID]) <-chan struct{} {
+	c.identMu.RLock()
+	pending, ok := c.identPending[id]
+	c.identMu.RUnlock()
+	if !ok || pending == nil {
+		return nil
+	}
+	return pending.done
+}
+
+// Cancel cancels the ongoing cure of an [Artifact] referred to by the specified
+// identifier. Cancel returns whether the [context.CancelFunc] has been killed.
+// Cancel returns after the cure is complete.
+func (c *Cache) Cancel(id unique.Handle[ID]) bool {
+	c.identMu.RLock()
+	pending, ok := c.identPending[id]
+	c.identMu.RUnlock()
+	if !ok || pending == nil || pending.cancel == nil {
+		return false
+	}
+	pending.cancel()
+	<-pending.done
+
+	c.abortMu.Lock()
+	c.identMu.Lock()
+	delete(c.identErr, id)
+	c.identMu.Unlock()
+	c.abortMu.Unlock()
+	return true
+}
 // openFile tries to load [FileArtifact] from [Cache], and if that fails,
 // obtains it via [FileArtifact.Cure] instead. Notably, it does not cure
 // [FileArtifact] to the filesystem. If err is nil, the caller is responsible
 // for closing the resulting [io.ReadCloser].
-func (c *Cache) openFile(f FileArtifact) (r io.ReadCloser, err error) {
+//
+// The context must originate from loadOrStoreIdent to enable cancellation.
+func (c *Cache) openFile(
+	ctx context.Context,
+	f FileArtifact,
+) (r io.ReadCloser, err error) {
 	if kc, ok := f.(KnownChecksum); c.flags&CAssumeChecksum != 0 && ok {
 		c.checksumMu.RLock()
 		r, err = os.Open(c.base.Append(
@@ -1093,7 +1179,7 @@ func (c *Cache) openFile(f FileArtifact) (r io.ReadCloser, err error) {
 			}
 		}()
 		}
-		return f.Cure(&RContext{common{c}})
+		return f.Cure(&RContext{common{ctx, c}})
 	}
 	return
 }
@@ -1241,12 +1327,11 @@ func (c *Cache) Cure(a Artifact) (
 	checksum unique.Handle[Checksum],
 	err error,
 ) {
-	select {
-	case <-c.ctx.Done():
-		err = c.ctx.Err()
-		return
-	default:
+	c.abortMu.RLock()
+	defer c.abortMu.RUnlock()
+
+	if err = c.toplevel.Load().ctx.Err(); err != nil {
+		return
 	}

 	return c.cure(a, true)
@@ -1332,15 +1417,16 @@ func (c *Cache) enterCure(a Artifact, curesExempt bool) error {
 		return nil
 	}

+	ctx := c.toplevel.Load().ctx
 	select {
 	case c.cures <- struct{}{}:
 		return nil
-	case <-c.ctx.Done():
+	case <-ctx.Done():
 		if a.IsExclusive() {
 			c.exclMu.Unlock()
 		}
-		return c.ctx.Err()
+		return ctx.Err()
 	}
 }
@@ -1438,7 +1524,8 @@ func (r *RContext) NewMeasuredReader(
 	return r.cache.newMeasuredReader(rc, checksum)
 }

-// cure implements Cure without checking the full dependency graph.
+// cure implements Cure without acquiring a read lock on abortMu. cure must not
+// be entered during Abort.
 func (c *Cache) cure(a Artifact, curesExempt bool) (
 	pathname *check.Absolute,
 	checksum unique.Handle[Checksum],
@@ -1457,8 +1544,11 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
 		}
 	}()

-	var done chan<- struct{}
-	done, checksum, err = c.loadOrStoreIdent(id)
+	var (
+		ctx  context.Context
+		done chan<- struct{}
+	)
+	ctx, done, checksum, err = c.loadOrStoreIdent(id)
 	if done == nil {
 		return
 	} else {
@@ -1571,7 +1661,7 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
 		if err = c.enterCure(a, curesExempt); err != nil {
 			return
 		}
-		r, err = f.Cure(&RContext{common{c}})
+		r, err = f.Cure(&RContext{common{ctx, c}})
 		if err == nil {
 			if checksumPathname == nil || c.flags&CValidateKnown != 0 {
 				h := sha512.New384()
@@ -1651,7 +1741,7 @@ func (c *Cache) cure(a Artifact, curesExempt bool) (
 		c.base.Append(dirWork, ids),
 		c.base.Append(dirTemp, ids),
 		ids, nil, nil, nil,
-		common{c},
+		common{ctx, c},
 	}
 	switch ca := a.(type) {
 	case TrivialArtifact:
@@ -1802,14 +1892,42 @@ func (c *Cache) OpenStatus(a Artifact) (r io.ReadSeekCloser, err error) {
 	return
 }

+// Abort cancels all pending cures and waits for them to clean up, but does not
+// close the cache.
+func (c *Cache) Abort() {
+	c.closeMu.Lock()
+	defer c.closeMu.Unlock()
+
+	if c.closed {
+		return
+	}
+
+	c.toplevel.Load().cancel()
+	c.abortMu.Lock()
+	defer c.abortMu.Unlock()
+	// holding abortMu, identPending stays empty
+	c.wg.Wait()
+
+	c.identMu.Lock()
+	c.toplevel.Store(newToplevel(c.parent))
+	clear(c.identErr)
+	c.identMu.Unlock()
+}
+
 // Close cancels all pending cures and waits for them to clean up.
 func (c *Cache) Close() {
-	c.closeOnce.Do(func() {
-		c.cancel()
-		c.wg.Wait()
-		close(c.cures)
-		c.unlock()
-	})
+	c.closeMu.Lock()
+	defer c.closeMu.Unlock()
+
+	if c.closed {
+		return
+	}
+	c.closed = true
+
+	c.toplevel.Load().cancel()
+	c.wg.Wait()
+	close(c.cures)
+	c.unlock()
 }
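The Close rewrite above trades sync.Once for a mutex and a boolean so that a second entry point (Abort) can share the same closed check while cleanup still runs at most once. A minimal sketch of that idempotent-close shape, with illustrative names (`closer` and its counter are not part of the package):

```go
package main

import (
	"fmt"
	"sync"
)

// closer replaces sync.Once with a mutex and a flag, allowing other methods
// to inspect the closed state under the same lock.
type closer struct {
	mu     sync.Mutex
	closed bool
	closes int // counts how many times cleanup actually ran
}

func (c *closer) Close() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.closed { // later calls return without repeating cleanup
		return
	}
	c.closed = true
	c.closes++ // cleanup (cancel, wait, unlock) would run here, exactly once
}

func main() {
	var c closer
	c.Close()
	c.Close() // idempotent: the second call is a no-op
	fmt.Println(c.closes)
}
```

sync.Once would also guarantee once-only cleanup, but it offers no way for Abort to observe whether Close has already run, which is why the diff switches representations.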
 // Open returns the address of a newly opened instance of [Cache].
@@ -1818,7 +1936,7 @@ func (c *Cache) Close() {
 // caller-supplied value, however direct calls to [Cache.Cure] is not subject
 // to this limitation.
 //
-// A cures value of 0 or lower is equivalent to the value returned by
+// A cures or jobs value of 0 or lower is equivalent to the value returned by
 // [runtime.NumCPU].
 //
 // A successful call to Open guarantees exclusive access to the on-filesystem
@@ -1828,10 +1946,10 @@ func (c *Cache) Close() {
 func Open(
 	ctx context.Context,
 	msg message.Msg,
-	flags, cures int,
+	flags, cures, jobs int,
 	base *check.Absolute,
 ) (*Cache, error) {
-	return open(ctx, msg, flags, cures, base, true)
+	return open(ctx, msg, flags, cures, jobs, base, true)
 }
 // open implements Open but allows omitting the [lockedfile] lock when called
@@ -1839,13 +1957,16 @@ func Open(
 func open(
 	ctx context.Context,
 	msg message.Msg,
-	flags, cures int,
+	flags, cures, jobs int,
 	base *check.Absolute,
 	lock bool,
 ) (*Cache, error) {
 	if cures < 1 {
 		cures = runtime.NumCPU()
 	}
+	if jobs < 1 {
+		jobs = runtime.NumCPU()
+	}

 	for _, name := range []string{
 		dirIdentifier,
@@ -1860,22 +1981,25 @@ func open(
 	}

 	c := Cache{
+		parent:       ctx,
 		cures:        make(chan struct{}, cures),
 		flags:        flags,
+		jobs:         jobs,
 		msg:          msg,
 		base:         base,
-		identPool:    sync.Pool{New: func() any { return new(extIdent) }},
+		irCache:      zeroIRCache(),
 		ident:        make(map[unique.Handle[ID]]unique.Handle[Checksum]),
 		identErr:     make(map[unique.Handle[ID]]error),
-		identPending: make(map[unique.Handle[ID]]<-chan struct{}),
+		identPending: make(map[unique.Handle[ID]]*pendingCure),
 		brPool:       sync.Pool{New: func() any { return new(bufio.Reader) }},
 		bwPool:       sync.Pool{New: func() any { return new(bufio.Writer) }},
 	}
-	c.ctx, c.cancel = context.WithCancel(ctx)
+	c.toplevel.Store(newToplevel(ctx))
 	if lock || !testing.Testing() {
 		if unlock, err := lockedfile.MutexAt(

View File

@@ -16,6 +16,7 @@ import (
"path/filepath" "path/filepath"
"reflect" "reflect"
"strconv" "strconv"
"sync"
"syscall" "syscall"
"testing" "testing"
"unique" "unique"
@@ -35,11 +36,28 @@ import (
 func unsafeOpen(
 	ctx context.Context,
 	msg message.Msg,
-	flags, cures int,
+	flags, cures, jobs int,
 	base *check.Absolute,
 	lock bool,
 ) (*pkg.Cache, error)

+// newRContext returns the address of a new [pkg.RContext] unsafely created for
+// the specified [testing.TB].
+func newRContext(tb testing.TB, c *pkg.Cache) *pkg.RContext {
+	var r pkg.RContext
+
+	rContextVal := reflect.ValueOf(&r).Elem().FieldByName("ctx")
+	reflect.NewAt(
+		rContextVal.Type(),
+		unsafe.Pointer(rContextVal.UnsafeAddr()),
+	).Elem().Set(reflect.ValueOf(tb.Context()))
+
+	rCacheVal := reflect.ValueOf(&r).Elem().FieldByName("cache")
+	reflect.NewAt(
+		rCacheVal.Type(),
+		unsafe.Pointer(rCacheVal.UnsafeAddr()),
+	).Elem().Set(reflect.ValueOf(c))
+
+	return &r
+}
+
 func TestMain(m *testing.M) { container.TryArgv0(nil); os.Exit(m.Run()) }

 // overrideIdent overrides the ID method of [Artifact].
@@ -230,7 +248,7 @@ func TestIdent(t *testing.T) {
 	var cache *pkg.Cache
 	if a, err := check.NewAbs(t.TempDir()); err != nil {
 		t.Fatal(err)
-	} else if cache, err = pkg.Open(t.Context(), msg, 0, 0, a); err != nil {
+	} else if cache, err = pkg.Open(t.Context(), msg, 0, 0, 0, a); err != nil {
 		t.Fatal(err)
 	}
 	t.Cleanup(cache.Close)
@@ -304,7 +322,7 @@ func checkWithCache(t *testing.T, testCases []cacheTestCase) {
 	}

 	var scrubFunc func() error // scrub after hashing
-	if c, err := pkg.Open(t.Context(), msg, flags, 1<<4, base); err != nil {
+	if c, err := pkg.Open(t.Context(), msg, flags, 1<<4, 0, base); err != nil {
 		t.Fatalf("Open: error = %v", err)
 	} else {
 		t.Cleanup(c.Close)
@@ -606,7 +624,7 @@ func TestCache(t *testing.T) {
 	if c0, err := unsafeOpen(
 		t.Context(),
 		message.New(nil),
-		0, 0, base, false,
+		0, 0, 0, base, false,
 	); err != nil {
 		t.Fatalf("open: error = %v", err)
 	} else {
@@ -876,17 +894,69 @@ func TestCache(t *testing.T) {
t.Fatalf("Scrub: error = %#v, want %#v", err, wantErrScrub) t.Fatalf("Scrub: error = %#v, want %#v", err, wantErrScrub)
} }
identPendingVal := reflect.ValueOf(c).Elem().FieldByName("identPending") notify := c.Done(unique.Make(pkg.ID{0xff}))
identPending := reflect.NewAt(
identPendingVal.Type(),
unsafe.Pointer(identPendingVal.UnsafeAddr()),
).Elem().Interface().(map[unique.Handle[pkg.ID]]<-chan struct{})
notify := identPending[unique.Make(pkg.ID{0xff})]
go close(n) go close(n)
<-notify if notify != nil {
<-notify
}
for c.Done(unique.Make(pkg.ID{0xff})) != nil {
}
<-wCureDone <-wCureDone
}, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")}, }, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
{"cancel abort block", pkg.CValidateKnown, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
var wg sync.WaitGroup
defer wg.Wait()
var started sync.WaitGroup
defer started.Wait()
blockCures := func(d byte, e stub.UniqueError, n int) {
started.Add(n)
for i := range n {
wg.Go(func() {
if _, _, err := c.Cure(overrideIdent{pkg.ID{d, byte(i)}, &stubArtifact{
kind: pkg.KindTar,
cure: func(t *pkg.TContext) error {
started.Done()
<-t.Unwrap().Done()
return e + stub.UniqueError(i)
},
}}); !reflect.DeepEqual(err, e+stub.UniqueError(i)) {
panic(err)
}
})
}
started.Wait()
}
blockCures(0xfd, 0xbad, 16)
c.Abort()
wg.Wait()
blockCures(0xfd, 0xcafe, 16)
c.Abort()
wg.Wait()
blockCures(0xff, 0xbad, 1)
if !c.Cancel(unique.Make(pkg.ID{0xff})) {
t.Fatal("missed cancellation")
}
wg.Wait()
blockCures(0xff, 0xcafe, 1)
if !c.Cancel(unique.Make(pkg.ID{0xff})) {
t.Fatal("missed cancellation")
}
wg.Wait()
for c.Cancel(unique.Make(pkg.ID{0xff})) {
}
c.Close()
c.Abort()
}, pkg.MustDecode("E4vEZKhCcL2gPZ2Tt59FS3lDng-d_2SKa2i5G_RbDfwGn6EemptFaGLPUDiOa94C")},
{"no assume checksum", 0, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) { {"no assume checksum", 0, nil, func(t *testing.T, base *check.Absolute, c *pkg.Cache) {
makeGarbage := func(work *check.Absolute, wantErr error) error { makeGarbage := func(work *check.Absolute, wantErr error) error {
if err := os.Mkdir(work.String(), 0700); err != nil { if err := os.Mkdir(work.String(), 0700); err != nil {
@@ -1263,7 +1333,7 @@ func TestNew(t *testing.T) {
 	if _, err := pkg.Open(
 		t.Context(),
 		message.New(nil),
-		0, 0, check.MustAbs(container.Nonexistent),
+		0, 0, 0, check.MustAbs(container.Nonexistent),
 	); !reflect.DeepEqual(err, wantErr) {
 		t.Errorf("Open: error = %#v, want %#v", err, wantErr)
 	}
@@ -1291,7 +1361,7 @@ func TestNew(t *testing.T) {
 	if _, err := pkg.Open(
 		t.Context(),
 		message.New(nil),
-		0, 0, tempDir.Append("cache"),
+		0, 0, 0, tempDir.Append("cache"),
 	); !reflect.DeepEqual(err, wantErr) {
 		t.Errorf("Open: error = %#v, want %#v", err, wantErr)
 	}

View File

@@ -43,8 +43,7 @@ var _ fmt.Stringer = new(tarArtifactNamed)
 func (a *tarArtifactNamed) String() string { return a.name + "-unpack" }

 // NewTar returns a new [Artifact] backed by the supplied [Artifact] and
-// compression method. The source [Artifact] must be compatible with
-// [TContext.Open].
+// compression method. The source [Artifact] must be a [FileArtifact].
 func NewTar(a Artifact, compression uint32) Artifact {
 	ta := tarArtifact{a, compression}
 	if s, ok := a.(fmt.Stringer); ok {

View File

@@ -9,7 +9,9 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"reflect" "reflect"
"runtime"
"slices" "slices"
"strconv"
"strings" "strings"
"hakurei.app/check" "hakurei.app/check"
@@ -21,6 +23,10 @@ func main() {
 	log.SetFlags(0)
 	log.SetPrefix("testtool: ")

+	environ := slices.DeleteFunc(slices.Clone(os.Environ()), func(s string) bool {
+		return s == "CURE_JOBS="+strconv.Itoa(runtime.NumCPU())
+	})
+
 	var hostNet, layers, promote bool
 	if len(os.Args) == 2 && os.Args[0] == "testtool" {
 		switch os.Args[1] {
@@ -48,15 +54,15 @@ func main() {
 	var overlayRoot bool
 	wantEnv := []string{"HAKUREI_TEST=1"}
-	if len(os.Environ()) == 2 {
+	if len(environ) == 2 {
 		overlayRoot = true
 		if !layers && !promote {
 			log.SetPrefix("testtool(overlay root): ")
 		}
 		wantEnv = []string{"HAKUREI_TEST=1", "HAKUREI_ROOT=1"}
 	}
-	if !slices.Equal(wantEnv, os.Environ()) {
-		log.Fatalf("Environ: %q, want %q", os.Environ(), wantEnv)
+	if !slices.Equal(wantEnv, environ) {
+		log.Fatalf("Environ: %q, want %q", environ, wantEnv)
 	}

 	var overlayWork bool

View File

@@ -7,10 +7,10 @@ func (t Toolchain) newAttr() (pkg.Artifact, string) {
version = "2.5.2" version = "2.5.2"
checksum = "YWEphrz6vg1sUMmHHVr1CRo53pFXRhq_pjN-AlG8UgwZK1y6m7zuDhxqJhD0SV0l" checksum = "YWEphrz6vg1sUMmHHVr1CRo53pFXRhq_pjN-AlG8UgwZK1y6m7zuDhxqJhD0SV0l"
) )
return t.NewPackage("attr", version, pkg.NewHTTPGetTar( return t.NewPackage("attr", version, newTar(
nil, "https://download.savannah.nongnu.org/releases/attr/"+ "https://download.savannah.nongnu.org/releases/attr/"+
"attr-"+version+".tar.gz", "attr-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
Patches: []KV{ Patches: []KV{
@@ -81,10 +81,10 @@ func (t Toolchain) newACL() (pkg.Artifact, string) {
version = "2.3.2" version = "2.3.2"
checksum = "-fY5nwH4K8ZHBCRXrzLdguPkqjKI6WIiGu4dBtrZ1o0t6AIU73w8wwJz_UyjIS0P" checksum = "-fY5nwH4K8ZHBCRXrzLdguPkqjKI6WIiGu4dBtrZ1o0t6AIU73w8wwJz_UyjIS0P"
) )
return t.NewPackage("acl", version, pkg.NewHTTPGetTar( return t.NewPackage("acl", version, newTar(
nil, "https://download.savannah.nongnu.org/releases/acl/"+ "https://download.savannah.nongnu.org/releases/acl/"+
"acl-"+version+".tar.gz", "acl-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), nil, &MakeHelper{ ), nil, &MakeHelper{
// makes assumptions about uid_map/gid_map // makes assumptions about uid_map/gid_map

View File

@@ -16,9 +16,9 @@ import (
 type PArtifact int

 const (
-	LLVMCompilerRT PArtifact = iota
+	CompilerRT PArtifact = iota
 	LLVMRuntimes
-	LLVMClang
+	Clang

 	// EarlyInit is the Rosa OS init program.
 	EarlyInit
@@ -64,6 +64,7 @@ const (
 	GenInitCPIO
 	Gettext
 	Git
+	Glslang
 	GnuTLS
 	Go
 	Gperf
@@ -76,13 +77,17 @@ const (
 	LibXau
 	Libbsd
 	Libcap
+	Libclc
+	Libdrm
 	Libev
 	Libexpat
 	Libffi
 	Libgd
+	Libglvnd
 	Libiconv
 	Libmd
 	Libmnl
+	Libpciaccess
 	Libnftnl
 	Libpsl
 	Libseccomp
@@ -119,15 +124,18 @@ const (
 	PerlTermReadKey
 	PerlTextCharWidth
 	PerlTextWrapI18N
-	PerlUnicodeGCString
+	PerlUnicodeLineBreak
 	PerlYAMLTiny
 	PkgConfig
 	Procps
 	Python
 	PythonIniConfig
+	PythonMako
+	PythonMarkupSafe
 	PythonPackaging
 	PythonPluggy
 	PythonPyTest
+	PythonPyYAML
 	PythonPygments
 	QEMU
 	Rdfind
@@ -135,6 +143,8 @@ const (
 	Rsync
 	Sed
 	Setuptools
+	SPIRVHeaders
+	SPIRVTools
 	SquashfsTools
 	Strace
 	TamaGo
@@ -148,15 +158,17 @@ const (
 	WaylandProtocols
 	XCB
 	XCBProto
-	Xproto
+	XDGDBusProxy
 	XZ
+	Xproto
 	Zlib
 	Zstd

 	// PresetUnexportedStart is the first unexported preset.
 	PresetUnexportedStart

-	buildcatrust = iota - 1
+	llvmSource = iota - 1
+	buildcatrust
 	utilMacros

 	// Musl is a standalone libc that does not depend on the toolchain.

View File

@@ -7,10 +7,10 @@ func (t Toolchain) newArgpStandalone() (pkg.Artifact, string) {
version = "1.3" version = "1.3"
checksum = "vtW0VyO2pJ-hPyYmDI2zwSLS8QL0sPAUKC1t3zNYbwN2TmsaE-fADhaVtNd3eNFl" checksum = "vtW0VyO2pJ-hPyYmDI2zwSLS8QL0sPAUKC1t3zNYbwN2TmsaE-fADhaVtNd3eNFl"
) )
return t.NewPackage("argp-standalone", version, pkg.NewHTTPGetTar( return t.NewPackage("argp-standalone", version, newTar(
nil, "http://www.lysator.liu.se/~nisse/misc/"+ "http://www.lysator.liu.se/~nisse/misc/"+
"argp-standalone-"+version+".tar.gz", "argp-standalone-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
Env: []string{ Env: []string{

View File

@@ -7,9 +7,9 @@ func (t Toolchain) newBzip2() (pkg.Artifact, string) {
version = "1.0.8" version = "1.0.8"
checksum = "cTLykcco7boom-s05H1JVsQi1AtChYL84nXkg_92Dm1Xt94Ob_qlMg_-NSguIK-c" checksum = "cTLykcco7boom-s05H1JVsQi1AtChYL84nXkg_92Dm1Xt94Ob_qlMg_-NSguIK-c"
) )
return t.NewPackage("bzip2", version, pkg.NewHTTPGetTar( return t.NewPackage("bzip2", version, newTar(
nil, "https://sourceware.org/pub/bzip2/bzip2-"+version+".tar.gz", "https://sourceware.org/pub/bzip2/bzip2-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
Writable: true, Writable: true,

View File

@@ -13,10 +13,11 @@ func (t Toolchain) newCMake() (pkg.Artifact, string) {
version = "4.3.1" version = "4.3.1"
checksum = "RHpzZiM1kJ5bwLjo9CpXSeHJJg3hTtV9QxBYpQoYwKFtRh5YhGWpShrqZCSOzQN6" checksum = "RHpzZiM1kJ5bwLjo9CpXSeHJJg3hTtV9QxBYpQoYwKFtRh5YhGWpShrqZCSOzQN6"
) )
return t.NewPackage("cmake", version, pkg.NewHTTPGetTar( return t.NewPackage("cmake", version, newFromGitHubRelease(
nil, "https://github.com/Kitware/CMake/releases/download/"+ "Kitware/CMake",
"v"+version+"/cmake-"+version+".tar.gz", "v"+version,
mustDecode(checksum), "cmake-"+version+".tar.gz",
checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
// test suite expects writable source tree // test suite expects writable source tree
@@ -90,7 +91,7 @@ index 2ead810437..f85cbb8b1c 100644
 		ConfigureName: "/usr/src/cmake/bootstrap",
 		Configure: []KV{
 			{"prefix", "/system"},
-			{"parallel", `"$(nproc)"`},
+			{"parallel", jobsE},
 			{"--"},
 			{"-DCMAKE_USE_OPENSSL", "OFF"},
 			{"-DCMake_TEST_NO_NETWORK", "ON"},
@@ -118,9 +119,6 @@ func init() {
 // CMakeHelper is the [CMake] build system helper.
 type CMakeHelper struct {
-	// Joined with name with a dash if non-empty.
-	Variant string
-
 	// Path elements joined with source.
 	Append []string
@@ -135,14 +133,6 @@ type CMakeHelper struct {
 var _ Helper = new(CMakeHelper)

-// name returns its arguments and an optional variant string joined with '-'.
-func (attr *CMakeHelper) name(name, version string) string {
-	if attr != nil && attr.Variant != "" {
-		name += "-" + attr.Variant
-	}
-	return name + "-" + version
-}
-
 // extra returns a hardcoded slice of [CMake] and [Ninja].
 func (attr *CMakeHelper) extra(int) P {
 	if attr != nil && attr.Make {
@@ -180,10 +170,8 @@ func (attr *CMakeHelper) script(name string) string {
 	}

 	generate := "Ninja"
-	jobs := ""
 	if attr.Make {
 		generate = "'Unix Makefiles'"
-		jobs += ` "--parallel=$(nproc)"`
 	}
 	return `
@@ -201,7 +189,7 @@ cmake -G ` + generate + ` \
}), " \\\n\t") + ` \ }), " \\\n\t") + ` \
-DCMAKE_INSTALL_PREFIX=/system \ -DCMAKE_INSTALL_PREFIX=/system \
'/usr/src/` + name + `/` + filepath.Join(attr.Append...) + `' '/usr/src/` + name + `/` + filepath.Join(attr.Append...) + `'
cmake --build .` + jobs + ` cmake --build . --parallel=` + jobsE + `
cmake --install . --prefix=/work/system cmake --install . --prefix=/work/system
` + attr.Script ` + attr.Script
} }

View File

@@ -7,10 +7,10 @@ func (t Toolchain) newConnman() (pkg.Artifact, string) {
version = "2.0" version = "2.0"
checksum = "MhVTdJOhndnZn2SWd8URKo_Pj7Zvc14tntEbrVOf9L3yVWJvpb3v3Q6104tWJgtW" checksum = "MhVTdJOhndnZn2SWd8URKo_Pj7Zvc14tntEbrVOf9L3yVWJvpb3v3Q6104tWJgtW"
) )
return t.NewPackage("connman", version, pkg.NewHTTPGetTar( return t.NewPackage("connman", version, newTar(
nil, "https://git.kernel.org/pub/scm/network/connman/connman.git/"+ "https://git.kernel.org/pub/scm/network/connman/connman.git/"+
"snapshot/connman-"+version+".tar.gz", "snapshot/connman-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
Patches: []KV{ Patches: []KV{

View File

@@ -7,15 +7,15 @@ func (t Toolchain) newCurl() (pkg.Artifact, string) {
version = "8.19.0" version = "8.19.0"
checksum = "YHuVLVVp8q_Y7-JWpID5ReNjq2Zk6t7ArHB6ngQXilp_R5l3cubdxu3UKo-xDByv" checksum = "YHuVLVVp8q_Y7-JWpID5ReNjq2Zk6t7ArHB6ngQXilp_R5l3cubdxu3UKo-xDByv"
) )
return t.NewPackage("curl", version, pkg.NewHTTPGetTar( return t.NewPackage("curl", version, newTar(
nil, "https://curl.se/download/curl-"+version+".tar.bz2", "https://curl.se/download/curl-"+version+".tar.bz2",
mustDecode(checksum), checksum,
pkg.TarBzip2, pkg.TarBzip2,
), &PackageAttr{ ), &PackageAttr{
// remove broken test // remove broken test
Writable: true, Writable: true,
ScriptEarly: ` ScriptEarly: `
chmod +w tests/data && rm tests/data/test459 chmod +w tests/data && rm -f tests/data/test459
`, `,
}, &MakeHelper{ }, &MakeHelper{
Configure: []KV{ Configure: []KV{
@@ -25,7 +25,7 @@ chmod +w tests/data && rm tests/data/test459
{"disable-smb"}, {"disable-smb"},
}, },
Check: []string{ Check: []string{
`TFLAGS="-j$(expr "$(nproc)" '*' 2)"`, "TFLAGS=" + jobsLFlagE,
"test-nonflaky", "test-nonflaky",
}, },
}, },

View File

@@ -7,11 +7,11 @@ func (t Toolchain) newDBus() (pkg.Artifact, string) {
version = "1.16.2" version = "1.16.2"
checksum = "INwOuNdrDG7XW5ilW_vn8JSxEa444rRNc5ho97i84I1CNF09OmcFcV-gzbF4uCyg" checksum = "INwOuNdrDG7XW5ilW_vn8JSxEa444rRNc5ho97i84I1CNF09OmcFcV-gzbF4uCyg"
) )
return t.NewPackage("dbus", version, pkg.NewHTTPGetTar( return t.NewPackage("dbus", version, newFromGitLab(
nil, "https://gitlab.freedesktop.org/dbus/dbus/-/archive/"+ "gitlab.freedesktop.org",
"dbus-"+version+"/dbus-dbus-"+version+".tar.bz2", "dbus/dbus",
mustDecode(checksum), "dbus-"+version,
pkg.TarBzip2, checksum,
), &PackageAttr{ ), &PackageAttr{
// OSError: [Errno 30] Read-only file system: '/usr/src/dbus/subprojects/packagecache' // OSError: [Errno 30] Read-only file system: '/usr/src/dbus/subprojects/packagecache'
Writable: true, Writable: true,
@@ -44,3 +44,38 @@ func init() {
 		ID: 5356,
 	}
 }
+
+func (t Toolchain) newXDGDBusProxy() (pkg.Artifact, string) {
+	const (
+		version  = "0.1.7"
+		checksum = "UW5Pe-TP-XAaN-kTbxrkOQ7eYdmlAQlr2pdreLtPT0uwdAz-7rzDP8V_8PWuZBup"
+	)
+	return t.NewPackage("xdg-dbus-proxy", version, newFromGitHub(
+		"flatpak/xdg-dbus-proxy",
+		version,
+		checksum,
+	), nil, &MesonHelper{
+		Setup: []KV{
+			{"Dman", "disabled"},
+		},
+	},
+		DBus,
+		GLib,
+	), version
+}
+
+func init() {
+	artifactsM[XDGDBusProxy] = Metadata{
+		f:           Toolchain.newXDGDBusProxy,
+		Name:        "xdg-dbus-proxy",
+		Description: "a filtering proxy for D-Bus connections",
+		Website:     "https://github.com/flatpak/xdg-dbus-proxy",
+		Dependencies: P{
+			GLib,
+		},
+		ID: 58434,
+	}
+}

View File

@@ -7,10 +7,10 @@ func (t Toolchain) newDTC() (pkg.Artifact, string) {
version = "1.7.2" version = "1.7.2"
checksum = "vUoiRynPyYRexTpS6USweT5p4SVHvvVJs8uqFkkVD-YnFjwf6v3elQ0-Etrh00Dt" checksum = "vUoiRynPyYRexTpS6USweT5p4SVHvvVJs8uqFkkVD-YnFjwf6v3elQ0-Etrh00Dt"
) )
return t.NewPackage("dtc", version, pkg.NewHTTPGetTar( return t.NewPackage("dtc", version, newTar(
nil, "https://git.kernel.org/pub/scm/utils/dtc/dtc.git/snapshot/"+ "https://git.kernel.org/pub/scm/utils/dtc/dtc.git/snapshot/"+
"dtc-v"+version+".tar.gz", "dtc-v"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
// works around buggy test: // works around buggy test:

View File

@@ -4,13 +4,13 @@ import "hakurei.app/internal/pkg"
 func (t Toolchain) newElfutils() (pkg.Artifact, string) {
 	const (
-		version  = "0.194"
-		checksum = "Q3XUygUPv9vR1TkWucwUsQ8Kb1_F6gzk-KMPELr3cC_4AcTrprhVPMvN0CKkiYRa"
+		version  = "0.195"
+		checksum = "JrGnBD38w8Mj0ZxDw3fKlRBFcLvRKu8rcYnX35R9yTlUSYnzTazyLboG-a2CsJlu"
 	)
-	return t.NewPackage("elfutils", version, pkg.NewHTTPGetTar(
-		nil, "https://sourceware.org/elfutils/ftp/"+
+	return t.NewPackage("elfutils", version, newTar(
+		"https://sourceware.org/elfutils/ftp/"+
 			version+"/elfutils-"+version+".tar.bz2",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarBzip2,
 	), &PackageAttr{
 		Env: []string{


@@ -135,10 +135,11 @@ func newIANAEtc() pkg.Artifact {
 		version  = "20251215"
 		checksum = "kvKz0gW_rGG5QaNK9ZWmWu1IEgYAdmhj_wR7DYrh3axDfIql_clGRHmelP7525NJ"
 	)
-	return pkg.NewHTTPGetTar(
-		nil, "https://github.com/Mic92/iana-etc/releases/download/"+
-			version+"/iana-etc-"+version+".tar.gz",
-		mustDecode(checksum),
+	return newFromGitHubRelease(
+		"Mic92/iana-etc",
+		version,
+		"iana-etc-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	)
 }


@@ -7,11 +7,11 @@ func (t Toolchain) newFakeroot() (pkg.Artifact, string) {
 		version  = "1.37.2"
 		checksum = "4ve-eDqVspzQ6VWDhPS0NjW3aSenBJcPAJq_BFT7OOFgUdrQzoTBxZWipDAGWxF8"
 	)
-	return t.NewPackage("fakeroot", version, pkg.NewHTTPGetTar(
-		nil, "https://salsa.debian.org/clint/fakeroot/-/archive/upstream/"+
-			version+"/fakeroot-upstream-"+version+".tar.bz2",
-		mustDecode(checksum),
-		pkg.TarBzip2,
+	return t.NewPackage("fakeroot", version, newFromGitLab(
+		"salsa.debian.org",
+		"clint/fakeroot",
+		"upstream/"+version,
+		checksum,
 	), &PackageAttr{
 		Patches: []KV{
 			{"remove-broken-docs", `diff --git a/doc/Makefile.am b/doc/Makefile.am


@@ -9,10 +9,11 @@ func (t Toolchain) newFlex() (pkg.Artifact, string) {
 		version  = "2.6.4"
 		checksum = "p9POjQU7VhgOf3x5iFro8fjhy0NOanvA7CTeuWS_veSNgCixIJshTrWVkc5XLZkB"
 	)
-	return t.NewPackage("flex", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/westes/flex/releases/download/"+
-			"v"+version+"/flex-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("flex", version, newFromGitHubRelease(
+		"westes/flex",
+		"v"+version,
+		"flex-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 		M4,


@@ -7,10 +7,11 @@ func (t Toolchain) newFuse() (pkg.Artifact, string) {
 		version  = "3.18.2"
 		checksum = "iL-7b7eUtmlVSf5cSq0dzow3UiqSjBmzV3cI_ENPs1tXcHdktkG45j1V12h-4jZe"
 	)
-	return t.NewPackage("fuse", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/libfuse/libfuse/releases/download/"+
-			"fuse-"+version+"/fuse-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("fuse", version, newFromGitHubRelease(
+		"libfuse/libfuse",
+		"fuse-"+version,
+		"fuse-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MesonHelper{
 		Setup: []KV{


@@ -12,10 +12,10 @@ func (t Toolchain) newGit() (pkg.Artifact, string) {
 		version  = "2.53.0"
 		checksum = "rlqSTeNgSeVKJA7nvzGqddFH8q3eFEPB4qRZft-4zth8wTHnbTbm7J90kp_obHGm"
 	)
-	return t.NewPackage("git", version, pkg.NewHTTPGetTar(
-		nil, "https://www.kernel.org/pub/software/scm/git/"+
+	return t.NewPackage("git", version, newTar(
+		"https://www.kernel.org/pub/software/scm/git/"+
 			"git-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		ScriptEarly: `
@@ -58,7 +58,7 @@ disable_test t2200-add-update
 			"prove",
 		},
 		Install: `make \
-"-j$(nproc)" \
+` + jobsFlagE + ` \
 DESTDIR=/work \
 NO_INSTALL_HARDLINKS=1 \
 install`,
@@ -114,3 +114,8 @@ git \
 rm -rf /work/.git
 `, resolvconf())
 }
+
+// newTagRemote is a helper around NewViaGit for a tag on a git remote.
+func (t Toolchain) newTagRemote(url, tag, checksum string) pkg.Artifact {
+	return t.NewViaGit(url, "refs/tags/"+tag, mustDecode(checksum))
+}

internal/rosa/glslang.go (new file, 130 lines)

@@ -0,0 +1,130 @@
package rosa

import (
	"slices"
	"strings"

	"hakurei.app/internal/pkg"
)

func (t Toolchain) newSPIRVHeaders() (pkg.Artifact, string) {
	const (
		version  = "1.4.341.0"
		checksum = "0PL43-19Iaw4k7_D8J8BvoJ-iLgCVSYZ2ThgDPGfAJwIJFtre7l0cnQtLjcY-JvD"
	)
	return t.NewPackage("spirv-headers", version, newFromGitHub(
		"KhronosGroup/SPIRV-Headers",
		"vulkan-sdk-"+version,
		checksum,
	), nil, &CMakeHelper{
		Cache: []KV{
			{"CMAKE_BUILD_TYPE", "Release"},
		},
	}), version
}

func init() {
	artifactsM[SPIRVHeaders] = Metadata{
		f:           Toolchain.newSPIRVHeaders,
		Name:        "spirv-headers",
		Description: "machine-readable files for the SPIR-V Registry",
		Website:     "https://github.com/KhronosGroup/SPIRV-Headers",
		ID:          230542,
		// upstream changed version scheme, anitya incapable of filtering them
		latest: func(v *Versions) string {
			for _, s := range v.Stable {
				fields := strings.SplitN(s, ".", 4)
				if len(fields) != 4 {
					continue
				}
				if slices.ContainsFunc(fields, func(f string) bool {
					return slices.ContainsFunc([]byte(f), func(d byte) bool {
						return d < '0' || d > '9'
					})
				}) {
					continue
				}
				return s
			}
			return v.Latest
		},
	}
}
func (t Toolchain) newSPIRVTools() (pkg.Artifact, string) {
	const (
		version  = "2026.1"
		checksum = "ZSQPQx8NltCDzQLk4qlaVxyWRWeI_JtsjEpeFt3kezTanl9DTHfLixSUCezMFBjv"
	)
	return t.NewPackage("spirv-tools", version, newFromGitHub(
		"KhronosGroup/SPIRV-Tools",
		"v"+version,
		checksum,
	), nil, &CMakeHelper{
		Cache: []KV{
			{"CMAKE_BUILD_TYPE", "Release"},
			{"SPIRV-Headers_SOURCE_DIR", "/system"},
		},
	},
		Python,
		SPIRVHeaders,
	), version
}

func init() {
	artifactsM[SPIRVTools] = Metadata{
		f:           Toolchain.newSPIRVTools,
		Name:        "spirv-tools",
		Description: "an API and commands for processing SPIR-V modules",
		Website:     "https://github.com/KhronosGroup/SPIRV-Tools",
		Dependencies: P{
			SPIRVHeaders,
		},
		ID: 14894,
	}
}

func (t Toolchain) newGlslang() (pkg.Artifact, string) {
	const (
		version  = "16.2.0"
		checksum = "6_UuF9reLRDaVkgO-9IfB3kMwme3lQZM8LL8YsJwPdUFkrjzxJtf2A9X3w9nFxj2"
	)
	return t.NewPackage("glslang", version, newFromGitHub(
		"KhronosGroup/glslang",
		version,
		checksum,
	), &PackageAttr{
		// test suite writes to source
		Writable: true,
		Chmod:    true,
	}, &CMakeHelper{
		Cache: []KV{
			{"CMAKE_BUILD_TYPE", "Release"},
			{"BUILD_SHARED_LIBS", "ON"},
			{"ALLOW_EXTERNAL_SPIRV_TOOLS", "ON"},
		},
		Script: "ctest",
	},
		Python,
		Bash,
		Diffutils,
		SPIRVTools,
	), version
}

func init() {
	artifactsM[Glslang] = Metadata{
		f:           Toolchain.newGlslang,
		Name:        "glslang",
		Description: "reference front end for GLSL/ESSL",
		Website:     "https://github.com/KhronosGroup/glslang",
		ID:          205796,
	}
}


@@ -11,9 +11,9 @@ func (t Toolchain) newM4() (pkg.Artifact, string) {
 		version  = "1.4.21"
 		checksum = "pPa6YOo722Jw80l1OsH1tnUaklnPFjFT-bxGw5iAVrZTm1P8FQaWao_NXop46-pm"
 	)
-	return t.NewPackage("m4", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/m4/m4-"+version+".tar.bz2",
-		mustDecode(checksum),
+	return t.NewPackage("m4", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/m4/m4-"+version+".tar.bz2",
+		checksum,
 		pkg.TarBzip2,
 	), &PackageAttr{
 		Writable: true,
@@ -43,9 +43,9 @@ func (t Toolchain) newBison() (pkg.Artifact, string) {
 		version  = "3.8.2"
 		checksum = "BhRM6K7URj1LNOkIDCFDctSErLS-Xo5d9ba9seg10o6ACrgC1uNhED7CQPgIY29Y"
 	)
-	return t.NewPackage("bison", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/bison/bison-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("bison", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/bison/bison-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 		M4,
@@ -70,9 +70,9 @@ func (t Toolchain) newSed() (pkg.Artifact, string) {
 		version  = "4.9"
 		checksum = "pe7HWH4PHNYrazOTlUoE1fXmhn2GOPFN_xE62i0llOr3kYGrH1g2_orDz0UtZ9Nt"
 	)
-	return t.NewPackage("sed", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/sed/sed-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("sed", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/sed/sed-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 		Diffutils,
@@ -95,15 +95,15 @@ func (t Toolchain) newAutoconf() (pkg.Artifact, string) {
 		version  = "2.73"
 		checksum = "yGabDTeOfaCUB0JX-h3REYLYzMzvpDwFmFFzHNR7QilChCUNE4hR6q7nma4viDYg"
 	)
-	return t.NewPackage("autoconf", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/autoconf/autoconf-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("autoconf", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/autoconf/autoconf-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Flag: TExclusive,
 	}, &MakeHelper{
 		Check: []string{
-			`TESTSUITEFLAGS="-j$(nproc)"`,
+			"TESTSUITEFLAGS=" + jobsFlagE,
 			"check",
 		},
 	},
@@ -135,9 +135,9 @@ func (t Toolchain) newAutomake() (pkg.Artifact, string) {
 		version  = "1.18.1"
 		checksum = "FjvLG_GdQP7cThTZJLDMxYpRcKdpAVG-YDs1Fj1yaHlSdh_Kx6nRGN14E0r_BjcG"
 	)
-	return t.NewPackage("automake", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/automake/automake-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("automake", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/automake/automake-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -179,13 +179,13 @@ func (t Toolchain) newLibtool() (pkg.Artifact, string) {
 		version  = "2.5.4"
 		checksum = "pa6LSrQggh8mSJHQfwGjysAApmZlGJt8wif2cCLzqAAa2jpsTY0jZ-6stS3BWZ2Q"
 	)
-	return t.NewPackage("libtool", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/libtool/libtool-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libtool", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/libtool/libtool-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		Check: []string{
-			`TESTSUITEFLAGS="-j$(nproc)"`,
+			"TESTSUITEFLAGS=" + jobsFlagE,
 			"check",
 		},
 	},
@@ -210,9 +210,9 @@ func (t Toolchain) newGzip() (pkg.Artifact, string) {
 		version  = "1.14"
 		checksum = "NWhjUavnNfTDFkZJyAUonL9aCOak8GVajWX2OMlzpFnuI0ErpBFyj88mz2xSjz0q"
 	)
-	return t.NewPackage("gzip", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/gzip/gzip-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("gzip", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/gzip/gzip-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		// dependency loop
@@ -236,9 +236,9 @@ func (t Toolchain) newGettext() (pkg.Artifact, string) {
 		version  = "1.0"
 		checksum = "3MasKeEdPeFEgWgzsBKk7JqWqql1wEMbgPmzAfs-mluyokoW0N8oQVxPQoOnSdgC"
 	)
-	return t.NewPackage("gettext", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/gettext/gettext-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("gettext", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/gettext/gettext-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -282,9 +282,9 @@ func (t Toolchain) newDiffutils() (pkg.Artifact, string) {
 		version  = "3.12"
 		checksum = "9J5VAq5oA7eqwzS1Yvw-l3G5o-TccUrNQR3PvyB_lgdryOFAfxtvQfKfhdpquE44"
 	)
-	return t.NewPackage("diffutils", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/diffutils/diffutils-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("diffutils", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/diffutils/diffutils-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -315,9 +315,9 @@ func (t Toolchain) newPatch() (pkg.Artifact, string) {
 		version  = "2.8"
 		checksum = "MA0BQc662i8QYBD-DdGgyyfTwaeALZ1K0yusV9rAmNiIsQdX-69YC4t9JEGXZkeR"
 	)
-	return t.NewPackage("patch", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/patch/patch-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("patch", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/patch/patch-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -347,9 +347,9 @@ func (t Toolchain) newBash() (pkg.Artifact, string) {
 		version  = "5.3"
 		checksum = "4LQ_GRoB_ko-Ih8QPf_xRKA02xAm_TOxQgcJLmFDT6udUPxTAWrsj-ZNeuTusyDq"
 	)
-	return t.NewPackage("bash", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/bash/bash-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("bash", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/bash/bash-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Flag: TEarly,
@@ -377,9 +377,9 @@ func (t Toolchain) newCoreutils() (pkg.Artifact, string) {
 		version  = "9.10"
 		checksum = "o-B9wssRnZySzJUI1ZJAgw-bZtj1RC67R9po2AcM2OjjS8FQIl16IRHpC6IwO30i"
 	)
-	return t.NewPackage("coreutils", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/coreutils/coreutils-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("coreutils", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/coreutils/coreutils-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -516,9 +516,9 @@ func (t Toolchain) newTexinfo() (pkg.Artifact, string) {
 		version  = "7.3"
 		checksum = "RRmC8Xwdof7JuZJeWGAQ_GeASIHAuJFQMbNONXBz5InooKIQGmqmWRjGNGEr5n4-"
 	)
-	return t.NewPackage("texinfo", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/texinfo/texinfo-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("texinfo", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/texinfo/texinfo-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		// nonstandard glibc extension
@@ -549,9 +549,9 @@ func (t Toolchain) newGperf() (pkg.Artifact, string) {
 		version  = "3.3"
 		checksum = "RtIy9pPb_Bb8-31J2Nw-rRGso2JlS-lDlVhuNYhqR7Nt4xM_nObznxAlBMnarJv7"
 	)
-	return t.NewPackage("gperf", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gperf/gperf-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("gperf", version, newTar(
+		"https://ftpmirror.gnu.org/gperf/gperf-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 		Diffutils,
@@ -574,9 +574,9 @@ func (t Toolchain) newGawk() (pkg.Artifact, string) {
 		version  = "5.4.0"
 		checksum = "m0RkIolC-PI7EY5q8pcx5Y-0twlIW0Yp3wXXmV-QaHorSdf8BhZ7kW9F8iWomz0C"
 	)
-	return t.NewPackage("gawk", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/gawk/gawk-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("gawk", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/gawk/gawk-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Flag: TEarly,
@@ -602,9 +602,9 @@ func (t Toolchain) newGrep() (pkg.Artifact, string) {
 		version  = "3.12"
 		checksum = "qMB4RjaPNRRYsxix6YOrjE8gyAT1zVSTy4nW4wKW9fqa0CHYAuWgPwDTirENzm_1"
 	)
-	return t.NewPackage("grep", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/grep/grep-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("grep", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/grep/grep-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -639,7 +639,6 @@ func (t Toolchain) newFindutils() (pkg.Artifact, string) {
 		nil, "https://ftpmirror.gnu.org/gnu/findutils/findutils-"+version+".tar.xz",
 		mustDecode(checksum),
 	), &PackageAttr{
-		SourceKind: SourceKindTarXZ,
 		ScriptEarly: `
 echo '#!/bin/sh' > gnulib-tests/test-c32ispunct.sh
 echo 'int main(){return 0;}' > tests/xargs/test-sigusr.c
@@ -667,9 +666,9 @@ func (t Toolchain) newBC() (pkg.Artifact, string) {
 		version  = "1.08.2"
 		checksum = "8h6f3hjV80XiFs6v9HOPF2KEyg1kuOgn5eeFdVspV05ODBVQss-ey5glc8AmneLy"
 	)
-	return t.NewPackage("bc", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/bc/bc-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("bc", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/bc/bc-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		// source expected to be writable
@@ -696,9 +695,9 @@ func (t Toolchain) newLibiconv() (pkg.Artifact, string) {
 		version  = "1.19"
 		checksum = "UibB6E23y4MksNqYmCCrA3zTFO6vJugD1DEDqqWYFZNuBsUWMVMcncb_5pPAr88x"
 	)
-	return t.NewPackage("libiconv", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/libiconv/libiconv-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libiconv", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/libiconv/libiconv-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil)), version
 }
@@ -719,9 +718,9 @@ func (t Toolchain) newTar() (pkg.Artifact, string) {
 		version  = "1.35"
 		checksum = "zSaoSlVUDW0dSfm4sbL4FrXLFR8U40Fh3zY5DWhR5NCIJ6GjU6Kc4VZo2-ZqpBRA"
 	)
-	return t.NewPackage("tar", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/tar/tar-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("tar", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/tar/tar-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		Configure: []KV{
@@ -733,7 +732,7 @@ func (t Toolchain) newTar() (pkg.Artifact, string) {
 			// very expensive
 			"TARTEST_SKIP_LARGE_FILES=1",
-			`TESTSUITEFLAGS="-j$(nproc)"`,
+			"TESTSUITEFLAGS=" + jobsFlagE,
 			"check",
 		},
 	},
@@ -761,9 +760,9 @@ func (t Toolchain) newParallel() (pkg.Artifact, string) {
 		version  = "20260322"
 		checksum = "gHoPmFkOO62ev4xW59HqyMlodhjp8LvTsBOwsVKHUUdfrt7KwB8koXmSVqQ4VOrB"
 	)
-	return t.NewPackage("parallel", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/parallel/parallel-"+version+".tar.bz2",
-		mustDecode(checksum),
+	return t.NewPackage("parallel", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/parallel/parallel-"+version+".tar.bz2",
+		checksum,
 		pkg.TarBzip2,
 	), nil, (*MakeHelper)(nil),
 		Perl,
@@ -790,9 +789,9 @@ func (t Toolchain) newLibunistring() (pkg.Artifact, string) {
 		version  = "1.4.2"
 		checksum = "iW9BbfLoVlXjWoLTZ4AekQSu4cFBnLcZ4W8OHWbv0AhJNgD3j65_zqaLMzFKylg2"
 	)
-	return t.NewPackage("libunistring", version, pkg.NewHTTPGetTar(
-		nil, "https://ftp.gnu.org/gnu/libunistring/libunistring-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libunistring", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/libunistring/libunistring-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,
@@ -823,9 +822,9 @@ func (t Toolchain) newLibtasn1() (pkg.Artifact, string) {
 		version  = "4.21.0"
 		checksum = "9DYI3UYbfYLy8JsKUcY6f0irskbfL0fHZA91Q-JEOA3kiUwpodyjemRsYRjUpjuq"
 	)
-	return t.NewPackage("libtasn1", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/libtasn1/libtasn1-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libtasn1", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/libtasn1/libtasn1-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil)), version
 }
@@ -846,9 +845,9 @@ func (t Toolchain) newReadline() (pkg.Artifact, string) {
 		version  = "8.3"
 		checksum = "r-lcGRJq_MvvBpOq47Z2Y1OI2iqrmtcqhTLVXR0xWo37ZpC2uT_md7gKq5o_qTMV"
 	)
-	return t.NewPackage("readline", version, pkg.NewHTTPGetTar(
-		nil, "https://ftp.gnu.org/gnu/readline/readline-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("readline", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/readline/readline-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		Configure: []KV{
@@ -889,10 +888,9 @@ func (t Toolchain) newGnuTLS() (pkg.Artifact, string) {
 		}
 	}
-	return t.NewPackage("gnutls", version, t.NewViaGit(
+	return t.NewPackage("gnutls", version, t.newTagRemote(
 		"https://gitlab.com/gnutls/gnutls.git",
-		"refs/tags/"+version,
-		mustDecode(checksum),
+		version, checksum,
 	), &PackageAttr{
 		Patches: []KV{
 			{"bootstrap-remove-gtk-doc", `diff --git a/bootstrap.conf b/bootstrap.conf
@@ -1062,9 +1060,9 @@ func (t Toolchain) newBinutils() (pkg.Artifact, string) {
 		version  = "2.46.0"
 		checksum = "4kK1_EXQipxSqqyvwD4LbiMLFKCUApjq6PeG4XJP4dzxYGqDeqXfh8zLuTyOuOVR"
 	)
-	return t.NewPackage("binutils", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/binutils/binutils-"+version+".tar.bz2",
-		mustDecode(checksum),
+	return t.NewPackage("binutils", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/binutils/binutils-"+version+".tar.bz2",
+		checksum,
 		pkg.TarBzip2,
 	), nil, (*MakeHelper)(nil),
 		Bash,
@@ -1087,12 +1085,16 @@ func (t Toolchain) newGMP() (pkg.Artifact, string) {
 		version  = "6.3.0"
 		checksum = "yrgbgEDWKDdMWVHh7gPbVl56-sRtVVhfvv0M_LX7xMUUk_mvZ1QOJEAnt7g4i3k5"
 	)
-	return t.NewPackage("gmp", version, pkg.NewHTTPGetTar(
-		nil, "https://gcc.gnu.org/pub/gcc/infrastructure/"+
+	return t.NewPackage("gmp", version, newTar(
+		"https://gcc.gnu.org/pub/gcc/infrastructure/"+
 			"gmp-"+version+".tar.bz2",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarBzip2,
-	), nil, (*MakeHelper)(nil),
+	), &PackageAttr{
+		Env: []string{
+			"CC=cc",
+		},
+	}, (*MakeHelper)(nil),
 		M4,
 	), version
 }
@@ -1113,10 +1115,10 @@ func (t Toolchain) newMPFR() (pkg.Artifact, string) {
 		version  = "4.2.2"
 		checksum = "wN3gx0zfIuCn9r3VAn_9bmfvAYILwrRfgBjYSD1IjLqyLrLojNN5vKyQuTE9kA-B"
 	)
-	return t.NewPackage("mpfr", version, pkg.NewHTTPGetTar(
-		nil, "https://gcc.gnu.org/pub/gcc/infrastructure/"+
+	return t.NewPackage("mpfr", version, newTar(
+		"https://gcc.gnu.org/pub/gcc/infrastructure/"+
 			"mpfr-"+version+".tar.bz2",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarBzip2,
 	), nil, (*MakeHelper)(nil),
 		GMP,
@@ -1140,13 +1142,12 @@ func init() {
 func (t Toolchain) newMPC() (pkg.Artifact, string) {
 	const (
-		version  = "1.4.0"
-		checksum = "TbrxLiE3ipQrHz_F3Xzz4zqBAnkMWyjhNwIK6wh9360RZ39xMt8rxfW3LxA9SnvU"
+		version  = "1.4.1"
+		checksum = "wdXAhplnS89FjVp20m2nC2CmLFQeyQqLpQAfViTy4vPxFdv2WYOTtfBKeIk5_Rec"
 	)
-	return t.NewPackage("mpc", version, t.NewViaGit(
+	return t.NewPackage("mpc", version, t.newTagRemote(
 		"https://gitlab.inria.fr/mpc/mpc.git",
-		"refs/tags/"+version,
-		mustDecode(checksum),
+		version, checksum,
 	), &PackageAttr{
 		// does not find mpc-impl.h otherwise
 		EnterSource: true,
@@ -1182,10 +1183,17 @@ func (t Toolchain) newGCC() (pkg.Artifact, string) {
 		version  = "15.2.0"
 		checksum = "TXJ5WrbXlGLzy1swghQTr4qxgDCyIZFgJry51XEPTBZ8QYbVmFeB4lZbSMtPJ-a1"
 	)
-	return t.NewPackage("gcc", version, pkg.NewHTTPGetTar(
-		nil, "https://ftp.tsukuba.wide.ad.jp/software/gcc/releases/"+
+
+	var configureExtra []KV
+	switch runtime.GOARCH {
+	case "amd64", "arm64":
+		configureExtra = append(configureExtra, KV{"with-multilib-list", "''"})
+	}
+
+	return t.NewPackage("gcc", version, newTar(
+		"https://ftp.tsukuba.wide.ad.jp/software/gcc/releases/"+
 			"gcc-"+version+"/gcc-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Patches: []KV{
@@ -1347,9 +1355,8 @@ ln -s system/lib /work/
 		// it also saturates the CPU for a consequential amount of time.
 		Flag: TExclusive,
 	}, &MakeHelper{
-		Configure: []KV{
+		Configure: append([]KV{
 			{"disable-multilib"},
-			{"with-multilib-list", `""`},
 			{"enable-default-pie"},
 			{"disable-nls"},
 			{"with-gnu-as"},
@@ -1357,7 +1364,7 @@ ln -s system/lib /work/
 			{"with-system-zlib"},
 			{"enable-languages", "c,c++,go"},
 			{"with-native-system-header-dir", "/system/include"},
-		},
+		}, configureExtra...),
 		Make: []string{
 			"BOOT_CFLAGS='-O2 -g'",
 			"bootstrap",


@@ -21,9 +21,9 @@ cd /work/system/go/src
 chmod -R +w ..
 ./make.bash
-`, pkg.Path(AbsUsrSrc.Append("go"), false, pkg.NewHTTPGetTar(
-		nil, "https://dl.google.com/go/go1.4-bootstrap-20171003.tar.gz",
-		mustDecode(checksum),
+`, pkg.Path(AbsUsrSrc.Append("go"), false, newTar(
+		"https://dl.google.com/go/go1.4-bootstrap-20171003.tar.gz",
+		checksum,
 		pkg.TarGzip,
 	)))
 }
@@ -55,9 +55,9 @@ ln -s \
 ../go/bin/go \
 ../go/bin/gofmt \
 /work/system/bin
-`, pkg.Path(AbsUsrSrc.Append("go"), false, pkg.NewHTTPGetTar(
-		nil, "https://go.dev/dl/go"+version+".src.tar.gz",
-		mustDecode(checksum),
+`, pkg.Path(AbsUsrSrc.Append("go"), false, newTar(
+		"https://go.dev/dl/go"+version+".src.tar.gz",
+		checksum,
 		pkg.TarGzip,
 	)))
 }


@@ -10,10 +10,9 @@ func (t Toolchain) newGLib() (pkg.Artifact, string) {
 		version  = "2.88.0"
 		checksum = "T79Cg4z6j-sDZ2yIwvbY4ccRv2-fbwbqgcw59F5NQ6qJT6z4v261vbYp3dHO6Ma3"
 	)
-	return t.NewPackage("glib", version, t.NewViaGit(
+	return t.NewPackage("glib", version, t.newTagRemote(
 		"https://gitlab.gnome.org/GNOME/glib.git",
-		"refs/tags/"+version,
-		mustDecode(checksum),
+		version, checksum,
 	), &PackageAttr{
 		Paths: []pkg.ExecPath{
 			pkg.Path(fhs.AbsEtc.Append(


@@ -99,7 +99,7 @@ mkdir -p /work/system/bin/
 		f: func(t Toolchain) (pkg.Artifact, string) {
 			return t.newHakurei("-dist", `
 export HAKUREI_VERSION
-DESTDIR=/work /usr/src/hakurei/dist/release.sh
+DESTDIR=/work /usr/src/hakurei/all.sh
 `, true), hakureiVersion
 		},


@@ -4,13 +4,13 @@ package rosa
 import "hakurei.app/internal/pkg"

-const hakureiVersion = "0.3.7"
+const hakureiVersion = "0.4.0"

 // hakureiSource is the source code of a hakurei release.
-var hakureiSource = pkg.NewHTTPGetTar(
-	nil, "https://git.gensokyo.uk/rosa/hakurei/archive/"+
+var hakureiSource = newTar(
+	"https://git.gensokyo.uk/rosa/hakurei/archive/"+
 		"v"+hakureiVersion+".tar.gz",
-	mustDecode("Xh_sdITOATEAQN5_UuaOyrWsgboxorqRO9bml3dGm8GAxF8NFpB7MqhSZgjJxAl2"),
+	"wfQ9DqCW0Fw9o91wj-I55waoqzB-UqzzuC0_2h-P-1M78SgZ1WHSPCDJMth6EyC2",
 	pkg.TarGzip,
 )


@@ -2,12 +2,12 @@ package rosa
 import "hakurei.app/internal/pkg"

-const kernelVersion = "6.12.80"
+const kernelVersion = "6.12.81"

-var kernelSource = pkg.NewHTTPGetTar(
-	nil, "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/"+
+var kernelSource = newTar(
+	"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/"+
 		"snapshot/linux-"+kernelVersion+".tar.gz",
-	mustDecode("_iJEAYoQISJxefuWZYfv0RPWUmHHIjHQw33Fapix-irXrEIREP5ruK37UJW4uMZO"),
+	"fBkNwf82DQXh74in6gaF2Jot7Vg-Vlcp9BUtCEipL9mvcM1EXLVFdV7FcrO20Eve",
 	pkg.TarGzip,
 )
@@ -1221,7 +1221,7 @@ install -Dm0500 \
 /sbin/depmod
 make \
-"-j$(nproc)" \
+` + jobsFlagE + ` \
 -f /usr/src/kernel/Makefile \
 O=/tmp/kbuild \
 LLVM=1 \
@@ -1282,14 +1282,14 @@ func init() {
 func (t Toolchain) newFirmware() (pkg.Artifact, string) {
 	const (
-		version  = "20260309"
-		checksum = "M1az8BxSiOEH3LA11Trc5VAlakwAHhP7-_LKWg6k-SVIzU3xclMDO4Tiujw1gQrC"
+		version  = "20260410"
+		checksum = "J8PdQlGqwrivpskPzbL6xacqR6mlKtXpe5RpzFfVzKPAgG81ZRXsc3qrxwdGJbil"
 	)
-	return t.NewPackage("firmware", version, pkg.NewHTTPGetTar(
-		nil, "https://gitlab.com/kernel-firmware/linux-firmware/-/"+
-			"archive/"+version+"/linux-firmware-"+version+".tar.bz2",
-		mustDecode(checksum),
-		pkg.TarBzip2,
+	return t.NewPackage("firmware", version, newFromGitLab(
+		"gitlab.com",
+		"kernel-firmware/linux-firmware",
+		version,
+		checksum,
 	), &PackageAttr{
 		// dedup creates temporary file
 		Writable: true,
@@ -1309,7 +1309,7 @@ func (t Toolchain) newFirmware() (pkg.Artifact, string) {
 			"install-zst",
 		},
 		SkipCheck: true, // requires pre-commit
-		Install:   `make "-j$(nproc)" DESTDIR=/work/system dedup`,
+		Install:   "make " + jobsFlagE + " DESTDIR=/work/system dedup",
 	},
 		Parallel,
 		Rdfind,


@@ -7,10 +7,10 @@ func (t Toolchain) newKmod() (pkg.Artifact, string) {
 		version  = "34.2"
 		checksum = "0K7POeTKxMhExsaTsnKAC6LUNsRSfe6sSZxWONPbOu-GI_pXOw3toU_BIoqfBhJV"
 	)
-	return t.NewPackage("kmod", version, pkg.NewHTTPGetTar(
-		nil, "https://www.kernel.org/pub/linux/utils/kernel/"+
+	return t.NewPackage("kmod", version, newTar(
+		"https://www.kernel.org/pub/linux/utils/kernel/"+
 			"kmod/kmod-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), nil, &MesonHelper{
 		Setup: []KV{


@@ -7,10 +7,9 @@ func (t Toolchain) newLibmd() (pkg.Artifact, string) {
 		version  = "1.1.0"
 		checksum = "9apYqPPZm0j5HQT8sCsVIhnVIqRD7XgN7kPIaTwTqnTuUq5waUAMq4M7ev8CODJ1"
 	)
-	return t.NewPackage("libmd", version, t.NewViaGit(
+	return t.NewPackage("libmd", version, t.newTagRemote(
 		"https://git.hadrons.org/git/libmd.git",
-		"refs/tags/"+version,
-		mustDecode(checksum),
+		version, checksum,
 	), nil, &MakeHelper{
 		Generate: "echo '" + version + "' > .dist-version && ./autogen",
 		ScriptMakeEarly: `
@@ -38,10 +37,9 @@ func (t Toolchain) newLibbsd() (pkg.Artifact, string) {
 		version  = "0.12.2"
 		checksum = "NVS0xFLTwSP8JiElEftsZ-e1_C-IgJhHrHE77RwKt5178M7r087waO-zYx2_dfGX"
 	)
-	return t.NewPackage("libbsd", version, t.NewViaGit(
+	return t.NewPackage("libbsd", version, t.newTagRemote(
 		"https://gitlab.freedesktop.org/libbsd/libbsd.git",
-		"refs/tags/"+version,
-		mustDecode(checksum),
+		version, checksum,
 	), nil, &MakeHelper{
 		Generate: "echo '" + version + "' > .dist-version && ./autogen",
 	},


@@ -7,10 +7,10 @@ func (t Toolchain) newLibcap() (pkg.Artifact, string) {
 		version  = "2.78"
 		checksum = "wFdUkBhFMD9InPnrBZyegWrlPSAg_9JiTBC-eSFyWWlmbzL2qjh2mKxr9Kx2a8ut"
 	)
-	return t.NewPackage("libcap", version, pkg.NewHTTPGetTar(
-		nil, "https://git.kernel.org/pub/scm/libs/libcap/libcap.git/"+
+	return t.NewPackage("libcap", version, newTar(
+		"https://git.kernel.org/pub/scm/libs/libcap/libcap.git/"+
 			"snapshot/libcap-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		// uses source tree as scratch space


@@ -7,9 +7,9 @@ func (t Toolchain) newLibev() (pkg.Artifact, string) {
 		version  = "4.33"
 		checksum = "774eSXV_4k8PySRprUDChbEwsw-kzjIFnJ3MpNOl5zDpamBRvC3BqPyRxvkwcL6_"
 	)
-	return t.NewPackage("libev", version, pkg.NewHTTPGetTar(
-		nil, "https://dist.schmorp.de/libev/Attic/libev-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libev", version, newTar(
+		"https://dist.schmorp.de/libev/Attic/libev-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil)), version
 }


@@ -11,11 +11,11 @@ func (t Toolchain) newLibexpat() (pkg.Artifact, string) {
 		version  = "2.7.5"
 		checksum = "vTRUjjg-qbHSXUBYKXgzVHkUO7UNyuhrkSYrE7ikApQm0g-OvQ8tspw4w55M-1Tp"
 	)
-	return t.NewPackage("libexpat", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/libexpat/libexpat/releases/download/"+
-			"R_"+strings.ReplaceAll(version, ".", "_")+"/"+
-			"expat-"+version+".tar.bz2",
-		mustDecode(checksum),
+	return t.NewPackage("libexpat", version, newFromGitHubRelease(
+		"libexpat/libexpat",
+		"R_"+strings.ReplaceAll(version, ".", "_"),
+		"expat-"+version+".tar.bz2",
+		checksum,
 		pkg.TarBzip2,
 	), nil, (*MakeHelper)(nil),
 	Bash,


@@ -7,10 +7,11 @@ func (t Toolchain) newLibffi() (pkg.Artifact, string) {
 		version  = "3.5.2"
 		checksum = "2_Q-ZNBBbVhltfL5zEr0wljxPegUimTK4VeMSiwJEGksls3n4gj3lV0Ly3vviSFH"
 	)
-	return t.NewPackage("libffi", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/libffi/libffi/releases/download/"+
-			"v"+version+"/libffi-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libffi", version, newFromGitHubRelease(
+		"libffi/libffi",
+		"v"+version,
+		"libffi-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 	KernelHeaders,


@@ -7,10 +7,10 @@ func (t Toolchain) newLibgd() (pkg.Artifact, string) {
 		version  = "2.3.3"
 		checksum = "8T-sh1_FJT9K9aajgxzh8ot6vWIF-xxjcKAHvTak9MgGUcsFfzP8cAvvv44u2r36"
 	)
-	return t.NewPackage("libgd", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/libgd/libgd/releases/download/"+
-			"gd-"+version+"/libgd-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libgd", version, newFromGitHubRelease(
+		"libgd/libgd",
+		"gd-"+version,
+		"libgd-"+version+".tar.gz", checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Env: []string{


@@ -7,10 +7,11 @@ func (t Toolchain) newLibpsl() (pkg.Artifact, string) {
 		version  = "0.21.5"
 		checksum = "XjfxSzh7peG2Vg4vJlL8z4JZJLcXqbuP6pLWkrGCmRxlnYUFTKNBqWGHCxEOlCad"
 	)
-	return t.NewPackage("libpsl", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/rockdaboot/libpsl/releases/download/"+
-			version+"/libpsl-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libpsl", version, newFromGitHubRelease(
+		"rockdaboot/libpsl",
+		version,
+		"libpsl-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Writable: true,


@@ -7,10 +7,11 @@ func (t Toolchain) newLibseccomp() (pkg.Artifact, string) {
 		version  = "2.6.0"
 		checksum = "mMu-iR71guPjFbb31u-YexBaanKE_nYPjPux-vuBiPfS_0kbwJdfCGlkofaUm-EY"
 	)
-	return t.NewPackage("libseccomp", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/seccomp/libseccomp/releases/download/"+
-			"v"+version+"/libseccomp-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("libseccomp", version, newFromGitHubRelease(
+		"seccomp/libseccomp",
+		"v"+version,
+		"libseccomp-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		ScriptEarly: `


@@ -7,11 +7,10 @@ func (t Toolchain) newLibucontext() (pkg.Artifact, string) {
 		version  = "1.5"
 		checksum = "Ggk7FMmDNBdCx1Z9PcNWWW6LSpjGYssn2vU0GK5BLXJYw7ZxZbA2m_eSgT9TFnIG"
 	)
-	return t.NewPackage("libucontext", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/kaniini/libucontext/archive/refs/tags/"+
-			"libucontext-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.NewPackage("libucontext", version, newFromGitHub(
+		"kaniini/libucontext",
+		"libucontext-"+version,
+		checksum,
 	), &PackageAttr{
 		// uses source tree as scratch space
 		Writable: true,


@@ -4,13 +4,12 @@ import "hakurei.app/internal/pkg"
 func (t Toolchain) newLibxml2() (pkg.Artifact, string) {
 	const (
-		version  = "2.15.2"
-		checksum = "zwQvCIBnjzUFY-inX5ckfNT3mIezsCRV55C_Iztde5OnRTB3u33lfO5h03g7DK_8"
+		version  = "2.15.3"
+		checksum = "oWkNe53c3d4Lt4OzrXPHBcOLHJ3TWqpa0x7B7bh_DyZ-uIMiplpdZjQRgRWVal2h"
 	)
-	return t.NewPackage("libxml2", version, t.NewViaGit(
+	return t.NewPackage("libxml2", version, t.newTagRemote(
 		"https://gitlab.gnome.org/GNOME/libxml2.git",
-		"refs/tags/v"+version,
-		mustDecode(checksum),
+		"v"+version, checksum,
 	), &PackageAttr{
 		// can't create shell.out: Read-only file system
 		Writable: true,


@@ -7,10 +7,9 @@ func (t Toolchain) newLibxslt() (pkg.Artifact, string) {
 		version  = "1.1.45"
 		checksum = "MZc_dyUWpHChkWDKa5iycrECxBsRd4ZMbYfL4VojTbung593mlH2tHGmxYB6NFYT"
 	)
-	return t.NewPackage("libxslt", version, t.NewViaGit(
+	return t.NewPackage("libxslt", version, t.newTagRemote(
 		"https://gitlab.gnome.org/GNOME/libxslt.git",
-		"refs/tags/v"+version,
-		mustDecode(checksum),
+		"v"+version, checksum,
 	), nil, &MakeHelper{
 		Generate: "NOCONFIGURE=1 ./autogen.sh",


@@ -1,247 +1,43 @@
 package rosa

-import (
-	"runtime"
-	"slices"
-	"strconv"
-	"strings"
-	"sync"
-
-	"hakurei.app/internal/pkg"
-)
-
-// llvmAttr holds the attributes that will be applied to a new [pkg.Artifact]
-// containing a LLVM variant.
-type llvmAttr struct {
-	// Enabled projects and runtimes.
-	pr int
-	// Concatenated with default environment for PackageAttr.Env.
-	env []string
-	// Concatenated with generated entries for CMakeHelper.Cache.
-	cmake []KV
-	// Override CMakeHelper.Append.
-	append []string
-	// Passed through to PackageAttr.NonStage0.
-	nonStage0 []pkg.Artifact
-	// Concatenated with default fixup for CMakeHelper.Script.
-	script string
-	// Patch name and body pairs.
-	patches []KV
-}
-
-const (
-	llvmProjectClang = 1 << iota
-	llvmProjectLld
-	llvmProjectAll = 1<<iota - 1
-
-	llvmRuntimeCompilerRT = 1 << iota
-	llvmRuntimeLibunwind
-	llvmRuntimeLibc
-	llvmRuntimeLibcxx
-	llvmRuntimeLibcxxABI
-	llvmAll        = 1<<iota - 1
-	llvmRuntimeAll = llvmAll - (2 * llvmProjectAll) - 1
-)
-
-// llvmFlagName resolves a llvmAttr.flags project or runtime flag to its name.
-func llvmFlagName(flag int) string {
-	switch flag {
-	case llvmProjectClang:
-		return "clang"
-	case llvmProjectLld:
-		return "lld"
-	case llvmRuntimeCompilerRT:
-		return "compiler-rt"
-	case llvmRuntimeLibunwind:
-		return "libunwind"
-	case llvmRuntimeLibc:
-		return "libc"
-	case llvmRuntimeLibcxx:
-		return "libcxx"
-	case llvmRuntimeLibcxxABI:
-		return "libcxxabi"
-	default:
-		panic("invalid flag " + strconv.Itoa(flag))
-	}
-}
+import "hakurei.app/internal/pkg"
+
+func init() {
+	artifactsM[llvmSource] = Metadata{
+		f: func(t Toolchain) (pkg.Artifact, string) {
+			return t.NewPatchedSource("llvm", llvmVersion, newFromGitHub(
+				"llvm/llvm-project",
+				"llvmorg-"+llvmVersion,
+				llvmChecksum,
+			), true, llvmPatches...), llvmVersion
+		},
+		Name:        "llvm-project",
+		Description: "LLVM monorepo with Rosa OS patches",
+		ID:          1830,
+	}
+}
-// newLLVMVariant returns a [pkg.Artifact] containing a LLVM variant.
-func (t Toolchain) newLLVMVariant(variant string, attr *llvmAttr) pkg.Artifact {
-	if attr == nil {
-		panic("LLVM attr must be non-nil")
-	}
-
-	var projects, runtimes []string
-	for i := 1; i < llvmProjectAll; i <<= 1 {
-		if attr.pr&i != 0 {
-			projects = append(projects, llvmFlagName(i))
-		}
-	}
-	for i := (llvmProjectAll + 1) << 1; i < llvmRuntimeAll; i <<= 1 {
-		if attr.pr&i != 0 {
-			runtimes = append(runtimes, llvmFlagName(i))
-		}
-	}
-
-	var script string
-	cache := []KV{
-		{"CMAKE_BUILD_TYPE", "Release"},
-		{"LLVM_HOST_TRIPLE", `"${ROSA_TRIPLE}"`},
-		{"LLVM_DEFAULT_TARGET_TRIPLE", `"${ROSA_TRIPLE}"`},
-	}
-	if len(projects) > 0 {
-		cache = append(cache, []KV{
-			{"LLVM_ENABLE_PROJECTS", `"${ROSA_LLVM_PROJECTS}"`},
-		}...)
-	}
-	if len(runtimes) > 0 {
-		cache = append(cache, []KV{
-			{"LLVM_ENABLE_RUNTIMES", `"${ROSA_LLVM_RUNTIMES}"`},
-		}...)
-	}
-	cmakeAppend := []string{"llvm"}
-	if attr.append != nil {
-		cmakeAppend = attr.append
-	} else {
-		cache = append(cache, []KV{
-			{"LLVM_ENABLE_LIBCXX", "ON"},
-			{"LLVM_USE_LINKER", "lld"},
-			{"LLVM_INSTALL_BINUTILS_SYMLINKS", "ON"},
-			{"LLVM_INSTALL_CCTOOLS_SYMLINKS", "ON"},
-			{"LLVM_LIT_ARGS", "'--verbose'"},
-		}...)
-	}
-	if attr.pr&llvmProjectClang != 0 {
-		cache = append(cache, []KV{
-			{"CLANG_DEFAULT_LINKER", "lld"},
-			{"CLANG_DEFAULT_CXX_STDLIB", "libc++"},
-			{"CLANG_DEFAULT_RTLIB", "compiler-rt"},
-			{"CLANG_DEFAULT_UNWINDLIB", "libunwind"},
-		}...)
-	}
-	if attr.pr&llvmProjectLld != 0 {
-		script += `
-ln -s ld.lld /work/system/bin/ld
-`
-	}
-	if attr.pr&llvmRuntimeCompilerRT != 0 {
-		if attr.append == nil {
-			cache = append(cache, []KV{
-				{"COMPILER_RT_USE_LLVM_UNWINDER", "ON"},
-			}...)
-		}
-	}
-	if attr.pr&llvmRuntimeLibunwind != 0 {
-		cache = append(cache, []KV{
-			{"LIBUNWIND_USE_COMPILER_RT", "ON"},
-		}...)
-	}
-	if attr.pr&llvmRuntimeLibcxx != 0 {
-		cache = append(cache, []KV{
-			{"LIBCXX_HAS_MUSL_LIBC", "ON"},
-			{"LIBCXX_USE_COMPILER_RT", "ON"},
-		}...)
-	}
-	if attr.pr&llvmRuntimeLibcxxABI != 0 {
-		cache = append(cache, []KV{
-			{"LIBCXXABI_USE_COMPILER_RT", "ON"},
-			{"LIBCXXABI_USE_LLVM_UNWINDER", "ON"},
-		}...)
-	}
-
-	return t.NewPackage("llvm", llvmVersion, pkg.NewHTTPGetTar(
-		nil, "https://github.com/llvm/llvm-project/archive/refs/tags/"+
-			"llvmorg-"+llvmVersion+".tar.gz",
-		mustDecode(llvmChecksum),
-		pkg.TarGzip,
-	), &PackageAttr{
-		Patches: slices.Concat(attr.patches, []KV{
-			{"increase-stack-size-unconditional", `diff --git a/llvm/lib/Support/Threading.cpp b/llvm/lib/Support/Threading.cpp
-index 9da357a7ebb9..b2931510c1ae 100644
---- a/llvm/lib/Support/Threading.cpp
-+++ b/llvm/lib/Support/Threading.cpp
-@@ -80,7 +80,7 @@ unsigned llvm::ThreadPoolStrategy::compute_thread_count() const {
- // keyword.
- #include "llvm/Support/thread.h"
-
--#if defined(__APPLE__)
-+#if defined(__APPLE__) || 1
- // Darwin's default stack size for threads except the main one is only 512KB,
- // which is not enough for some/many normal LLVM compilations. This implements
- // the same interface as std::thread but requests the same stack size as the
-`},
-		}),
-		NonStage0: attr.nonStage0,
-		Env: slices.Concat([]string{
-			"ROSA_LLVM_PROJECTS=" + strings.Join(projects, ";"),
-			"ROSA_LLVM_RUNTIMES=" + strings.Join(runtimes, ";"),
-		}, attr.env),
-		Flag: TExclusive,
-	}, &CMakeHelper{
-		Variant: variant,
-		Cache:   slices.Concat(cache, attr.cmake),
-		Append:  cmakeAppend,
-		Script:  script + attr.script,
-	},
-		Python,
-		Perl,
-		Diffutils,
-		Bash,
-		Gawk,
-		Coreutils,
-		Findutils,
-		KernelHeaders,
-	)
-}
-
-// newLLVM returns LLVM toolchain across multiple [pkg.Artifact].
-func (t Toolchain) newLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
-	target := "'AArch64;RISCV;X86'"
-	if t.isStage0() {
-		switch runtime.GOARCH {
-		case "386", "amd64":
-			target = "X86"
-		case "arm64":
-			target = "AArch64"
-		case "riscv64":
-			target = "RISCV"
-		default:
-			panic("unsupported target " + runtime.GOARCH)
-		}
-	}
-
-	minimalDeps := []KV{
-		{"LLVM_ENABLE_ZLIB", "OFF"},
-		{"LLVM_ENABLE_ZSTD", "OFF"},
-		{"LLVM_ENABLE_LIBXML2", "OFF"},
-	}
-
-	muslHeaders, _ := t.newMusl(true, []string{
-		"CC=clang",
-	})
-	compilerRT = t.newLLVMVariant("compiler-rt", &llvmAttr{
-		env: stage0ExclConcat(t, []string{},
+func (t Toolchain) newCompilerRT() (pkg.Artifact, string) {
+	muslHeaders, _ := t.newMusl(true)
+	return t.NewPackage("compiler-rt", llvmVersion, t.Load(llvmSource), &PackageAttr{
+		NonStage0: []pkg.Artifact{
+			muslHeaders,
+		},
+		Env: stage0ExclConcat(t, []string{},
 			"LDFLAGS="+earlyLDFLAGS(false),
 		),
-		cmake: []KV{
+		Flag: TExclusive,
+	}, &CMakeHelper{
+		Append: []string{"compiler-rt"},
+		Cache: []KV{
+			{"CMAKE_BUILD_TYPE", "Release"},
+			{"LLVM_HOST_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LLVM_DEFAULT_TARGET_TRIPLE", `"${ROSA_TRIPLE}"`},
 			// libc++ not yet available
 			{"CMAKE_CXX_COMPILER_TARGET", ""},
@@ -257,11 +53,7 @@ func (t Toolchain) newLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
 			{"COMPILER_RT_BUILD_PROFILE", "OFF"},
 			{"COMPILER_RT_BUILD_XRAY", "OFF"},
 		},
-		append: []string{"compiler-rt"},
-		nonStage0: []pkg.Artifact{
-			muslHeaders,
-		},
-		script: `
+		Script: `
 mkdir -p "/work/system/lib/clang/` + llvmVersionMajor + `/lib/"
 ln -s \
 	"../../../${ROSA_TRIPLE}" \
@@ -274,286 +66,179 @@ ln -s \
 	"clang_rt.crtend-` + linuxArch() + `.o" \
 	"/work/system/lib/${ROSA_TRIPLE}/crtendS.o"
 `,
+	},
+	Python,
+	KernelHeaders,
+	), llvmVersion
+}
+
+func init() {
+	artifactsM[CompilerRT] = Metadata{
+		f:           Toolchain.newCompilerRT,
+		Name:        "compiler-rt",
+		Description: "LLVM runtime: compiler-rt",
+		Website:     "https://llvm.org/",
+		Dependencies: P{
+			Musl,
+		},
+	}
+}
+
+func (t Toolchain) newLLVMRuntimes() (pkg.Artifact, string) {
+	return t.NewPackage("llvm-runtimes", llvmVersion, t.Load(llvmSource), &PackageAttr{
+		NonStage0: t.AppendPresets(nil, CompilerRT),
+		Env: stage0ExclConcat(t, []string{},
+			"LDFLAGS="+earlyLDFLAGS(false),
+		),
+		Flag: TExclusive,
+	}, &CMakeHelper{
+		Append: []string{"runtimes"},
+		Cache: []KV{
+			{"CMAKE_BUILD_TYPE", "Release"},
+			{"LLVM_HOST_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LLVM_DEFAULT_TARGET_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LLVM_ENABLE_RUNTIMES", "'libunwind;libcxx;libcxxabi'"},
+			{"LIBUNWIND_USE_COMPILER_RT", "ON"},
+			{"LIBCXX_HAS_MUSL_LIBC", "ON"},
+			{"LIBCXX_USE_COMPILER_RT", "ON"},
+			{"LIBCXXABI_USE_COMPILER_RT", "ON"},
+			{"LIBCXXABI_USE_LLVM_UNWINDER", "ON"},
+			// libc++ not yet available
+			{"CMAKE_CXX_COMPILER_WORKS", "ON"},
+			{"LIBCXX_HAS_ATOMIC_LIB", "OFF"},
+			{"LIBCXXABI_HAS_CXA_THREAD_ATEXIT_IMPL", "OFF"},
+			{"LLVM_ENABLE_ZLIB", "OFF"},
+			{"LLVM_ENABLE_ZSTD", "OFF"},
+			{"LLVM_ENABLE_LIBXML2", "OFF"},
+		},
+	},
+	Python,
+	KernelHeaders,
+	), llvmVersion
-	})
-
-	musl, _ = t.newMusl(false, stage0ExclConcat(t, []string{
-		"CC=clang",
-		"LIBCC=/system/lib/clang/" + llvmVersionMajor + "/lib/" +
-			triplet() + "/libclang_rt.builtins.a",
-		"AR=ar",
-		"RANLIB=ranlib",
-	},
-		"LDFLAGS="+earlyLDFLAGS(false),
-	), compilerRT)
-
-	runtimes = t.newLLVMVariant("runtimes", &llvmAttr{
-		env: stage0ExclConcat(t, []string{},
-			"LDFLAGS="+earlyLDFLAGS(false),
-		),
-		pr: llvmRuntimeLibunwind | llvmRuntimeLibcxx | llvmRuntimeLibcxxABI,
-		cmake: slices.Concat([]KV{
-			// libc++ not yet available
-			{"CMAKE_CXX_COMPILER_WORKS", "ON"},
-			{"LIBCXX_HAS_ATOMIC_LIB", "OFF"},
-			{"LIBCXXABI_HAS_CXA_THREAD_ATEXIT_IMPL", "OFF"},
-		}, minimalDeps),
-		append: []string{"runtimes"},
-		nonStage0: []pkg.Artifact{
-			compilerRT,
-			musl,
-		},
-	})
-
-	clang = t.newLLVMVariant("clang", &llvmAttr{
-		pr: llvmProjectClang | llvmProjectLld,
env: stage0ExclConcat(t, []string{},
"CFLAGS="+earlyCFLAGS,
"CXXFLAGS="+earlyCXXFLAGS(),
"LDFLAGS="+earlyLDFLAGS(false),
),
cmake: slices.Concat([]KV{
{"LLVM_TARGETS_TO_BUILD", target},
{"CMAKE_CROSSCOMPILING", "OFF"},
{"CXX_SUPPORTS_CUSTOM_LINKER", "ON"},
}, minimalDeps),
nonStage0: []pkg.Artifact{
musl,
compilerRT,
runtimes,
},
script: `
ln -s clang /work/system/bin/cc
ln -s clang++ /work/system/bin/c++
ninja check-all
`,
patches: []KV{
{"add-rosa-vendor", `diff --git a/llvm/include/llvm/TargetParser/Triple.h b/llvm/include/llvm/TargetParser/Triple.h
index 9c83abeeb3b1..5acfe5836a23 100644
--- a/llvm/include/llvm/TargetParser/Triple.h
+++ b/llvm/include/llvm/TargetParser/Triple.h
@@ -190,6 +190,7 @@ public:
Apple,
PC,
+ Rosa,
SCEI,
Freescale,
IBM,
diff --git a/llvm/lib/TargetParser/Triple.cpp b/llvm/lib/TargetParser/Triple.cpp
index a4f9dd42c0fe..cb5a12387034 100644
--- a/llvm/lib/TargetParser/Triple.cpp
+++ b/llvm/lib/TargetParser/Triple.cpp
@@ -279,6 +279,7 @@ StringRef Triple::getVendorTypeName(VendorType Kind) {
case NVIDIA: return "nvidia";
case OpenEmbedded: return "oe";
case PC: return "pc";
+ case Rosa: return "rosa";
case SCEI: return "scei";
case SUSE: return "suse";
case Meta:
@@ -689,6 +690,7 @@ static Triple::VendorType parseVendor(StringRef VendorName) {
return StringSwitch<Triple::VendorType>(VendorName)
.Case("apple", Triple::Apple)
.Case("pc", Triple::PC)
+ .Case("rosa", Triple::Rosa)
.Case("scei", Triple::SCEI)
.Case("sie", Triple::SCEI)
.Case("fsl", Triple::Freescale)
`},
{"xfail-broken-tests", `diff --git a/clang/test/Modules/timestamps.c b/clang/test/Modules/timestamps.c
index 50fdce630255..4b4465a75617 100644
--- a/clang/test/Modules/timestamps.c
+++ b/clang/test/Modules/timestamps.c
@@ -1,3 +1,5 @@
+// XFAIL: target={{.*-rosa-linux-musl}}
+
/// Verify timestamps that gets embedded in the module
#include <c-header.h>
`},
{"path-system-include", `diff --git a/clang/lib/Driver/ToolChains/Linux.cpp b/clang/lib/Driver/ToolChains/Linux.cpp
index 8ac8d4eb9181..e46b04a898ca 100644
--- a/clang/lib/Driver/ToolChains/Linux.cpp
+++ b/clang/lib/Driver/ToolChains/Linux.cpp
@@ -671,6 +671,12 @@ void Linux::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(
DriverArgs, CC1Args,
concat(SysRoot, "/usr/include", MultiarchIncludeDir));
+ if (!MultiarchIncludeDir.empty() &&
+ D.getVFS().exists(concat(SysRoot, "/system/include", MultiarchIncludeDir)))
+ addExternCSystemInclude(
+ DriverArgs, CC1Args,
+ concat(SysRoot, "/system/include", MultiarchIncludeDir));
+
if (getTriple().getOS() == llvm::Triple::RTEMS)
return;
@@ -681,6 +687,7 @@ void Linux::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/include"));
addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/usr/include"));
+ addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/system/include"));
if (!DriverArgs.hasArg(options::OPT_nobuiltininc) && getTriple().isMusl())
addSystemInclude(DriverArgs, CC1Args, ResourceDirInclude);
`},
{"path-system-libraries", `diff --git a/clang/lib/Driver/ToolChains/Linux.cpp b/clang/lib/Driver/ToolChains/Linux.cpp
index 8ac8d4eb9181..f4d1347ab64d 100644
--- a/clang/lib/Driver/ToolChains/Linux.cpp
+++ b/clang/lib/Driver/ToolChains/Linux.cpp
@@ -282,6 +282,7 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
const bool IsHexagon = Arch == llvm::Triple::hexagon;
const bool IsRISCV = Triple.isRISCV();
const bool IsCSKY = Triple.isCSKY();
+ const bool IsRosa = Triple.getVendor() == llvm::Triple::Rosa;
if (IsCSKY && !SelectedMultilibs.empty())
SysRoot = SysRoot + SelectedMultilibs.back().osSuffix();
@@ -318,12 +319,23 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
const std::string OSLibDir = std::string(getOSLibDir(Triple, Args));
const std::string MultiarchTriple = getMultiarchTriple(D, Triple, SysRoot);
+ if (IsRosa) {
+ ExtraOpts.push_back("-rpath");
+ ExtraOpts.push_back("/system/lib");
+ ExtraOpts.push_back("-rpath");
+ ExtraOpts.push_back(concat("/system/lib", MultiarchTriple));
+ }
+
// mips32: Debian multilib, we use /libo32, while in other case, /lib is
// used. We need add both libo32 and /lib.
if (Arch == llvm::Triple::mips || Arch == llvm::Triple::mipsel) {
Generic_GCC::AddMultilibPaths(D, SysRoot, "libo32", MultiarchTriple, Paths);
- addPathIfExists(D, concat(SysRoot, "/libo32"), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr/libo32"), Paths);
+ if (!IsRosa) {
+ addPathIfExists(D, concat(SysRoot, "/libo32"), Paths);
+ addPathIfExists(D, concat(SysRoot, "/usr/libo32"), Paths);
+ } else {
+ addPathIfExists(D, concat(SysRoot, "/system/libo32"), Paths);
+ }
}
Generic_GCC::AddMultilibPaths(D, SysRoot, OSLibDir, MultiarchTriple, Paths);
@@ -341,18 +353,30 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
Paths);
}
- addPathIfExists(D, concat(SysRoot, "/usr/lib", MultiarchTriple), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir), Paths);
+ if (!IsRosa) {
+ addPathIfExists(D, concat(SysRoot, "/usr/lib", MultiarchTriple), Paths);
+ addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir), Paths);
+ } else {
+ addPathIfExists(D, concat(SysRoot, "/system/lib", MultiarchTriple), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system", OSLibDir), Paths);
+ }
if (IsRISCV) {
StringRef ABIName = tools::riscv::getRISCVABI(Args, Triple);
addPathIfExists(D, concat(SysRoot, "/", OSLibDir, ABIName), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir, ABIName), Paths);
+ if (!IsRosa)
+ addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir, ABIName), Paths);
+ else
+ addPathIfExists(D, concat(SysRoot, "/system", OSLibDir, ABIName), Paths);
}
Generic_GCC::AddMultiarchPaths(D, SysRoot, OSLibDir, Paths);
- addPathIfExists(D, concat(SysRoot, "/lib"), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr/lib"), Paths);
+ if (!IsRosa) {
+ addPathIfExists(D, concat(SysRoot, "/lib"), Paths);
+ addPathIfExists(D, concat(SysRoot, "/usr/lib"), Paths);
+ } else {
+ addPathIfExists(D, concat(SysRoot, "/system/lib"), Paths);
+ }
}
ToolChain::RuntimeLibType Linux::GetDefaultRuntimeLibType() const {
@@ -457,6 +481,9 @@ std::string Linux::getDynamicLinker(const ArgList &Args) const {
return Triple.isArch64Bit() ? "/system/bin/linker64" : "/system/bin/linker";
}
if (Triple.isMusl()) {
+ if (Triple.getVendor() == llvm::Triple::Rosa)
+ return "/system/bin/linker";
+
std::string ArchName;
bool IsArm = false;
diff --git a/clang/tools/clang-installapi/Options.cpp b/clang/tools/clang-installapi/Options.cpp
index 64324a3f8b01..15ce70b68217 100644
--- a/clang/tools/clang-installapi/Options.cpp
+++ b/clang/tools/clang-installapi/Options.cpp
@@ -515,7 +515,7 @@ bool Options::processFrontendOptions(InputArgList &Args) {
FEOpts.FwkPaths = std::move(FrameworkPaths);
// Add default framework/library paths.
- PathSeq DefaultLibraryPaths = {"/usr/lib", "/usr/local/lib"};
+ PathSeq DefaultLibraryPaths = {"/usr/lib", "/system/lib", "/usr/local/lib"};
PathSeq DefaultFrameworkPaths = {"/Library/Frameworks",
"/System/Library/Frameworks"};
`},
},
})
return
-}
-
-func init() {
-	artifactsM[LLVMCompilerRT] = Metadata{
-		f: func(t Toolchain) (pkg.Artifact, string) {
-			_, compilerRT, _, _ := t.newLLVM()
-			return compilerRT, llvmVersion
-		},
-		Name:        "llvm-compiler-rt",
-		Description: "LLVM runtime: compiler-rt",
-		Website:     "https://llvm.org/",
-	}
-	artifactsM[LLVMRuntimes] = Metadata{
-		f: func(t Toolchain) (pkg.Artifact, string) {
-			_, _, runtimes, _ := t.newLLVM()
-			return runtimes, llvmVersion
-		},
-		Name:        "llvm-runtimes",
-		Description: "LLVM runtimes: libunwind, libcxx, libcxxabi",
-		Website:     "https://llvm.org/",
-	}
-	artifactsM[LLVMClang] = Metadata{
-		f: func(t Toolchain) (pkg.Artifact, string) {
-			_, _, _, clang := t.newLLVM()
-			return clang, llvmVersion
-		},
-		Name:        "clang",
-		Description: `an "LLVM native" C/C++/Objective-C compiler`,
-		Website:     "https://llvm.org/",
-		ID:          1830,
-	}
-}
-
-var (
-	// llvm stores the result of Toolchain.newLLVM.
-	llvm [_toolchainEnd][4]pkg.Artifact
-	// llvmOnce is for lazy initialisation of llvm.
-	llvmOnce [_toolchainEnd]sync.Once
-)
-
-// NewLLVM returns LLVM toolchain across multiple [pkg.Artifact].
-func (t Toolchain) NewLLVM() (musl, compilerRT, runtimes, clang pkg.Artifact) {
-	llvmOnce[t].Do(func() {
-		llvm[t][0], llvm[t][1], llvm[t][2], llvm[t][3] = t.newLLVM()
-	})
-	return llvm[t][0], llvm[t][1], llvm[t][2], llvm[t][3]
-}
+}
+
+func init() {
+	artifactsM[LLVMRuntimes] = Metadata{
+		f:           Toolchain.newLLVMRuntimes,
+		Name:        "llvm-runtimes",
+		Description: "LLVM runtimes: libunwind, libcxx, libcxxabi",
+		Website:     "https://llvm.org/",
+		Dependencies: P{
+			CompilerRT,
+		},
+	}
+}
+
+func (t Toolchain) newClang() (pkg.Artifact, string) {
+	target := "'AArch64;RISCV;X86'"
+	if t.isStage0() {
+		target = "Native"
+	}
+	return t.NewPackage("clang", llvmVersion, t.Load(llvmSource), &PackageAttr{
+		NonStage0: t.AppendPresets(nil, LLVMRuntimes),
+		Env: stage0ExclConcat(t, []string{},
+			"CFLAGS="+earlyCFLAGS,
+			"CXXFLAGS="+earlyCXXFLAGS(),
+			"LDFLAGS="+earlyLDFLAGS(false),
+		),
+		Flag: TExclusive,
+	}, &CMakeHelper{
+		Append: []string{"llvm"},
+		Cache: []KV{
+			{"CMAKE_BUILD_TYPE", "Release"},
+			{"LLVM_HOST_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LLVM_DEFAULT_TARGET_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LLVM_ENABLE_PROJECTS", "'clang;lld'"},
+			{"LLVM_ENABLE_LIBCXX", "ON"},
+			{"LLVM_USE_LINKER", "lld"},
+			{"LLVM_INSTALL_BINUTILS_SYMLINKS", "ON"},
+			{"LLVM_INSTALL_CCTOOLS_SYMLINKS", "ON"},
+			{"LLVM_LIT_ARGS", "'--verbose'"},
+			{"CLANG_DEFAULT_LINKER", "lld"},
+			{"CLANG_DEFAULT_CXX_STDLIB", "libc++"},
+			{"CLANG_DEFAULT_RTLIB", "compiler-rt"},
+			{"CLANG_DEFAULT_UNWINDLIB", "libunwind"},
+			{"LLVM_TARGETS_TO_BUILD", target},
+			{"CMAKE_CROSSCOMPILING", "OFF"},
+			{"CXX_SUPPORTS_CUSTOM_LINKER", "ON"},
+			{"LLVM_ENABLE_ZLIB", "OFF"},
+			{"LLVM_ENABLE_ZSTD", "OFF"},
+			{"LLVM_ENABLE_LIBXML2", "OFF"},
+		},
+		Script: `
+ln -s ld.lld /work/system/bin/ld
+ln -s clang /work/system/bin/cc
+ln -s clang++ /work/system/bin/c++
+ninja ` + jobsFlagE + ` check-all
+`,
+	},
+	Python,
+	Perl,
+	Diffutils,
+	Bash,
+	Gawk,
+	Coreutils,
+	Findutils,
+	KernelHeaders,
+	), llvmVersion
+}
+
+func init() {
+	artifactsM[Clang] = Metadata{
+		f:           Toolchain.newClang,
+		Name:        "clang",
+		Description: `an "LLVM native" C/C++/Objective-C compiler`,
+		Website:     "https://llvm.org/",
+		Dependencies: P{
+			LLVMRuntimes,
+		},
+	}
+}
+
+func (t Toolchain) newLibclc() (pkg.Artifact, string) {
+	return t.NewPackage("libclc", llvmVersion, t.Load(llvmSource), nil, &CMakeHelper{
+		Append: []string{"libclc"},
+		Cache: []KV{
+			{"CMAKE_BUILD_TYPE", "Release"},
+			{"LLVM_HOST_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LLVM_DEFAULT_TARGET_TRIPLE", `"${ROSA_TRIPLE}"`},
+			{"LIBCLC_TARGETS_TO_BUILD", "all"},
+		},
+		Script: "ninja " + jobsFlagE + " test",
+	}), llvmVersion
+}
+
+func init() {
+	artifactsM[Libclc] = Metadata{
+		f:           Toolchain.newLibclc,
+		Name:        "libclc",
+		Description: "an open source, BSD/MIT dual licensed implementation of the library requirements of the OpenCL C programming language",
+		Website:     "https://libclc.llvm.org/",
+	}
+}


@@ -0,0 +1,191 @@
package rosa
// llvmPatches are centralised patches against latest LLVM monorepo.
var llvmPatches = []KV{
{"increase-stack-size-unconditional", `diff --git a/llvm/lib/Support/Threading.cpp b/llvm/lib/Support/Threading.cpp
index 9da357a7ebb9..b2931510c1ae 100644
--- a/llvm/lib/Support/Threading.cpp
+++ b/llvm/lib/Support/Threading.cpp
@@ -80,7 +80,7 @@ unsigned llvm::ThreadPoolStrategy::compute_thread_count() const {
// keyword.
#include "llvm/Support/thread.h"
-#if defined(__APPLE__)
+#if defined(__APPLE__) || 1
// Darwin's default stack size for threads except the main one is only 512KB,
// which is not enough for some/many normal LLVM compilations. This implements
// the same interface as std::thread but requests the same stack size as the
`},
{"add-rosa-vendor", `diff --git a/llvm/include/llvm/TargetParser/Triple.h b/llvm/include/llvm/TargetParser/Triple.h
index 9c83abeeb3b1..5acfe5836a23 100644
--- a/llvm/include/llvm/TargetParser/Triple.h
+++ b/llvm/include/llvm/TargetParser/Triple.h
@@ -190,6 +190,7 @@ public:
Apple,
PC,
+ Rosa,
SCEI,
Freescale,
IBM,
diff --git a/llvm/lib/TargetParser/Triple.cpp b/llvm/lib/TargetParser/Triple.cpp
index a4f9dd42c0fe..cb5a12387034 100644
--- a/llvm/lib/TargetParser/Triple.cpp
+++ b/llvm/lib/TargetParser/Triple.cpp
@@ -279,6 +279,7 @@ StringRef Triple::getVendorTypeName(VendorType Kind) {
case NVIDIA: return "nvidia";
case OpenEmbedded: return "oe";
case PC: return "pc";
+ case Rosa: return "rosa";
case SCEI: return "scei";
case SUSE: return "suse";
case Meta:
@@ -689,6 +690,7 @@ static Triple::VendorType parseVendor(StringRef VendorName) {
return StringSwitch<Triple::VendorType>(VendorName)
.Case("apple", Triple::Apple)
.Case("pc", Triple::PC)
+ .Case("rosa", Triple::Rosa)
.Case("scei", Triple::SCEI)
.Case("sie", Triple::SCEI)
.Case("fsl", Triple::Freescale)
`},
{"xfail-broken-tests", `diff --git a/clang/test/Modules/timestamps.c b/clang/test/Modules/timestamps.c
index 50fdce630255..4b4465a75617 100644
--- a/clang/test/Modules/timestamps.c
+++ b/clang/test/Modules/timestamps.c
@@ -1,3 +1,5 @@
+// XFAIL: target={{.*-rosa-linux-musl}}
+
/// Verify timestamps that gets embedded in the module
#include <c-header.h>
`},
{"path-system-include", `diff --git a/clang/lib/Driver/ToolChains/Linux.cpp b/clang/lib/Driver/ToolChains/Linux.cpp
index 8ac8d4eb9181..e46b04a898ca 100644
--- a/clang/lib/Driver/ToolChains/Linux.cpp
+++ b/clang/lib/Driver/ToolChains/Linux.cpp
@@ -671,6 +671,12 @@ void Linux::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(
DriverArgs, CC1Args,
concat(SysRoot, "/usr/include", MultiarchIncludeDir));
+ if (!MultiarchIncludeDir.empty() &&
+ D.getVFS().exists(concat(SysRoot, "/system/include", MultiarchIncludeDir)))
+ addExternCSystemInclude(
+ DriverArgs, CC1Args,
+ concat(SysRoot, "/system/include", MultiarchIncludeDir));
+
if (getTriple().getOS() == llvm::Triple::RTEMS)
return;
@@ -681,6 +687,7 @@ void Linux::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/include"));
addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/usr/include"));
+ addExternCSystemInclude(DriverArgs, CC1Args, concat(SysRoot, "/system/include"));
if (!DriverArgs.hasArg(options::OPT_nobuiltininc) && getTriple().isMusl())
addSystemInclude(DriverArgs, CC1Args, ResourceDirInclude);
`},
{"path-system-libraries", `diff --git a/clang/lib/Driver/ToolChains/Linux.cpp b/clang/lib/Driver/ToolChains/Linux.cpp
index 8ac8d4eb9181..f4d1347ab64d 100644
--- a/clang/lib/Driver/ToolChains/Linux.cpp
+++ b/clang/lib/Driver/ToolChains/Linux.cpp
@@ -282,6 +282,7 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
const bool IsHexagon = Arch == llvm::Triple::hexagon;
const bool IsRISCV = Triple.isRISCV();
const bool IsCSKY = Triple.isCSKY();
+ const bool IsRosa = Triple.getVendor() == llvm::Triple::Rosa;
if (IsCSKY && !SelectedMultilibs.empty())
SysRoot = SysRoot + SelectedMultilibs.back().osSuffix();
@@ -318,12 +319,23 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
const std::string OSLibDir = std::string(getOSLibDir(Triple, Args));
const std::string MultiarchTriple = getMultiarchTriple(D, Triple, SysRoot);
+ if (IsRosa) {
+ ExtraOpts.push_back("-rpath");
+ ExtraOpts.push_back("/system/lib");
+ ExtraOpts.push_back("-rpath");
+ ExtraOpts.push_back(concat("/system/lib", MultiarchTriple));
+ }
+
// mips32: Debian multilib, we use /libo32, while in other case, /lib is
// used. We need add both libo32 and /lib.
if (Arch == llvm::Triple::mips || Arch == llvm::Triple::mipsel) {
Generic_GCC::AddMultilibPaths(D, SysRoot, "libo32", MultiarchTriple, Paths);
- addPathIfExists(D, concat(SysRoot, "/libo32"), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr/libo32"), Paths);
+ if (!IsRosa) {
+ addPathIfExists(D, concat(SysRoot, "/libo32"), Paths);
+ addPathIfExists(D, concat(SysRoot, "/usr/libo32"), Paths);
+ } else {
+ addPathIfExists(D, concat(SysRoot, "/system/libo32"), Paths);
+ }
}
Generic_GCC::AddMultilibPaths(D, SysRoot, OSLibDir, MultiarchTriple, Paths);
@@ -341,18 +353,30 @@ Linux::Linux(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
Paths);
}
- addPathIfExists(D, concat(SysRoot, "/usr/lib", MultiarchTriple), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir), Paths);
+ if (!IsRosa) {
+ addPathIfExists(D, concat(SysRoot, "/usr/lib", MultiarchTriple), Paths);
+ addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir), Paths);
+ } else {
+ addPathIfExists(D, concat(SysRoot, "/system/lib", MultiarchTriple), Paths);
+ addPathIfExists(D, concat(SysRoot, "/system", OSLibDir), Paths);
+ }
if (IsRISCV) {
StringRef ABIName = tools::riscv::getRISCVABI(Args, Triple);
addPathIfExists(D, concat(SysRoot, "/", OSLibDir, ABIName), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir, ABIName), Paths);
+ if (!IsRosa)
+ addPathIfExists(D, concat(SysRoot, "/usr", OSLibDir, ABIName), Paths);
+ else
+ addPathIfExists(D, concat(SysRoot, "/system", OSLibDir, ABIName), Paths);
}
Generic_GCC::AddMultiarchPaths(D, SysRoot, OSLibDir, Paths);
- addPathIfExists(D, concat(SysRoot, "/lib"), Paths);
- addPathIfExists(D, concat(SysRoot, "/usr/lib"), Paths);
+ if (!IsRosa) {
+ addPathIfExists(D, concat(SysRoot, "/lib"), Paths);
+ addPathIfExists(D, concat(SysRoot, "/usr/lib"), Paths);
+ } else {
+ addPathIfExists(D, concat(SysRoot, "/system/lib"), Paths);
+ }
}
ToolChain::RuntimeLibType Linux::GetDefaultRuntimeLibType() const {
@@ -457,6 +481,9 @@ std::string Linux::getDynamicLinker(const ArgList &Args) const {
return Triple.isArch64Bit() ? "/system/bin/linker64" : "/system/bin/linker";
}
if (Triple.isMusl()) {
+ if (Triple.getVendor() == llvm::Triple::Rosa)
+ return "/system/bin/linker";
+
std::string ArchName;
bool IsArm = false;
diff --git a/clang/tools/clang-installapi/Options.cpp b/clang/tools/clang-installapi/Options.cpp
index 64324a3f8b01..15ce70b68217 100644
--- a/clang/tools/clang-installapi/Options.cpp
+++ b/clang/tools/clang-installapi/Options.cpp
@@ -515,7 +515,7 @@ bool Options::processFrontendOptions(InputArgList &Args) {
FEOpts.FwkPaths = std::move(FrameworkPaths);
// Add default framework/library paths.
- PathSeq DefaultLibraryPaths = {"/usr/lib", "/usr/local/lib"};
+ PathSeq DefaultLibraryPaths = {"/usr/lib", "/system/lib", "/usr/local/lib"};
PathSeq DefaultFrameworkPaths = {"/Library/Frameworks",
"/System/Library/Frameworks"};
`},
}
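Each entry above pairs a short name with a git-format unified diff, and a single patch body may span several files (as in path-system-libraries). As a rough sketch — assuming KV is a simple name/value string pair, which is not shown in this excerpt — the files a patch touches can be recovered from its `diff --git` headers:

```go
package main

import (
	"fmt"
	"strings"
)

// kv mirrors the assumed shape of KV: a patch name paired with its diff text.
type kv struct{ Name, Patch string }

// patchTargets lists the b/-side paths named by the "diff --git" headers
// of a unified diff, one entry per file the patch touches.
func patchTargets(patch string) []string {
	var targets []string
	for _, line := range strings.Split(patch, "\n") {
		rest, ok := strings.CutPrefix(line, "diff --git a/")
		if !ok {
			continue
		}
		// the " b/" separator cannot occur inside the a/ path (no spaces)
		if i := strings.Index(rest, " b/"); i >= 0 {
			targets = append(targets, rest[i+len(" b/"):])
		}
	}
	return targets
}

func main() {
	p := kv{"increase-stack-size-unconditional", `diff --git a/llvm/lib/Support/Threading.cpp b/llvm/lib/Support/Threading.cpp
--- a/llvm/lib/Support/Threading.cpp
+++ b/llvm/lib/Support/Threading.cpp
`}
	fmt.Println(patchTargets(p.Patch))
}
```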


@@ -20,9 +20,9 @@ cd "$(mktemp -d)"
 	--disable-dependency-tracking
 ./build.sh
 ./make DESTDIR=/work install check
-`, pkg.Path(AbsUsrSrc.Append("make"), false, pkg.NewHTTPGetTar(
-	nil, "https://ftpmirror.gnu.org/gnu/make/make-"+version+".tar.gz",
-	mustDecode(checksum),
+`, pkg.Path(AbsUsrSrc.Append("make"), false, newTar(
+	"https://ftpmirror.gnu.org/gnu/make/make-"+version+".tar.gz",
+	checksum,
 	pkg.TarGzip,
 ))), version
 }
@@ -78,11 +78,6 @@ type MakeHelper struct {
 
 var _ Helper = new(MakeHelper)
 
-// name returns its arguments joined with '-'.
-func (*MakeHelper) name(name, version string) string {
-	return name + "-" + version
-}
-
 // extra returns make and other optional dependencies.
 func (attr *MakeHelper) extra(flag int) P {
 	extra := P{Make}
@@ -176,7 +171,10 @@ func (attr *MakeHelper) script(name string) string {
 				s = "-" + s
 			}
 			if v[1] != "" {
-				s += "=" + v[1]
+				if v[0] != "" {
+					s += "="
+				}
+				s += v[1]
 			}
 			if !yield(s) {
 				return
@@ -190,7 +188,7 @@ func (attr *MakeHelper) script(name string) string {
 
 	scriptMake := `
 make \
-	"-j$(nproc)"`
+` + jobsFlagE
 	if len(attr.Make) > 0 {
 		scriptMake += " \\\n\t" + strings.Join(attr.Make, " \\\n\t")
 	}
@@ -198,7 +196,7 @@ make \
 
 	if !attr.SkipCheck {
 		scriptMake += attr.ScriptCheckEarly + `make \
-	"-j$(nproc)" \
+` + jobsFlagE + ` \
 `
 		if len(attr.Check) > 0 {
 			scriptMake += strings.Join(attr.Check, " \\\n\t")

internal/rosa/mesa.go (new file)

@@ -0,0 +1,66 @@
package rosa
import "hakurei.app/internal/pkg"
func (t Toolchain) newLibglvnd() (pkg.Artifact, string) {
const (
version = "1.7.0"
checksum = "eIQJK2sgFQDHdeFkQO87TrSUaZRFG4y2DrwA8Ut-sGboI59uw1OOiIVqq2AIwnGY"
)
return t.NewPackage("libglvnd", version, newFromGitLab(
"gitlab.freedesktop.org",
"glvnd/libglvnd",
"v"+version,
checksum,
), nil, (*MesonHelper)(nil),
Binutils, // symbols check fail with llvm nm
), version
}
func init() {
artifactsM[Libglvnd] = Metadata{
f: Toolchain.newLibglvnd,
Name: "libglvnd",
Description: "The GL Vendor-Neutral Dispatch library",
Website: "https://gitlab.freedesktop.org/glvnd/libglvnd",
ID: 12098,
}
}
func (t Toolchain) newLibdrm() (pkg.Artifact, string) {
const (
version = "2.4.131"
checksum = "riHPSpvTnvCPbR-iT4jt7_X-z4rpwm6oNh9ZN2zP6RBFkFVxBRKmedG4eEXSADIh"
)
return t.NewPackage("libdrm", version, newFromGitLab(
"gitlab.freedesktop.org",
"mesa/libdrm",
"libdrm-"+version,
checksum,
), nil, &MesonHelper{
Setup: []KV{
{"Dintel", "enabled"},
},
},
Binutils, // symbols check fail with llvm nm
Libpciaccess,
KernelHeaders,
), version
}
func init() {
artifactsM[Libdrm] = Metadata{
f: Toolchain.newLibdrm,
Name: "libdrm",
Description: "a userspace library for accessing the DRM",
Website: "https://dri.freedesktop.org/",
Dependencies: P{
Libpciaccess,
},
ID: 1596,
}
}


@@ -9,26 +9,16 @@ import (
 func (t Toolchain) newMeson() (pkg.Artifact, string) {
 	const (
-		version  = "1.10.2"
-		checksum = "18VmKUVKuXCwtawkYCeYHseC3cKpi86OhnIPaV878wjY0rkXH8XnQwUyymnxFgcl"
+		version  = "1.11.0"
+		checksum = "QJolMPzypTiS65GReSNPPlkUjHI6b1EDpZ-avIk3n6b6TQ93KfUM57DVUpY97Hf7"
 	)
-	return t.New("meson-"+version, 0, []pkg.Artifact{
-		t.Load(Zlib),
-		t.Load(Python),
-		t.Load(Setuptools),
-	}, nil, nil, `
-cd /usr/src/meson
-chmod -R +w meson.egg-info
-python3 setup.py \
-	install \
-	--prefix=/system \
-	--root=/work
-`, pkg.Path(AbsUsrSrc.Append("meson"), true, pkg.NewHTTPGetTar(
-		nil, "https://github.com/mesonbuild/meson/releases/download/"+
-			version+"/meson-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
-	))), version
+	return t.NewPackage("meson", version, newFromGitHub(
+		"mesonbuild/meson",
+		version,
+		checksum,
+	), nil, (*PipHelper)(nil),
+		Setuptools,
+	), version
 }
 
 func init() {
 	artifactsM[Meson] = Metadata{
@@ -66,11 +56,6 @@ type MesonHelper struct {
 
 var _ Helper = new(MesonHelper)
 
-// name returns its arguments joined with '-'.
-func (*MesonHelper) name(name, version string) string {
-	return name + "-" + version
-}
-
 // extra returns hardcoded meson runtime dependencies.
 func (*MesonHelper) extra(int) P { return P{Meson} }


@@ -26,10 +26,9 @@ cp -v lksh /work/system/bin/sh
 mkdir -p /work/bin/
 ln -vs ../system/bin/sh /work/bin/
-`, pkg.Path(AbsUsrSrc.Append("mksh"), false, pkg.NewHTTPGetTar(
-	nil,
+`, pkg.Path(AbsUsrSrc.Append("mksh"), false, newTar(
 	"https://mbsd.evolvis.org/MirOS/dist/mir/mksh/mksh-R"+version+".tgz",
-	mustDecode(checksum),
+	checksum,
 	pkg.TarGzip,
 ))), version
 }


@@ -7,11 +7,10 @@ func (t Toolchain) newMuslFts() (pkg.Artifact, string) {
 		version  = "1.2.7"
 		checksum = "N_p_ZApX3eHt7xoDCw1hLf6XdJOw7ZSx7xPvpvAP0knG2zgU0zeN5w8tt5Pg60XJ"
 	)
-	return t.NewPackage("musl-fts", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/void-linux/musl-fts/archive/refs/tags/"+
-			"v"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.NewPackage("musl-fts", version, newFromGitHub(
+		"void-linux/musl-fts",
+		"v"+version,
+		checksum,
 	), &PackageAttr{
 		Env: []string{
 			"CC=cc -fPIC",


@@ -7,11 +7,10 @@ func (t Toolchain) newMuslObstack() (pkg.Artifact, string) {
 		version  = "1.2.3"
 		checksum = "tVRY_KjIlkkMszcaRlkKdBVQHIXTT_T_TiMxbwErlILXrOBosocg8KklppZhNdCG"
 	)
-	return t.NewPackage("musl-obstack", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/void-linux/musl-obstack/archive/refs/tags/"+
-			"v"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.NewPackage("musl-obstack", version, newFromGitHub(
+		"void-linux/musl-obstack",
+		"v"+version,
+		checksum,
 	), &PackageAttr{
 		Env: []string{
 			"CC=cc -fPIC",


@@ -4,7 +4,6 @@ import "hakurei.app/internal/pkg"
 
 func (t Toolchain) newMusl(
 	headers bool,
-	env []string,
 	extra ...pkg.Artifact,
 ) (pkg.Artifact, string) {
 	const (
@@ -37,9 +36,9 @@ rmdir -v /work/lib
 		helper.Script = ""
 	}
-	return t.NewPackage(name, version, pkg.NewHTTPGetTar(
-		nil, "https://musl.libc.org/releases/musl-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage(name, version, newTar(
+		"https://musl.libc.org/releases/musl-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		NonStage0: extra,
@@ -47,7 +46,15 @@ rmdir -v /work/lib
 		// expected to be writable in copies
 		Chmod: true,
-		Env: env,
+		Env: stage0ExclConcat(t, []string{
+			"CC=clang",
+			"LIBCC=/system/lib/clang/" + llvmVersionMajor + "/lib/" +
+				triplet() + "/libclang_rt.builtins.a",
+			"AR=ar",
+			"RANLIB=ranlib",
+		},
+			"LDFLAGS="+earlyLDFLAGS(false),
+		),
 	}, &helper,
 		Coreutils,
 	), version
@@ -55,7 +62,7 @@ rmdir -v /work/lib
 func init() {
 	artifactsM[Musl] = Metadata{
 		f: func(t Toolchain) (pkg.Artifact, string) {
-			return t.newMusl(false, nil)
+			return t.newMusl(false, t.Load(CompilerRT))
 		},
 		Name: "musl",


@@ -7,9 +7,9 @@ func (t Toolchain) newNcurses() (pkg.Artifact, string) {
 		version  = "6.6"
 		checksum = "XvWp4xi6hR_hH8XUoGY26L_pqBSDapJYulhzZqPuR0KNklqypqNc1yNXU-nOjf5w"
 	)
-	return t.NewPackage("ncurses", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/ncurses/ncurses-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("ncurses", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/ncurses/ncurses-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &MakeHelper{
 		// "tests" are actual demo programs, not a test suite.


@@ -7,10 +7,10 @@ func (t Toolchain) newLibmnl() (pkg.Artifact, string) {
 		version  = "1.0.5"
 		checksum = "DN-vbbvQDpxXJm0TJ6xlluILvfrB86avrCTX50XyE9SEFSAZ_o8nuKc5Gu0Am7-u"
 	)
-	return t.NewPackage("libmnl", version, pkg.NewHTTPGetTar(
-		nil, "https://www.netfilter.org/projects/libmnl/files/"+
+	return t.NewPackage("libmnl", version, newTar(
+		"https://www.netfilter.org/projects/libmnl/files/"+
 			"libmnl-"+version+".tar.bz2",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarBzip2,
 	), &PackageAttr{
 		Patches: []KV{
@@ -55,10 +55,9 @@ func (t Toolchain) newLibnftnl() (pkg.Artifact, string) {
 		version  = "1.3.1"
 		checksum = "91ou66K-I17iX6DB6hiQkhhC_v4DFW5iDGzwjVRNbJNEmKqowLZBlh3FY-ZDO0r9"
 	)
-	return t.NewPackage("libnftnl", version, t.NewViaGit(
+	return t.NewPackage("libnftnl", version, t.newTagRemote(
 		"https://git.netfilter.org/libnftnl",
-		"refs/tags/libnftnl-"+version,
-		mustDecode(checksum),
+		"libnftnl-"+version, checksum,
 	), &PackageAttr{
 		Env: []string{
 			"CFLAGS=-D_GNU_SOURCE",
@@ -98,10 +97,9 @@ func (t Toolchain) newIPTables() (pkg.Artifact, string) {
 		version  = "1.8.13"
 		checksum = "TUA-cFIAsiMvtRR-XzQvXzoIhJUOc9J2gQDJCbBRjmgmVfGfPTCf58wL7e-cUKVQ"
 	)
-	return t.NewPackage("iptables", version, t.NewViaGit(
+	return t.NewPackage("iptables", version, t.newTagRemote(
 		"https://git.netfilter.org/iptables",
-		"refs/tags/v"+version,
-		mustDecode(checksum),
+		"v"+version, checksum,
 	), &PackageAttr{
 		ScriptEarly: `
 rm \


@@ -7,9 +7,9 @@ func (t Toolchain) newNettle() (pkg.Artifact, string) {
 		version  = "4.0"
 		checksum = "6agC-vHzzoqAlaX3K9tX8yHgrm03HLqPZzVzq8jh_ePbuPMIvpxereu_uRJFmQK7"
 	)
-	return t.NewPackage("nettle", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/nettle/nettle-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("nettle", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/nettle/nettle-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 		M4,


@@ -7,9 +7,9 @@ func (t Toolchain) newNettle3() (pkg.Artifact, string) {
 		version  = "3.10.2"
 		checksum = "07aXlj10X5llf67jIqRQAA1pgLSgb0w_JYggZVPuKNoc-B-_usb5Kr8FrfBe7g1S"
 	)
-	return t.NewPackage("nettle", version, pkg.NewHTTPGetTar(
-		nil, "https://ftpmirror.gnu.org/gnu/nettle/nettle-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("nettle", version, newTar(
+		"https://ftpmirror.gnu.org/gnu/nettle/nettle-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 		M4,


@@ -13,25 +13,26 @@ func (t Toolchain) newNinja() (pkg.Artifact, string) {
 	}, nil, nil, `
 cd "$(mktemp -d)"
 python3 /usr/src/ninja/configure.py \
+	--verbose \
 	--bootstrap \
 	--gtest-source-dir=/usr/src/googletest
-./ninja all
+./ninja `+jobsFlagE+` all
 ./ninja_test
 mkdir -p /work/system/bin/
 cp ninja /work/system/bin/
 `, pkg.Path(AbsUsrSrc.Append("googletest"), false,
-	pkg.NewHTTPGetTar(
-		nil, "https://github.com/google/googletest/releases/download/"+
-			"v1.16.0/googletest-1.16.0.tar.gz",
-		mustDecode("NjLGvSbgPy_B-y-o1hdanlzEzaYeStFcvFGxpYV3KYlhrWWFRcugYhM3ZMzOA9B_"),
+	newFromGitHubRelease(
+		"google/googletest",
+		"v1.16.0",
+		"googletest-1.16.0.tar.gz",
+		"NjLGvSbgPy_B-y-o1hdanlzEzaYeStFcvFGxpYV3KYlhrWWFRcugYhM3ZMzOA9B_",
 		pkg.TarGzip,
 	)), pkg.Path(AbsUsrSrc.Append("ninja"), true, t.NewPatchedSource(
-		"ninja", version, pkg.NewHTTPGetTar(
-			nil, "https://github.com/ninja-build/ninja/archive/refs/tags/"+
-				"v"+version+".tar.gz",
-			mustDecode(checksum),
-			pkg.TarGzip,
+		"ninja", version, newFromGitHub(
+			"ninja-build/ninja",
+			"v"+version,
+			checksum,
 		), false,
 	))), version
 }


@@ -8,17 +8,16 @@ import (
 func (t Toolchain) newNSS() (pkg.Artifact, string) {
 	const (
-		version   = "3.122"
-		checksum  = "QvC6TBO4BAUEh6wmgUrb1hwH5podQAN-QdcAaWL32cWEppmZs6oKkZpD9GvZf59S"
+		version   = "3.123"
+		checksum  = "pwBz0FO8jmhejPblfzNQLGsqBBGT0DwAw-z9yBJH3V3hVJBMKSc1l0R8GC0_BnzF"
 		version0  = "4_38_2"
 		checksum0 = "25x2uJeQnOHIiq_zj17b4sYqKgeoU8-IsySUptoPcdHZ52PohFZfGuIisBreWzx0"
 	)
-	return t.NewPackage("nss", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/nss-dev/nss/archive/refs/tags/"+
-			"NSS_"+strings.Join(strings.SplitN(version, ".", 2), "_")+"_RTM.tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.NewPackage("nss", version, newFromGitHub(
+		"nss-dev/nss",
+		"NSS_"+strings.Join(strings.SplitN(version, ".", 3), "_")+"_RTM",
+		checksum,
 	), &PackageAttr{
 		Paths: []pkg.ExecPath{
 			pkg.Path(AbsUsrSrc.Append("nspr.zip"), false, pkg.NewHTTPGet(
@@ -83,16 +82,30 @@ func init() {
 	}
 }
 
-func init() {
-	const version = "0.4.0"
-	artifactsM[buildcatrust] = newViaPip(
-		"buildcatrust",
-		"transform certificate stores between formats",
-		version, "py3", "none", "any",
-		"k_FGzkRCLjbTWBkuBLzQJ1S8FPAz19neJZlMHm0t10F2Y0hElmvVwdSBRc03Rjo1",
-		"https://github.com/nix-community/buildcatrust/"+
-			"releases/download/v"+version+"/",
+func (t Toolchain) newBuildCATrust() (pkg.Artifact, string) {
+	const (
+		version  = "0.5.1"
+		checksum = "g9AqIksz-hvCUceSR7ZKwfqf8Y_UsJU_3_zLUIdc4IkxFVkgdv9kKVvhFjE4s1-7"
 	)
+	return t.newViaPip("buildcatrust", version,
+		"https://github.com/nix-community/buildcatrust/releases/"+
+			"download/v"+version+"/buildcatrust-"+version+"-py3-none-any.whl",
+		checksum), version
+}
+
+func init() {
+	artifactsM[buildcatrust] = Metadata{
+		f:           Toolchain.newBuildCATrust,
+		Name:        "buildcatrust",
+		Description: "transform certificate stores between formats",
+		Website:     "https://github.com/nix-community/buildcatrust",
+		Dependencies: P{
+			Python,
+		},
+		ID: 233988,
+	}
 }
 
 func (t Toolchain) newNSSCACert() (pkg.Artifact, string) {


@@ -7,10 +7,11 @@ func (t Toolchain) newOpenSSL() (pkg.Artifact, string) {
 		version  = "3.6.2"
 		checksum = "jH004dXTiE01Hp0kyShkWXwrSHEksZi4i_3v47D9H9Uz9LQ1aMwF7mrl2Tb4t_XA"
 	)
-	return t.NewPackage("openssl", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/openssl/openssl/releases/download/"+
-			"openssl-"+version+"/openssl-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("openssl", version, newFromGitHubRelease(
+		"openssl/openssl",
+		"openssl-"+version,
+		"openssl-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		Env: []string{
@@ -24,9 +25,10 @@ func (t Toolchain) newOpenSSL() (pkg.Artifact, string) {
 			{"prefix", "/system"},
 			{"libdir", "lib"},
 			{"openssldir", "etc/ssl"},
+			{"", "no-docs"},
 		},
 		Check: []string{
-			`HARNESS_JOBS="$(expr "$(nproc)" '*' 2)"`,
+			"HARNESS_JOBS=" + jobsE,
 			"test",
 		},
 	},


@@ -7,9 +7,9 @@ func (t Toolchain) newP11Kit() (pkg.Artifact, string) {
 		version  = "0.26.2"
 		checksum = "3ei-6DUVtYzrRVe-SubtNgRlweXd6H2qHmUu-_5qVyIn6gSTvZbGS2u79Y8IFb2N"
 	)
-	return t.NewPackage("p11-kit", version, t.NewViaGit(
+	return t.NewPackage("p11-kit", version, t.newTagRemote(
 		"https://github.com/p11-glue/p11-kit.git",
-		"refs/tags/"+version, mustDecode(checksum),
+		version, checksum,
 	), nil, &MesonHelper{
 		Setup: []KV{
 			{"Dsystemd", "disabled"},


@@ -9,10 +9,11 @@ func (t Toolchain) newPCRE2() (pkg.Artifact, string) {
 		version  = "10.47"
 		checksum = "IbC24vVayju6nB9EhrBPSDexk22wDecdpyrjgC3nCZXkwTnUjq4CD2q5sopqu6CW"
 	)
-	return t.NewPackage("pcre2", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/PCRE2Project/pcre2/releases/download/"+
-			"pcre2-"+version+"/pcre2-"+version+".tar.bz2",
-		mustDecode(checksum),
+	return t.NewPackage("pcre2", version, newFromGitHubRelease(
+		"PCRE2Project/pcre2",
+		"pcre2-"+version,
+		"pcre2-"+version+".tar.bz2",
+		checksum,
 		pkg.TarBzip2,
 	), &PackageAttr{
 		ScriptEarly: `


@@ -2,6 +2,7 @@ package rosa
 
 import (
 	"slices"
+	"strings"
 
 	"hakurei.app/internal/pkg"
 )
@@ -11,9 +12,9 @@ func (t Toolchain) newPerl() (pkg.Artifact, string) {
 		version  = "5.42.2"
 		checksum = "Me_xFfgkRnVyG0sE6a74TktK2OUq9Z1LVJNEu_9RdZG3S2fbjfzNiuk2SJqHAgbm"
 	)
-	return t.NewPackage("perl", version, pkg.NewHTTPGetTar(
-		nil, "https://www.cpan.org/src/5.0/perl-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("perl", version, newTar(
+		"https://www.cpan.org/src/5.0/perl-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), &PackageAttr{
 		// uses source tree as scratch space
@@ -42,7 +43,7 @@ rm -f /system/bin/ps # perl does not like toybox ps
 			{"Duseshrplib"},
 		},
 		Check: []string{
-			"TEST_JOBS=256",
+			"TEST_JOBS=" + jobsLE,
 			"test_harness",
 		},
 		Install: `LD_LIBRARY_PATH="$PWD" ./perl -Ilib -I. installperl --destdir=/work`,
@@ -91,10 +92,10 @@ func (t Toolchain) newPerlModuleBuild() (pkg.Artifact, string) {
 		version  = "0.4234"
 		checksum = "ZKxEFG4hE1rqZt52zBL2LRZBMkYzhjb5-cTBXcsyA52EbPeeYyVxU176yAea8-Di"
 	)
-	return t.newViaPerlModuleBuild("Module-Build", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/L/LE/LEONT/"+
+	return t.newViaPerlModuleBuild("Module-Build", version, newTar(
+		"https://cpan.metacpan.org/authors/id/L/LE/LEONT/"+
 			"Module-Build-"+version+".tar.gz",
-		mustDecode(checksum),
+		checksum,
 		pkg.TarGzip,
 	), nil), version
 }
@@ -145,11 +146,11 @@ func (t Toolchain) newPerlLocaleGettext() (pkg.Artifact, string) {
 		version  = "1.07"
 		checksum = "cFq4BKFD1MWSoa7lsrPjpdo9kzPqd0jlRcBFUyL1L1isw8m3D_Sge_ff0MAu_9J3"
 	)
-	return t.newViaPerlMakeMaker("Locale::gettext", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/P/PV/PVANDRY/"+
-			"Locale-gettext-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("Locale::gettext", version, newFromCPAN(
+		"PVANDRY",
+		"Locale-gettext",
+		version,
+		checksum,
 	), nil), version
 }
 
 func init() {
@@ -159,6 +160,8 @@ func init() {
 		Name:        "perl-Locale::gettext",
 		Description: "message handling functions",
 		Website:     "https://metacpan.org/release/Locale-gettext",
+		ID:          7523,
 	}
 }
@@ -167,11 +170,11 @@ func (t Toolchain) newPerlPodParser() (pkg.Artifact, string) {
 		version  = "1.67"
 		checksum = "RdURu9mOfExk_loCp6abxlcQV3FycSNbTqhRS9i6JUqnYfGGEgercK30g0gjYyqe"
 	)
-	return t.newViaPerlMakeMaker("Pod::Parser", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/M/MA/MAREKR/"+
-			"Pod-Parser-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("Pod::Parser", version, newFromCPAN(
+		"MAREKR",
+		"Pod-Parser",
+		version,
+		checksum,
 	), nil), version
 }
 
 func init() {
@@ -181,6 +184,8 @@ func init() {
 		Name:        "perl-Pod::Parser",
 		Description: "base class for creating POD filters and translators",
 		Website:     "https://metacpan.org/release/Pod-Parser",
+		ID:          3244,
 	}
 }
@@ -189,11 +194,11 @@ func (t Toolchain) newPerlSGMLS() (pkg.Artifact, string) {
 		version  = "1.1"
 		checksum = "aZijn4MUqD-wfyZgdcCruCwl4SgDdu25cNmJ4_UvdAk9a7uz4gzMQdoeB6DQ6QOy"
 	)
-	return t.newViaPerlMakeMaker("SGMLS", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/R/RA/RAAB/"+
-			"SGMLSpm-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("SGMLS", version, newFromCPAN(
+		"RAAB",
+		"SGMLSpm",
+		version,
+		checksum,
 	), nil), version
 }
 
 func init() {
@@ -203,6 +208,22 @@ func init() {
 		Name:        "perl-SGMLS",
 		Description: "class for postprocessing the output from the sgmls and nsgmls parsers",
 		Website:     "https://metacpan.org/release/RAAB/SGMLSpm-1.1",
+		ID:          389576,
+		latest: func(v *Versions) string {
+			for _, s := range v.Stable {
+				_, m, ok := strings.Cut(s, ".")
+				if !ok {
+					continue
+				}
+				if len(m) > 1 && m[0] == '0' {
+					continue
+				}
+				return s
+			}
+			return v.Latest
+		},
 	}
 }
@@ -211,11 +232,11 @@ func (t Toolchain) newPerlTermReadKey() (pkg.Artifact, string) {
 		version  = "2.38"
 		checksum = "qerL8Xo7kD0f42PZoiEbmE8Roc_S9pOa27LXelY4DN_0UNy_u5wLrGHI8utNlaiI"
 	)
-	return t.newViaPerlMakeMaker("Term::ReadKey", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/J/JS/JSTOWE/"+
-			"TermReadKey-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("Term::ReadKey", version, newFromCPAN(
+		"JSTOWE",
+		"TermReadKey",
+		version,
+		checksum,
 	), nil), version
 }
 
 func init() {
@@ -225,6 +246,8 @@ func init() {
 		Name:        "perl-Term::ReadKey",
 		Description: "a perl module for simple terminal control",
 		Website:     "https://metacpan.org/release/TermReadKey",
+		ID:          3372,
 	}
 }
@@ -233,11 +256,11 @@ func (t Toolchain) newPerlTextCharWidth() (pkg.Artifact, string) {
 		version  = "0.04"
 		checksum = "G2p5RHU4_HiZ23ZusBA_enTlVMxz0J4esUx4CGcOPhY6xYTbp-aXWRN6lYZpzBw2"
 	)
-	return t.newViaPerlMakeMaker("Text::CharWidth", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/K/KU/KUBOTA/"+
-			"Text-CharWidth-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("Text::CharWidth", version, newFromCPAN(
+		"KUBOTA",
+		"Text-CharWidth",
+		version,
+		checksum,
 	), nil), version
 }
 
 func init() {
@@ -247,6 +270,8 @@ func init() {
 		Name:        "perl-Text::CharWidth",
 		Description: "get number of occupied columns of a string on terminal",
 		Website:     "https://metacpan.org/release/Text-CharWidth",
+		ID:          14380,
 	}
 }
@@ -255,11 +280,11 @@ func (t Toolchain) newPerlTextWrapI18N() (pkg.Artifact, string) {
 		version  = "0.06"
 		checksum = "Vmo89qLgxUqyQ6QmWJVqu60aQAUjrNKRjFQSXGnvClxofzRjiCa6idzPgJ4VkixM"
 	)
-	return t.newViaPerlMakeMaker("Text::WrapI18N", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/K/KU/KUBOTA/"+
-			"Text-WrapI18N-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("Text::WrapI18N", version, newFromCPAN(
+		"KUBOTA",
+		"Text-WrapI18N",
+		version,
+		checksum,
 	), nil,
 		PerlTextCharWidth,
 	), version
@@ -275,6 +300,8 @@ func init() {
 		Dependencies: P{
 			PerlTextCharWidth,
 		},
+		ID: 14385,
 	}
 }
@@ -283,11 +310,11 @@ func (t Toolchain) newPerlMIMECharset() (pkg.Artifact, string) {
 		version  = "1.013.1"
 		checksum = "Ou_ukcrOa1cgtE3mptinb-os3bdL1SXzbRDFZQF3prrJj-drc3rp_huay7iDLJol"
 	)
-	return t.newViaPerlMakeMaker("MIME::Charset", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/N/NE/NEZUMI/"+
-			"MIME-Charset-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("MIME::Charset", version, newFromCPAN(
+		"NEZUMI",
+		"MIME-Charset",
+		version,
+		checksum,
 	), nil), version
 }
 
 func init() {
@@ -297,34 +324,38 @@ func init() {
 		Name:        "perl-MIME::Charset",
 		Description: "Charset Information for MIME",
 		Website:     "https://metacpan.org/release/MIME-Charset",
+		ID:          3070,
 	}
 }
 
-func (t Toolchain) newPerlUnicodeGCString() (pkg.Artifact, string) {
+func (t Toolchain) newPerlUnicodeLineBreak() (pkg.Artifact, string) {
 	const (
 		version  = "2019.001"
 		checksum = "ZHVkh7EDgAUHnTpvXsnPAuWpgNoBImtY_9_8TIbo2co_WgUwEb0MtXPhI8pAZ5OH"
 	)
-	return t.newViaPerlMakeMaker("Unicode::GCString", version, pkg.NewHTTPGetTar(
-		nil, "https://cpan.metacpan.org/authors/id/N/NE/NEZUMI/"+
-			"Unicode-LineBreak-"+version+".tar.gz",
-		mustDecode(checksum),
-		pkg.TarGzip,
+	return t.newViaPerlMakeMaker("Unicode::LineBreak", version, newFromCPAN(
+		"NEZUMI",
+		"Unicode-LineBreak",
+		version,
+		checksum,
 	), nil,
 		PerlMIMECharset,
 	), version
 }
 
 func init() {
-	artifactsM[PerlUnicodeGCString] = Metadata{
-		f:           Toolchain.newPerlUnicodeGCString,
-		Name:        "perl-Unicode::GCString",
+	artifactsM[PerlUnicodeLineBreak] = Metadata{
+		f:           Toolchain.newPerlUnicodeLineBreak,
+		Name:        "perl-Unicode::LineBreak",
 		Description: "String as Sequence of UAX #29 Grapheme Clusters",
 		Website:     "https://metacpan.org/release/Unicode-LineBreak",
 		Dependencies: P{
 			PerlMIMECharset,
 		},
+		ID: 6033,
 	}
 }
@@ -333,11 +364,11 @@ func (t Toolchain) newPerlYAMLTiny() (pkg.Artifact, string) {
version = "1.76" version = "1.76"
checksum = "V1MV4KPym1LxSw8CRXqPR3K-l1hGHbT5Ob4t-9xju6R9X_CWyw6hI8wsMaNdHdBY" checksum = "V1MV4KPym1LxSw8CRXqPR3K-l1hGHbT5Ob4t-9xju6R9X_CWyw6hI8wsMaNdHdBY"
) )
return t.newViaPerlMakeMaker("YAML::Tiny", version, pkg.NewHTTPGetTar( return t.newViaPerlMakeMaker("YAML::Tiny", version, newFromCPAN(
nil, "https://cpan.metacpan.org/authors/id/E/ET/ETHER/"+ "ETHER",
"YAML-Tiny-"+version+".tar.gz", "YAML-Tiny",
mustDecode(checksum), version,
pkg.TarGzip, checksum,
), nil), version ), nil), version
} }
func init() { func init() {
@@ -347,5 +378,7 @@ func init() {
Name: "perl-YAML::Tiny", Name: "perl-YAML::Tiny",
Description: "read/write YAML files with as little code as possible", Description: "read/write YAML files with as little code as possible",
Website: "https://metacpan.org/release/YAML-Tiny", Website: "https://metacpan.org/release/YAML-Tiny",
ID: 3549,
} }
} }


@@ -7,11 +7,11 @@ func (t Toolchain) newPkgConfig() (pkg.Artifact, string) {
version = "0.29.2" version = "0.29.2"
checksum = "6UsGqEMA8EER_5b9N0b32UCqiRy39B6_RnPfvuslWhtFV1qYD4DfS10crGZN_TP2" checksum = "6UsGqEMA8EER_5b9N0b32UCqiRy39B6_RnPfvuslWhtFV1qYD4DfS10crGZN_TP2"
) )
return t.NewPackage("pkg-config", version, pkg.NewHTTPGetTar( return t.NewPackage("pkg-config", version, newFromGitLab(
nil, "https://gitlab.freedesktop.org/pkg-config/pkg-config/-/archive"+ "gitlab.freedesktop.org",
"/pkg-config-"+version+"/pkg-config-pkg-config-"+version+".tar.bz2", "pkg-config/pkg-config",
mustDecode(checksum), "pkg-config-"+version,
pkg.TarBzip2, checksum,
), nil, &MakeHelper{ ), nil, &MakeHelper{
Generate: "./autogen.sh --no-configure", Generate: "./autogen.sh --no-configure",
Configure: []KV{ Configure: []KV{


@@ -7,11 +7,11 @@ func (t Toolchain) newProcps() (pkg.Artifact, string) {
version = "4.0.6" version = "4.0.6"
checksum = "pl_fZLvDlv6iZTkm8l_tHFpzTDVFGCiSJEs3eu0zAX6u36AV36P_En8K7JPScRWM" checksum = "pl_fZLvDlv6iZTkm8l_tHFpzTDVFGCiSJEs3eu0zAX6u36AV36P_En8K7JPScRWM"
) )
return t.NewPackage("procps", version, pkg.NewHTTPGetTar( return t.NewPackage("procps", version, newFromGitLab(
nil, "https://gitlab.com/procps-ng/procps/-/archive/"+ "gitlab.com",
"v"+version+"/procps-v"+version+".tar.bz2", "procps-ng/procps",
mustDecode(checksum), "v"+version,
pkg.TarBzip2, checksum,
), nil, &MakeHelper{ ), nil, &MakeHelper{
Generate: "./autogen.sh", Generate: "./autogen.sh",
Configure: []KV{ Configure: []KV{


@@ -1,6 +1,7 @@
package rosa package rosa
import ( import (
"path"
"slices" "slices"
"strings" "strings"
@@ -12,10 +13,10 @@ func (t Toolchain) newPython() (pkg.Artifact, string) {
version = "3.14.4" version = "3.14.4"
checksum = "X0VRAAGOlCVldh4J9tRAE-YrJtDvqfQTJaqxKPXNX6YTPlwpR9GwA5WRIZDO-63s" checksum = "X0VRAAGOlCVldh4J9tRAE-YrJtDvqfQTJaqxKPXNX6YTPlwpR9GwA5WRIZDO-63s"
) )
return t.NewPackage("python", version, pkg.NewHTTPGetTar( return t.NewPackage("python", version, newTar(
nil, "https://www.python.org/ftp/python/"+version+ "https://www.python.org/ftp/python/"+version+
"/Python-"+version+".tgz", "/Python-"+version+".tgz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
// test_synopsis_sourceless assumes this is writable and checks __pycache__ // test_synopsis_sourceless assumes this is writable and checks __pycache__
@@ -80,26 +81,98 @@ func init() {
} }
} }
// newViaPip is a helper for installing python dependencies via pip. // PipHelper is the [Python] pip packaging helper.
func newViaPip( type PipHelper struct {
name, description, version, interpreter, abi, platform, checksum, prefix string, // Whether to omit --no-build-isolation.
BuildIsolation bool
// Whether to enter source after install.
EnterSource bool
// Runs after install.
Script string
}
var _ Helper = new(PipHelper)
// extra returns [Python].
func (*PipHelper) extra(int) P { return P{Python} }
// wantsChmod returns true.
func (*PipHelper) wantsChmod() bool { return true }
// wantsWrite is equivalent to wantsChmod.
func (attr *PipHelper) wantsWrite() bool { return attr.wantsChmod() }
// scriptEarly is a noop.
func (*PipHelper) scriptEarly() string { return "" }
// createDir returns false.
func (*PipHelper) createDir() bool { return false }
// wantsDir requests a new directory in TMPDIR.
func (*PipHelper) wantsDir() string { return `"$(mktemp -d)"` }
// script generates the pip3 install command.
func (attr *PipHelper) script(name string) string {
if attr == nil {
attr = new(PipHelper)
}
var extra string
if !attr.BuildIsolation {
extra += `
--no-build-isolation \`
}
script := attr.Script
if attr.EnterSource {
script = "cd '/usr/src/" + name + "'\n" + script
}
return `
pip3 install \
--no-index \
--prefix=/system \
--root=/work \` + extra + `
'/usr/src/` + name + `'
` + script
}
// newViaPip installs a Python wheel from a URL via pip.
func (t Toolchain) newViaPip(
name, version, url, checksum string,
extra ...PArtifact, extra ...PArtifact,
) Metadata { ) pkg.Artifact {
wname := name + "-" + version + "-" + interpreter + "-" + abi + "-" + platform + ".whl" return t.New(name+"-"+version, 0, t.AppendPresets(nil,
return Metadata{ slices.Concat(P{Python}, extra)...,
f: func(t Toolchain) (pkg.Artifact, string) { ), nil, nil, `
return t.New(name+"-"+version, 0, t.AppendPresets(nil,
slices.Concat(P{Python}, extra)...,
), nil, nil, `
pip3 install \ pip3 install \
--no-index \ --no-index \
--prefix=/system \ --prefix=/system \
--root=/work \ --root=/work \
/usr/src/`+wname+` '/usr/src/`+path.Base(url)+`'
`, pkg.Path(AbsUsrSrc.Append(wname), false, pkg.NewHTTPGet( `, pkg.Path(AbsUsrSrc.Append(path.Base(url)), false, pkg.NewHTTPGet(
nil, prefix+wname, nil, url,
mustDecode(checksum), mustDecode(checksum),
))), version )))
}
// newPypi creates [Metadata] for a [PyPI] package.
//
// [PyPI]: https://pypi.org/
func newPypi(
name string, id int,
description, version, interpreter, abi, platform, checksum string,
extra ...PArtifact,
) Metadata {
return Metadata{
f: func(t Toolchain) (pkg.Artifact, string) {
return t.newViaPip(name, version, "https://files.pythonhosted.org/"+path.Join(
"packages",
interpreter,
string(name[0]),
name,
name+"-"+version+"-"+interpreter+"-"+abi+"-"+platform+".whl",
), checksum, extra...), version
}, },
Name: "python-" + name, Name: "python-" + name,
@@ -107,6 +180,8 @@ pip3 install \
Website: "https://pypi.org/project/" + name + "/", Website: "https://pypi.org/project/" + name + "/",
Dependencies: slices.Concat(P{Python}, extra), Dependencies: slices.Concat(P{Python}, extra),
ID: id,
} }
} }
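The pythonhosted path newPypi assembles can be sketched in isolation. wheelURL is a hypothetical stand-in for illustration; that files.pythonhosted.org serves wheels under this interpreter/first-letter/name layout is taken from the diff, not verified here:

```go
package main

import (
	"fmt"
	"path"
)

// wheelURL mirrors the URL newPypi builds in the diff:
// packages/<interpreter>/<first letter of name>/<name>/<wheel filename>,
// where the filename follows the standard wheel naming convention
// {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl.
func wheelURL(name, version, interpreter, abi, platform string) string {
	return "https://files.pythonhosted.org/" + path.Join(
		"packages",
		interpreter,
		string(name[0]),
		name,
		name+"-"+version+"-"+interpreter+"-"+abi+"-"+platform+".whl",
	)
}

func main() {
	// matches the pygments entry registered in this diff
	fmt.Println(wheelURL("pygments", "2.20.0", "py3", "none", "any"))
}
```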
@@ -115,20 +190,12 @@ func (t Toolchain) newSetuptools() (pkg.Artifact, string) {
version = "82.0.1" version = "82.0.1"
checksum = "nznP46Tj539yqswtOrIM4nQgwLA1h-ApKX7z7ghazROCpyF5swtQGwsZoI93wkhc" checksum = "nznP46Tj539yqswtOrIM4nQgwLA1h-ApKX7z7ghazROCpyF5swtQGwsZoI93wkhc"
) )
return t.New("setuptools-"+version, 0, t.AppendPresets(nil, return t.NewPackage("setuptools", version, newFromGitHub(
Python, "pypa/setuptools",
), nil, nil, ` "v"+version, checksum,
pip3 install \ ), nil, &PipHelper{
--no-index \ BuildIsolation: true,
--prefix=/system \ }), version
--root=/work \
/usr/src/setuptools
`, pkg.Path(AbsUsrSrc.Append("setuptools"), true, pkg.NewHTTPGetTar(
nil, "https://github.com/pypa/setuptools/archive/refs/tags/"+
"v"+version+".tar.gz",
mustDecode(checksum),
pkg.TarGzip,
))), version
} }
func init() { func init() {
artifactsM[Setuptools] = Metadata{ artifactsM[Setuptools] = Metadata{
@@ -147,52 +214,71 @@ func init() {
} }
func init() { func init() {
artifactsM[PythonPygments] = newViaPip( artifactsM[PythonPygments] = newPypi(
"pygments", "pygments", 3986,
" a syntax highlighting package written in Python", " a syntax highlighting package written in Python",
"2.19.2", "py3", "none", "any", "2.20.0", "py3", "none", "any",
"ak_lwTalmSr7W4Mjy2XBZPG9I6a0gwSy2pS87N8x4QEuZYif0ie9z0OcfRfi9msd", "qlyqX2YSXcV0Z8XgGaPttc_gkq-xsu_nYs6NFOcYnk-CX7qmcj45gG-h6DpwPIcO",
"https://files.pythonhosted.org/packages/"+
"c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/",
) )
artifactsM[PythonPluggy] = newViaPip( artifactsM[PythonPluggy] = newPypi(
"pluggy", "pluggy", 7500,
"the core framework used by the pytest, tox, and devpi projects", "the core framework used by the pytest, tox, and devpi projects",
"1.6.0", "py3", "none", "any", "1.6.0", "py3", "none", "any",
"2HWYBaEwM66-y1hSUcWI1MyE7dVVuNNRW24XD6iJBey4YaUdAK8WeXdtFMQGC-4J", "2HWYBaEwM66-y1hSUcWI1MyE7dVVuNNRW24XD6iJBey4YaUdAK8WeXdtFMQGC-4J",
"https://files.pythonhosted.org/packages/"+
"54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/",
) )
artifactsM[PythonPackaging] = newViaPip( artifactsM[PythonPackaging] = newPypi(
"packaging", "packaging", 60461,
"reusable core utilities for various Python Packaging interoperability specifications", "reusable core utilities for various Python Packaging interoperability specifications",
"26.0", "py3", "none", "any", "26.1", "py3", "none", "any",
"iVVXcqdwHDskPKoCFUlh2x8J0Gyq-bhO4ns9DvUJ7oJjeOegRYtSIvLV33Bki-pP", "6WZjBJeRb0eZZavxM8cLPcgD-ch-1FblsHoCFKC_9VUC5XAmd397LwliVhsnQcSN",
"https://files.pythonhosted.org/packages/"+
"b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/",
) )
artifactsM[PythonIniConfig] = newViaPip( artifactsM[PythonIniConfig] = newPypi(
"iniconfig", "iniconfig", 114778,
"a small and simple INI-file parser module", "a small and simple INI-file parser module",
"2.3.0", "py3", "none", "any", "2.3.0", "py3", "none", "any",
"SDgs4S5bXi77aVOeKTPv2TUrS3M9rduiK4DpU0hCmDsSBWqnZcWInq9lsx6INxut", "SDgs4S5bXi77aVOeKTPv2TUrS3M9rduiK4DpU0hCmDsSBWqnZcWInq9lsx6INxut",
"https://files.pythonhosted.org/packages/"+
"cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/",
) )
artifactsM[PythonPyTest] = newViaPip( artifactsM[PythonPyTest] = newPypi(
"pytest", "pytest", 3765,
"the pytest framework", "the pytest framework",
"9.0.2", "py3", "none", "any", "9.0.3", "py3", "none", "any",
"IM2wDbLke1EtZhF92zvAjUl_Hms1uKDtM7U8Dt4acOaChMnDg1pW7ib8U0wYGDLH", "57WLrIVOfyoRDjt5qD6LGOaDcDCtzQnKDSTUb7GzHyJDtry_nGHHs4-0tW0tiIJr",
"https://files.pythonhosted.org/packages/"+
"3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/",
PythonIniConfig, PythonIniConfig,
PythonPackaging, PythonPackaging,
PythonPluggy, PythonPluggy,
PythonPygments, PythonPygments,
) )
artifactsM[PythonMarkupSafe] = newPypi(
"markupsafe", 3918,
"implements a text object that escapes characters so it is safe to use in HTML and XML",
"3.0.3", "cp314", "cp314", "musllinux_1_2_"+linuxArch(),
perArch[string]{
"amd64": "E2mo9ig_FKgTpGon_8qqviSEULwhnmxTIqd9vfyNxNpK4yofVYM7eLW_VE-LKbtO",
"arm64": "iG_hqsncOs8fA7bCaAg0x9XenXWlo9sqblyPcSG7yA9sfGLvM9KZznCpwWfOCwFC",
"riscv64": "7DI7U0M3jvr7U4uZml25GLw3m3EvMubCtNukZmss1gkVJ_DVkhV5DgX3Wt_sztbv",
}.unwrap(),
)
artifactsM[PythonMako] = newPypi(
"mako", 3915,
"a template library written in Python",
"1.3.11", "py3", "none", "any",
"WJ_hxYI-nNiuDiM6QhfAG84uO5U-M2aneB0JS9AQ2J2Oi6YXAbBxIdOeOEng6CoS",
PythonMarkupSafe,
)
artifactsM[PythonPyYAML] = newPypi(
"pyyaml", 4123,
"a YAML parser and emitter for Python",
"6.0.3", "cp314", "cp314", "musllinux_1_2_"+linuxArch(),
perArch[string]{
"amd64": "4_jhCFpUNtyrFp2HOMqUisR005u90MHId53eS7rkUbcGXkoaJ7JRsY21dREHEfGN",
"arm64": "sQ818ZYSmC7Vj9prIPx3sEYqSDhZlWvLbgHV9w4GjxsfQ63ZSzappctKM7Lb0Whw",
}.unwrap(),
)
} }


@@ -7,9 +7,9 @@ func (t Toolchain) newQEMU() (pkg.Artifact, string) {
version = "10.2.2" version = "10.2.2"
checksum = "uNzRxlrVoLWe-EmZmBp75SezymgE512iE5XN90Bl7wi6CjE_oQGQB-9ocs7E16QG" checksum = "uNzRxlrVoLWe-EmZmBp75SezymgE512iE5XN90Bl7wi6CjE_oQGQB-9ocs7E16QG"
) )
return t.NewPackage("qemu", version, pkg.NewHTTPGetTar( return t.NewPackage("qemu", version, newTar(
nil, "https://download.qemu.org/qemu-"+version+".tar.bz2", "https://download.qemu.org/qemu-"+version+".tar.bz2",
mustDecode(checksum), checksum,
pkg.TarBzip2, pkg.TarBzip2,
), &PackageAttr{ ), &PackageAttr{
Patches: []KV{ Patches: []KV{


@@ -7,9 +7,9 @@ func (t Toolchain) newRdfind() (pkg.Artifact, string) {
version = "1.8.0" version = "1.8.0"
checksum = "PoaeJ2WIG6yyfe5VAYZlOdAQiR3mb3WhAUMj2ziTCx_IIEal4640HMJUb4SzU9U3" checksum = "PoaeJ2WIG6yyfe5VAYZlOdAQiR3mb3WhAUMj2ziTCx_IIEal4640HMJUb4SzU9U3"
) )
return t.NewPackage("rdfind", version, pkg.NewHTTPGetTar( return t.NewPackage("rdfind", version, newTar(
nil, "https://rdfind.pauldreik.se/rdfind-"+version+".tar.gz", "https://rdfind.pauldreik.se/rdfind-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), nil, &MakeHelper{ ), nil, &MakeHelper{
// test suite hard codes /bin/echo // test suite hard codes /bin/echo


@@ -51,7 +51,8 @@ func TestReportSIGSEGV(t *testing.T) {
} }
}() }()
defer r.HandleAccess(&err)() defer r.HandleAccess(&err)()
r.ArtifactOf(unique.Make(pkg.ID{})) for {
r.ArtifactOf(unique.Make(pkg.ID{}))
}
} }
} }


@@ -4,6 +4,7 @@ package rosa
import ( import (
"errors" "errors"
"log" "log"
"path"
"runtime" "runtime"
"slices" "slices"
"strconv" "strconv"
@@ -69,6 +70,18 @@ func triplet() string {
return linuxArch() + "-rosa-linux-musl" return linuxArch() + "-rosa-linux-musl"
} }
// perArch is a value that differs per architecture.
type perArch[T any] map[string]T
// unwrap returns the value for the current architecture.
func (p perArch[T]) unwrap() T {
v, ok := p[runtime.GOARCH]
if !ok {
panic("unsupported target " + runtime.GOARCH)
}
return v
}
const ( const (
// EnvTriplet holds the return value of triplet. // EnvTriplet holds the return value of triplet.
EnvTriplet = "ROSA_TRIPLE" EnvTriplet = "ROSA_TRIPLE"
@@ -315,24 +328,23 @@ mkdir -vp /work/system/bin
name += "-std" name += "-std"
} }
boot := t - 1
musl, compilerRT, runtimes, clang := boot.NewLLVM()
toybox := Toybox toybox := Toybox
if flag&TEarly != 0 { if flag&TEarly != 0 {
toybox = toyboxEarly toybox = toyboxEarly
} }
std := []pkg.Artifact{cureEtc{newIANAEtc()}, musl}
toolchain := []pkg.Artifact{compilerRT, runtimes, clang}
utils := []pkg.Artifact{
boot.Load(Mksh),
boot.Load(toybox),
}
base := Clang
if flag&TNoToolchain != 0 { if flag&TNoToolchain != 0 {
toolchain = nil base = Musl
} }
support = slices.Concat(extra, std, toolchain, utils) support = slices.Concat(extra, (t-1).AppendPresets([]pkg.Artifact{
cureEtc{newIANAEtc()},
},
base,
Mksh,
toybox,
))
env = fixupEnviron(env, []string{ env = fixupEnviron(env, []string{
EnvTriplet + "=" + triplet(), EnvTriplet + "=" + triplet(),
lcMessages, lcMessages,
@@ -408,8 +420,6 @@ const helperInPlace = "\x00"
// Helper is a build system helper for [Toolchain.NewPackage]. // Helper is a build system helper for [Toolchain.NewPackage].
type Helper interface { type Helper interface {
// name returns the value passed to the name argument of [Toolchain.New].
name(name, version string) string
// extra returns helper-specific dependencies. // extra returns helper-specific dependencies.
extra(flag int) P extra(flag int) P
@@ -431,11 +441,6 @@ type Helper interface {
script(name string) string script(name string) string
} }
const (
// SourceKindTarXZ denotes a source tarball to be decompressed using [XZ].
SourceKindTarXZ = 1 + iota
)
// PackageAttr holds build-system-agnostic attributes. // PackageAttr holds build-system-agnostic attributes.
type PackageAttr struct { type PackageAttr struct {
// Mount the source tree writable. // Mount the source tree writable.
@@ -452,8 +457,6 @@ type PackageAttr struct {
// Passed to [Toolchain.NewPatchedSource]. // Passed to [Toolchain.NewPatchedSource].
Patches []KV Patches []KV
// Kind of source artifact.
SourceKind int
// Dependencies not provided by stage0. // Dependencies not provided by stage0.
NonStage0 []pkg.Artifact NonStage0 []pkg.Artifact
@@ -525,11 +528,6 @@ func (t Toolchain) NewPackage(
panic("source must be non-nil") panic("source must be non-nil")
} }
wantsChmod, wantsWrite := helper.wantsChmod(), helper.wantsWrite() wantsChmod, wantsWrite := helper.wantsChmod(), helper.wantsWrite()
if attr.SourceKind > 0 &&
(attr.Writable || attr.Chmod || wantsChmod || wantsWrite || len(attr.Patches) > 0) {
panic("source processing requested on a non-unpacked kind")
}
dc := len(attr.NonStage0) dc := len(attr.NonStage0)
if !t.isStage0() { if !t.isStage0() {
dc += 1<<3 + len(extra) dc += 1<<3 + len(extra)
@@ -548,17 +546,23 @@ func (t Toolchain) NewPackage(
paPut(pv) paPut(pv)
} }
var scriptEarly string var (
scriptEarly string
sourceSuffix string
)
if _, ok := source.(pkg.FileArtifact); ok {
if attr.Writable || attr.Chmod ||
wantsChmod || wantsWrite ||
len(attr.Patches) > 0 {
panic("source processing requested on a xz-compressed tarball")
}
var sourceSuffix string
switch attr.SourceKind {
case SourceKindTarXZ:
sourceSuffix = ".tar.xz" sourceSuffix = ".tar.xz"
scriptEarly += ` scriptEarly += `
tar -C /usr/src/ -xf '/usr/src/` + name + `.tar.xz' tar -C /usr/src/ -xf '/usr/src/` + name + `.tar.xz'
mv '/usr/src/` + name + `-` + version + `' '/usr/src/` + name + `' mv '/usr/src/` + name + `-` + version + `' '/usr/src/` + name + `'
` `
break
} }
dir := helper.wantsDir() dir := helper.wantsDir()
@@ -582,7 +586,7 @@ cd '/usr/src/` + name + `/'
} }
return t.New( return t.New(
helper.name(name, version), name+"-"+version,
attr.Flag, attr.Flag,
extraRes, extraRes,
nil, nil,
@@ -597,3 +601,65 @@ cd '/usr/src/` + name + `/'
})..., })...,
) )
} }
const (
// jobsE is the expression for the preferred job count set by [pkg].
jobsE = `"$` + pkg.EnvJobs + `"`
// jobsFlagE is the flag expression carrying the preferred job count.
jobsFlagE = `"-j$` + pkg.EnvJobs + `"`
// jobsLE is the expression for twice the preferred job count set by [pkg].
jobsLE = `"$(expr ` + jobsE + ` '*' 2)"`
// jobsLFlagE is the flag expression carrying twice the preferred job count.
jobsLFlagE = `"-j$(expr ` + jobsE + ` '*' 2)"`
)
// newTar wraps [pkg.NewHTTPGetTar] with a simpler function signature.
func newTar(url, checksum string, compression uint32) pkg.Artifact {
return pkg.NewHTTPGetTar(nil, url, mustDecode(checksum), compression)
}
// newFromCPAN is a helper for downloading a release tarball from CPAN.
func newFromCPAN(author, name, version, checksum string) pkg.Artifact {
return newTar(
"https://cpan.metacpan.org/authors/id/"+
author[:1]+"/"+author[:2]+"/"+author+"/"+
name+"-"+version+".tar.gz",
checksum,
pkg.TarGzip,
)
}
// newFromGitLab is a helper for downloading a source archive from a GitLab instance.
func newFromGitLab(domain, suffix, ref, checksum string) pkg.Artifact {
return newTar(
"https://"+domain+"/"+suffix+"/-/archive/"+
ref+"/"+path.Base(suffix)+"-"+
strings.ReplaceAll(ref, "/", "-")+".tar.bz2",
checksum,
pkg.TarBzip2,
)
}
// newFromGitHub is a helper for downloading a source archive from Microsoft GitHub.
func newFromGitHub(suffix, tag, checksum string) pkg.Artifact {
return newTar(
"https://github.com/"+suffix+
"/archive/refs/tags/"+tag+".tar.gz",
checksum,
pkg.TarGzip,
)
}
// newFromGitHubRelease is a helper for downloading a release tarball from
// Microsoft GitHub.
func newFromGitHubRelease(
suffix, tag, name, checksum string,
compression uint32,
) pkg.Artifact {
return newTar(
"https://github.com/"+suffix+
"/releases/download/"+tag+"/"+name,
checksum,
compression,
)
}
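The three mirror helpers above reduce to pure URL construction. The cpanURL/gitlabURL/githubURL functions below are hypothetical stand-ins reproducing only the string assembly (none of the pkg.Artifact plumbing), checked against URLs visible elsewhere in this diff:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// cpanURL mirrors newFromCPAN: CPAN nests releases under the first one
// and two letters of the author ID.
func cpanURL(author, name, version string) string {
	return "https://cpan.metacpan.org/authors/id/" +
		author[:1] + "/" + author[:2] + "/" + author + "/" +
		name + "-" + version + ".tar.gz"
}

// gitlabURL mirrors newFromGitLab: slashes in the ref are flattened to
// dashes in the archive filename.
func gitlabURL(domain, suffix, ref string) string {
	return "https://" + domain + "/" + suffix + "/-/archive/" +
		ref + "/" + path.Base(suffix) + "-" +
		strings.ReplaceAll(ref, "/", "-") + ".tar.bz2"
}

// githubURL mirrors newFromGitHub: the tag archive endpoint.
func githubURL(suffix, tag string) string {
	return "https://github.com/" + suffix +
		"/archive/refs/tags/" + tag + ".tar.gz"
}

func main() {
	// reproduces the Text::WrapI18N URL this diff replaces
	fmt.Println(cpanURL("KUBOTA", "Text-WrapI18N", "0.06"))
	// reproduces the wayland URL this diff replaces
	fmt.Println(gitlabURL("gitlab.freedesktop.org", "wayland/wayland", "1.25.0"))
	// reproduces the setuptools URL this diff replaces
	fmt.Println(githubURL("pypa/setuptools", "v82.0.1"))
}
```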


@@ -60,7 +60,7 @@ func getCache(t *testing.T) *pkg.Cache {
msg := message.New(log.New(os.Stderr, "rosa: ", 0)) msg := message.New(log.New(os.Stderr, "rosa: ", 0))
msg.SwapVerbose(true) msg.SwapVerbose(true)
if buildTestCache, err = pkg.Open(ctx, msg, 0, 0, a); err != nil { if buildTestCache, err = pkg.Open(ctx, msg, 0, 0, 0, a); err != nil {
t.Fatal(err) t.Fatal(err)
} }
} }
@@ -94,7 +94,7 @@ func TestCureAll(t *testing.T) {
func BenchmarkStage3(b *testing.B) { func BenchmarkStage3(b *testing.B) {
for b.Loop() { for b.Loop() {
rosa.Std.Load(rosa.LLVMClang) rosa.Std.Load(rosa.Clang)
b.StopTimer() b.StopTimer()
rosa.DropCaches() rosa.DropCaches()


@@ -7,10 +7,10 @@ func (t Toolchain) newRsync() (pkg.Artifact, string) {
version = "3.4.1" version = "3.4.1"
checksum = "VBlTsBWd9z3r2-ex7GkWeWxkUc5OrlgDzikAC0pK7ufTjAJ0MbmC_N04oSVTGPiv" checksum = "VBlTsBWd9z3r2-ex7GkWeWxkUc5OrlgDzikAC0pK7ufTjAJ0MbmC_N04oSVTGPiv"
) )
return t.NewPackage("rsync", version, pkg.NewHTTPGetTar( return t.NewPackage("rsync", version, newTar(
nil, "https://download.samba.org/pub/rsync/src/"+ "https://download.samba.org/pub/rsync/src/"+
"rsync-"+version+".tar.gz", "rsync-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
Flag: TEarly, Flag: TEarly,


@@ -7,10 +7,11 @@ func (t Toolchain) newSquashfsTools() (pkg.Artifact, string) {
version = "4.7.5" version = "4.7.5"
checksum = "rF52wLQP-jeAmcD-48wqJcck8ZWRFwkax3T-7snaRf5EBnCQQh0YypMY9lwcivLz" checksum = "rF52wLQP-jeAmcD-48wqJcck8ZWRFwkax3T-7snaRf5EBnCQQh0YypMY9lwcivLz"
) )
return t.NewPackage("squashfs-tools", version, pkg.NewHTTPGetTar( return t.NewPackage("squashfs-tools", version, newFromGitHubRelease(
nil, "https://github.com/plougher/squashfs-tools/releases/"+ "plougher/squashfs-tools",
"download/"+version+"/squashfs-tools-"+version+".tar.gz", version,
mustDecode(checksum), "squashfs-tools-"+version+".tar.gz",
checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
// uses source tree as scratch space // uses source tree as scratch space


@@ -1,19 +1,17 @@
package rosa package rosa
import ( import (
"runtime"
"sync" "sync"
"hakurei.app/internal/pkg" "hakurei.app/internal/pkg"
) )
func (t Toolchain) newStage0() (pkg.Artifact, string) { func (t Toolchain) newStage0() (pkg.Artifact, string) {
musl, compilerRT, runtimes, clang := t.NewLLVM()
return t.New("rosa-stage0", 0, []pkg.Artifact{ return t.New("rosa-stage0", 0, []pkg.Artifact{
musl, t.Load(Musl),
compilerRT, t.Load(CompilerRT),
runtimes, t.Load(LLVMRuntimes),
clang, t.Load(Clang),
t.Load(Zlib), t.Load(Zlib),
t.Load(Bzip2), t.Load(Bzip2),
@@ -61,23 +59,14 @@ var (
// NewStage0 returns a stage0 distribution created from curing [Stage0]. // NewStage0 returns a stage0 distribution created from curing [Stage0].
func NewStage0() pkg.Artifact { func NewStage0() pkg.Artifact {
stage0Once.Do(func() { stage0Once.Do(func() {
var seed string stage0 = newTar(
switch runtime.GOARCH { "https://hakurei.app/seed/20260210/"+
case "amd64":
seed = "tqM1Li15BJ-uFG8zU-XjgFxoN_kuzh1VxrSDVUVa0vGmo-NeWapSftH739sY8EAg"
case "arm64":
seed = "CJj3ZSnRyLmFHlWIQtTPQD9oikOZY4cD_mI3v_-LIYc2hhg-cq_CZFBLzQBAkFIn"
case "riscv64":
seed = "FcszJjcVWdKAnn-bt8qmUn5GUUTjv_xQjXOWkUpOplRkG3Ckob3StUoAi5KQ5-QF"
default:
panic("unsupported target " + runtime.GOARCH)
}
stage0 = pkg.NewHTTPGetTar(
nil, "https://hakurei.app/seed/20260210/"+
"stage0-"+triplet()+".tar.bz2", "stage0-"+triplet()+".tar.bz2",
mustDecode(seed), perArch[string]{
"amd64": "tqM1Li15BJ-uFG8zU-XjgFxoN_kuzh1VxrSDVUVa0vGmo-NeWapSftH739sY8EAg",
"arm64": "CJj3ZSnRyLmFHlWIQtTPQD9oikOZY4cD_mI3v_-LIYc2hhg-cq_CZFBLzQBAkFIn",
"riscv64": "FcszJjcVWdKAnn-bt8qmUn5GUUTjv_xQjXOWkUpOplRkG3Ckob3StUoAi5KQ5-QF",
}.unwrap(),
pkg.TarBzip2, pkg.TarBzip2,
) )
}) })


@@ -11,8 +11,6 @@ func (t Toolchain) newStrace() (pkg.Artifact, string) {
nil, "https://strace.io/files/"+version+"/strace-"+version+".tar.xz", nil, "https://strace.io/files/"+version+"/strace-"+version+".tar.xz",
mustDecode(checksum), mustDecode(checksum),
), &PackageAttr{ ), &PackageAttr{
SourceKind: SourceKindTarXZ,
// patch not possible with nonzero SourceKind // patch not possible with a compressed tarball source
ScriptEarly: ` ScriptEarly: `
sed -i 's/off64_t/off_t/g' \ sed -i 's/off64_t/off_t/g' \


@@ -31,11 +31,10 @@ rm \
os/root_unix_test.go os/root_unix_test.go
./all.bash ./all.bash
`, pkg.Path(AbsUsrSrc.Append("tamago"), false, pkg.NewHTTPGetTar( `, pkg.Path(AbsUsrSrc.Append("tamago"), false, newFromGitHub(
nil, "https://github.com/usbarmory/tamago-go/archive/refs/tags/"+ "usbarmory/tamago-go",
"tamago-go"+version+".tar.gz", "tamago-go"+version,
mustDecode(checksum), checksum,
pkg.TarGzip,
))), version ))), version
} }
func init() { func init() {


@@ -7,9 +7,9 @@ func (t Toolchain) newToybox(suffix, script string) (pkg.Artifact, string) {
version = "0.8.13" version = "0.8.13"
checksum = "rZ1V1ATDte2WeQZanxLVoiRGdfPXhMlEo5-exX-e-ml8cGn9qOv0ABEUVZpX3wTI" checksum = "rZ1V1ATDte2WeQZanxLVoiRGdfPXhMlEo5-exX-e-ml8cGn9qOv0ABEUVZpX3wTI"
) )
return t.NewPackage("toybox"+suffix, version, pkg.NewHTTPGetTar( return t.NewPackage("toybox"+suffix, version, newTar(
nil, "https://landley.net/toybox/downloads/toybox-"+version+".tar.gz", "https://landley.net/toybox/downloads/toybox-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
// uses source tree as scratch space // uses source tree as scratch space


@@ -22,11 +22,11 @@ make -f unix/Makefile generic1
mkdir -p /work/system/bin/ mkdir -p /work/system/bin/
mv unzip /work/system/bin/ mv unzip /work/system/bin/
`, pkg.Path(AbsUsrSrc.Append("unzip"), true, t.NewPatchedSource( `, pkg.Path(AbsUsrSrc.Append("unzip"), true, t.NewPatchedSource(
"unzip", version, pkg.NewHTTPGetTar( "unzip", version, newTar(
nil, "https://downloads.sourceforge.net/project/infozip/"+ "https://downloads.sourceforge.net/project/infozip/"+
"UnZip%206.x%20%28latest%29/UnZip%20"+version+"/"+ "UnZip%206.x%20%28latest%29/UnZip%20"+version+"/"+
"unzip"+strings.ReplaceAll(version, ".", "")+".tar.gz", "unzip"+strings.ReplaceAll(version, ".", "")+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), false, ), false,
))), version ))), version


@@ -11,11 +11,11 @@ func (t Toolchain) newUtilLinux() (pkg.Artifact, string) {
version = "2.42" version = "2.42"
checksum = "Uy8Nxg9DsW5YwDoeaZeZTyQJ2YmnaaL_fSsQXsLUiFFUd7wnZeD_3SEaVO7ClJlk" checksum = "Uy8Nxg9DsW5YwDoeaZeZTyQJ2YmnaaL_fSsQXsLUiFFUd7wnZeD_3SEaVO7ClJlk"
) )
return t.NewPackage("util-linux", version, pkg.NewHTTPGetTar( return t.NewPackage("util-linux", version, newTar(
nil, "https://www.kernel.org/pub/linux/utils/util-linux/"+ "https://www.kernel.org/pub/linux/utils/util-linux/"+
"v"+strings.Join(strings.SplitN(version, ".", 3)[:2], ".")+ "v"+strings.Join(strings.SplitN(version, ".", 3)[:2], ".")+
"/util-linux-"+version+".tar.gz", "/util-linux-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), &PackageAttr{ ), &PackageAttr{
ScriptEarly: ` ScriptEarly: `


@@ -7,11 +7,11 @@ func (t Toolchain) newWayland() (pkg.Artifact, string) {
version = "1.25.0" version = "1.25.0"
checksum = "q-4dYXme46JPgLGtXAxyZGTy7udll9RfT0VXtcW2YRR1WWViUhvdZXZneXzLqpCg" checksum = "q-4dYXme46JPgLGtXAxyZGTy7udll9RfT0VXtcW2YRR1WWViUhvdZXZneXzLqpCg"
) )
return t.NewPackage("wayland", version, pkg.NewHTTPGetTar( return t.NewPackage("wayland", version, newFromGitLab(
nil, "https://gitlab.freedesktop.org/wayland/wayland/"+ "gitlab.freedesktop.org",
"-/archive/"+version+"/wayland-"+version+".tar.bz2", "wayland/wayland",
mustDecode(checksum), version,
pkg.TarBzip2, checksum,
), &PackageAttr{ ), &PackageAttr{
Writable: true, Writable: true,
ScriptEarly: ` ScriptEarly: `
@@ -57,11 +57,11 @@ func (t Toolchain) newWaylandProtocols() (pkg.Artifact, string) {
version = "1.48" version = "1.48"
checksum = "xvfHCBIzXGwOqOu9b8dfhGw_U29Pd-g4JBwpYIaxee8SwEbxi6NaVU-Y1Q7wY4jK" checksum = "xvfHCBIzXGwOqOu9b8dfhGw_U29Pd-g4JBwpYIaxee8SwEbxi6NaVU-Y1Q7wY4jK"
) )
return t.NewPackage("wayland-protocols", version, pkg.NewHTTPGetTar( return t.NewPackage("wayland-protocols", version, newFromGitLab(
nil, "https://gitlab.freedesktop.org/wayland/wayland-protocols/"+ "gitlab.freedesktop.org",
"-/archive/"+version+"/wayland-protocols-"+version+".tar.bz2", "wayland/wayland-protocols",
mustDecode(checksum), version,
pkg.TarBzip2, checksum,
), &PackageAttr{ ), &PackageAttr{
Patches: []KV{ Patches: []KV{
{"build-only", `From 8b4c76275fa1b6e0a99a53494151d9a2c907144d Mon Sep 17 00:00:00 2001 {"build-only", `From 8b4c76275fa1b6e0a99a53494151d9a2c907144d Mon Sep 17 00:00:00 2001


@@ -7,10 +7,10 @@ func (t Toolchain) newUtilMacros() (pkg.Artifact, string) {
version = "1.20.2" version = "1.20.2"
checksum = "Ze8QH3Z3emC0pWFP-0nUYeMy7aBW3L_dxBBmVgcumIHNzEKc1iGTR-yUFR3JcM1G" checksum = "Ze8QH3Z3emC0pWFP-0nUYeMy7aBW3L_dxBBmVgcumIHNzEKc1iGTR-yUFR3JcM1G"
) )
return t.NewPackage("util-macros", version, pkg.NewHTTPGetTar( return t.NewPackage("util-macros", version, newTar(
nil, "https://www.x.org/releases/individual/util/"+ "https://www.x.org/releases/individual/util/"+
"util-macros-"+version+".tar.gz", "util-macros-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), nil, (*MakeHelper)(nil)), version ), nil, (*MakeHelper)(nil)), version
} }
@@ -31,10 +31,10 @@ func (t Toolchain) newXproto() (pkg.Artifact, string) {
version = "7.0.31" version = "7.0.31"
checksum = "Cm69urWY5RctKpR78eGzuwrjDEfXGkvHRdodj6sjypOGy5FF4-lmnUttVHYV1ydg" checksum = "Cm69urWY5RctKpR78eGzuwrjDEfXGkvHRdodj6sjypOGy5FF4-lmnUttVHYV1ydg"
) )
return t.NewPackage("xproto", version, pkg.NewHTTPGetTar( return t.NewPackage("xproto", version, newTar(
nil, "https://www.x.org/releases/individual/proto/"+ "https://www.x.org/releases/individual/proto/"+
"xproto-"+version+".tar.bz2", "xproto-"+version+".tar.bz2",
mustDecode(checksum), checksum,
pkg.TarBzip2, pkg.TarBzip2,
), nil, &MakeHelper{ ), nil, &MakeHelper{
// ancient configure script // ancient configure script
@@ -63,10 +63,10 @@ func (t Toolchain) newLibXau() (pkg.Artifact, string) {
version = "1.0.12" version = "1.0.12"
checksum = "G9AjnU_C160q814MCdjFOVt_mQz_pIt4wf4GNOQmGJS3UuuyMw53sfPvJ7WOqwXN" checksum = "G9AjnU_C160q814MCdjFOVt_mQz_pIt4wf4GNOQmGJS3UuuyMw53sfPvJ7WOqwXN"
) )
return t.NewPackage("libXau", version, pkg.NewHTTPGetTar( return t.NewPackage("libXau", version, newTar(
nil, "https://www.x.org/releases/individual/lib/"+ "https://www.x.org/releases/individual/lib/"+
"libXau-"+version+".tar.gz", "libXau-"+version+".tar.gz",
mustDecode(checksum), checksum,
pkg.TarGzip, pkg.TarGzip,
), nil, &MakeHelper{ ), nil, &MakeHelper{
// ancient configure script // ancient configure script
@@ -95,3 +95,35 @@ func init() {
 		ID: 1765,
 	}
 }
+
+func (t Toolchain) newLibpciaccess() (pkg.Artifact, string) {
+	const (
+		version = "0.19"
+		checksum = "84H0c_U_7fMqo81bpuwyXGXtk4XvvFH_YK00UiOriv3YGsuAhmbo2IkFhmJnvu2x"
+	)
+	return t.NewPackage("libpciaccess", version, newFromGitLab(
+		"gitlab.freedesktop.org",
+		"xorg/lib/libpciaccess",
+		"libpciaccess-"+version,
+		checksum,
+	), nil, &MesonHelper{
+		Setup: []KV{
+			{"Dzlib", "enabled"},
+		},
+	}), version
+}
+
+func init() {
+	artifactsM[Libpciaccess] = Metadata{
+		f: Toolchain.newLibpciaccess,
+		Name: "libpciaccess",
+		Description: "generic PCI access library",
+		Website: "https://gitlab.freedesktop.org/xorg/lib/libpciaccess",
+		Dependencies: P{
+			Zlib,
+		},
+		ID: 1703,
+	}
+}
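The MesonHelper Setup pairs such as {"Dzlib", "enabled"} presumably end up as -D options on the meson setup command line. A toy sketch under that assumption (the KV type here mirrors the diff, but the flag rendering is illustrative, not the project's actual code):

```go
package main

import "fmt"

// KV mirrors the key/value pairs passed to MesonHelper.Setup in the diff.
type KV struct{ K, V string }

// mesonSetupArgs renders Setup pairs as meson setup flags, assuming
// {"Dzlib", "enabled"} is meant to become -Dzlib=enabled. Hypothetical:
// the real helper's rendering is not shown in this diff.
func mesonSetupArgs(setup []KV) []string {
	args := []string{"setup"}
	for _, kv := range setup {
		args = append(args, "-"+kv.K+"="+kv.V)
	}
	return args
}

func main() {
	fmt.Println(mesonSetupArgs([]KV{{"Dzlib", "enabled"}})) // [setup -Dzlib=enabled]
}
```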


@@ -7,9 +7,9 @@ func (t Toolchain) newXCBProto() (pkg.Artifact, string) {
 		version = "1.17.0"
 		checksum = "_NtbKaJ_iyT7XiJz25mXQ7y-niTzE8sHPvLXZPcqtNoV_-vTzqkezJ8Hp2U1enCv"
 	)
-	return t.NewPackage("xcb-proto", version, pkg.NewHTTPGetTar(
-		nil, "https://xcb.freedesktop.org/dist/xcb-proto-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("xcb-proto", version, newTar(
+		"https://xcb.freedesktop.org/dist/xcb-proto-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 	Python,
@@ -32,9 +32,9 @@ func (t Toolchain) newXCB() (pkg.Artifact, string) {
 		version = "1.17.0"
 		checksum = "hjjsc79LpWM_hZjNWbDDS6qRQUXREjjekS6UbUsDq-RR1_AjgNDxhRvZf-1_kzDd"
 	)
-	return t.NewPackage("xcb", version, pkg.NewHTTPGetTar(
-		nil, "https://xcb.freedesktop.org/dist/libxcb-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("xcb", version, newTar(
+		"https://xcb.freedesktop.org/dist/libxcb-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, (*MakeHelper)(nil),
 	Python,


@@ -7,10 +7,11 @@ func (t Toolchain) newXZ() (pkg.Artifact, string) {
 		version = "5.8.3"
 		checksum = "nCdayphPGdIdVoAZ2hR4vYlhDG9LeVKho_i7ealTud4Vxy5o5dWe0VwFlN7utuUL"
 	)
-	return t.NewPackage("xz", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/tukaani-project/xz/releases/download/"+
-			"v"+version+"/xz-"+version+".tar.bz2",
-		mustDecode(checksum),
+	return t.NewPackage("xz", version, newFromGitHubRelease(
+		"tukaani-project/xz",
+		"v"+version,
+		"xz-"+version+".tar.bz2",
+		checksum,
 		pkg.TarBzip2,
 	), nil, (*MakeHelper)(nil),
 	Diffutils,


@@ -7,9 +7,9 @@ func (t Toolchain) newZlib() (pkg.Artifact, string) {
 		version = "1.3.2"
 		checksum = "KHZrePe42vL2XvOUE3KlJkp1UgWhWkl0jjT_BOvFhuM4GzieEH9S7CioepOFVGYB"
 	)
-	return t.NewPackage("zlib", version, pkg.NewHTTPGetTar(
-		nil, "https://www.zlib.net/fossils/zlib-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("zlib", version, newTar(
+		"https://www.zlib.net/fossils/zlib-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &CMakeHelper{
 		Cache: []KV{


@@ -7,10 +7,11 @@ func (t Toolchain) newZstd() (pkg.Artifact, string) {
 		version = "1.5.7"
 		checksum = "4XhfR7DwVkwo1R-TmYDAJOcx9YXv9WSFhcFUe3hWEAMmdMLPhFaznCqYIA19_xxV"
 	)
-	return t.NewPackage("zstd", version, pkg.NewHTTPGetTar(
-		nil, "https://github.com/facebook/zstd/releases/download/"+
-			"v"+version+"/zstd-"+version+".tar.gz",
-		mustDecode(checksum),
+	return t.NewPackage("zstd", version, newFromGitHubRelease(
+		"facebook/zstd",
+		"v"+version,
+		"zstd-"+version+".tar.gz",
+		checksum,
 		pkg.TarGzip,
 	), nil, &CMakeHelper{
 		Append: []string{"build", "cmake"},
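newFromGitHubRelease replaces a hand-built download URL in the xz and zstd packages; judging from the old call sites, the owner/repo pair, tag, and asset name recombine into the standard releases/download URL. A self-contained sketch of that URL construction (the function name and argument order are assumptions inferred from the diff):

```go
package main

import "fmt"

// githubReleaseURL rebuilds the release-asset URL that the old call sites
// spelled out by hand, from an owner/repo pair, a tag, and an asset name.
// Hypothetical: this mimics what newFromGitHubRelease presumably does internally.
func githubReleaseURL(repo, tag, asset string) string {
	return "https://github.com/" + repo + "/releases/download/" + tag + "/" + asset
}

func main() {
	// Matches the URL removed from newZstd above.
	fmt.Println(githubReleaseURL("facebook/zstd", "v1.5.7", "zstd-1.5.7.tar.gz"))
	// https://github.com/facebook/zstd/releases/download/v1.5.7/zstd-1.5.7.tar.gz
}
```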


@@ -136,11 +136,12 @@ in
         conf = {
           inherit id;
-          inherit (app) identity groups enablements;
+          inherit (app) identity enablements;
           inherit (dbusConfig) session_bus system_bus;
           direct_wayland = app.insecureWayland;
           sched_policy = app.schedPolicy;
           sched_priority = app.schedPriority;
+          groups = app.groups ++ optional (cfg.sharefs.source != null) cfg.sharefs.group;

           container = {
             inherit (app)
@@ -357,29 +358,30 @@ in
         users = mkMerge (
           foldlAttrs
             (
-              acc: _: fid:
+              acc: username: fid:
                 acc
-                ++ foldlAttrs (
-                  acc': _: app:
-                  acc' ++ [ { ${getsubname fid app.identity} = getuser fid app.identity; } ]
-                ) [ { ${getsubname fid 0} = getuser fid 0; } ] cfg.apps
-            )
-            (
-              if (cfg.sharefs.source != null) then
-                [
-                  {
-                    ${cfg.sharefs.user} = {
-                      uid = lib.mkDefault 1023;
-                      inherit (cfg.sharefs) group;
-                      isSystemUser = true;
-                      home = cfg.sharefs.source;
-                    };
-                  }
-                ]
-              else
-                [ ]
+                ++
+                  foldlAttrs
+                    (
+                      acc': _: app:
+                      acc' ++ [ { ${getsubname fid app.identity} = getuser fid app.identity; } ]
+                    )
+                    [
+                      {
+                        ${getsubname fid 0} = getuser fid 0;
+                        ${username}.extraGroups = [ cfg.sharefs.group ];
+                      }
+                    ]
+                    cfg.apps
             )
+            (optional (cfg.sharefs.source != null) {
+              ${cfg.sharefs.user} = {
+                uid = lib.mkDefault 1023;
+                inherit (cfg.sharefs) group;
+                isSystemUser = true;
+                home = cfg.sharefs.source;
+              };
+            })
             cfg.users
         );
@@ -393,18 +395,11 @@ in
               acc' ++ [ { ${getsubname fid app.identity} = getgroup fid app.identity; } ]
             ) [ { ${getsubname fid 0} = getgroup fid 0; } ] cfg.apps
           )
-          (
-            if (cfg.sharefs.source != null) then
-              [
-                {
-                  ${cfg.sharefs.group} = {
-                    gid = lib.mkDefault 1023;
-                  };
-                }
-              ]
-            else
-              [ ]
-          )
+          (optional (cfg.sharefs.source != null) {
+            ${cfg.sharefs.group} = {
+              gid = lib.mkDefault 1023;
+            };
+          })
           cfg.users
         );
       };


@@ -8,10 +8,7 @@
         description = "Alice Foobar";
         password = "foobar";
         uid = 1000;
-        extraGroups = [
-          "wheel"
-          "sharefs"
-        ];
+        extraGroups = [ "wheel" ];
       };
       untrusted = {
         isNormalUser = true;