Compare commits

...

25 Commits

Author SHA1 Message Date
7638a44fa6
treewide: parallel tests
All checks were successful
Test / Create distribution (push) Successful in 25s
Test / Hakurei (push) Successful in 44s
Test / Sandbox (push) Successful in 41s
Test / Hakurei (race detector) (push) Successful in 44s
Test / Sandbox (race detector) (push) Successful in 41s
Test / Hpkg (push) Successful in 41s
Test / Flake checks (push) Successful in 1m24s
Most tests already had no global state; however, parallel execution was never enabled. This change enables it for all applicable tests.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-13 04:38:48 +09:00
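For context, enabling parallel execution in Go table-driven tests follows the pattern below; this is a minimal sketch using the standard testing package, not code taken from the repository:

func TestExample(t *testing.T) {
	t.Parallel() // run this test in parallel with other top-level parallel tests

	testCases := []struct{ name string }{{name: "case"}}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // parallel subtests also run concurrently with each other
			_ = tc       // test body; requires no shared global state
		})
	}
}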
a14b6535a6
helper/stub: write ready byte late
All checks were successful
Test / Create distribution (push) Successful in 27s
Test / Sandbox (race detector) (push) Successful in 41s
Test / Sandbox (push) Successful in 41s
Test / Hakurei (push) Successful in 44s
Test / Hakurei (race detector) (push) Successful in 44s
Test / Hpkg (push) Successful in 42s
Test / Flake checks (push) Successful in 1m30s
Hopefully eliminates spurious failures.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-13 01:55:44 +09:00
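The underlying idea is to delay the readiness signal until all setup has completed; a generic sketch of that ordering follows, and it is not the actual helper/stub API:

// signalReady is a hypothetical illustration: finish setup first, then write
// the single ready byte, so a waiting reader can never observe "ready" while
// state is still being mutated.
func signalReady(ready io.Writer, setup func() error) error {
	if err := setup(); err != nil {
		return err
	}
	_, err := ready.Write([]byte{0}) // ready byte written last
	return err
}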
763ab27e09
system: remove tmpfiles
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Sandbox (push) Successful in 2m10s
Test / Hakurei (push) Successful in 3m9s
Test / Hpkg (push) Successful in 4m2s
Test / Sandbox (race detector) (push) Successful in 4m33s
Test / Hakurei (race detector) (push) Successful in 5m21s
Test / Flake checks (push) Successful in 1m32s
This is no longer used.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-13 01:12:44 +09:00
bff2a1e748
container/initplace: remove indirect method
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Hpkg (push) Successful in 4m2s
Test / Sandbox (race detector) (push) Successful in 4m30s
Test / Sandbox (push) Successful in 1m24s
Test / Hakurei (race detector) (push) Successful in 5m20s
Test / Hakurei (push) Successful in 2m13s
Test / Flake checks (push) Successful in 1m29s
This is no longer useful and is highly error-prone.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-13 01:06:45 +09:00
8a91234cb4
hst: reword and improve doc comments
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Sandbox (push) Successful in 2m9s
Test / Hpkg (push) Successful in 3m58s
Test / Sandbox (race detector) (push) Successful in 4m31s
Test / Hakurei (race detector) (push) Successful in 5m19s
Test / Hakurei (push) Successful in 2m12s
Test / Flake checks (push) Successful in 1m31s
This corrects minor mistakes in doc comments and adds them for undocumented constants.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-12 05:03:14 +09:00
db7051a368
internal/app/spcontainer: check fs init behaviour
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Hakurei (push) Successful in 3m8s
Test / Hpkg (push) Successful in 3m53s
Test / Sandbox (race detector) (push) Successful in 4m34s
Test / Sandbox (push) Successful in 1m21s
Test / Hakurei (race detector) (push) Successful in 5m22s
Test / Flake checks (push) Successful in 1m34s
This covers every statement. Some of them are unreachable unless the kernel returns garbage.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-12 03:58:53 +09:00
36f312b3ba
internal/app/spcontainer: resolve path through dispatcher
All checks were successful
Test / Create distribution (push) Successful in 36s
Test / Sandbox (push) Successful in 2m13s
Test / Hpkg (push) Successful in 4m2s
Test / Hakurei (race detector) (push) Successful in 5m23s
Test / Hakurei (push) Successful in 2m14s
Test / Sandbox (race detector) (push) Successful in 2m7s
Test / Flake checks (push) Successful in 1m32s
This prevents OS state from tainting the test data.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 20:20:41 +09:00
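The pattern applied here is routing host path resolution through the stubbed dispatcher so tests control the result; the sketch below illustrates that idea under stated assumptions (interface and method names are placeholders, not the spcontainer API):

// pathResolver is a hypothetical stand-in for the syscall dispatcher surface.
type pathResolver interface {
	evalSymlinks(path string) (string, error)
}

type osResolver struct{}

// evalSymlinks defers to the real filesystem; tests substitute a stub that
// returns fixed paths, keeping host state out of the test data.
func (osResolver) evalSymlinks(path string) (string, error) {
	return filepath.EvalSymlinks(path)
}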
037144b06e
system/dbus: use well-known address in spec
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Sandbox (push) Successful in 2m13s
Test / Hpkg (push) Successful in 4m8s
Test / Hakurei (race detector) (push) Successful in 5m26s
Test / Hakurei (push) Successful in 2m14s
Test / Sandbox (race detector) (push) Successful in 2m4s
Test / Flake checks (push) Successful in 1m32s
The session bus still uses non-standard address formatting, since it makes no sense for hakurei to start the session bus.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 18:52:06 +09:00
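For reference, the D-Bus specification defines a well-known address only for the system bus; a sketch of the spec-side constant might look as follows (the identifier name is an assumption, the address value comes from the specification):

// Well-known system bus address defined by the D-Bus specification.
const systemBusAddress = "unix:path=/var/run/dbus/system_bus_socket"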
f5a597c406
hst: rename /.hakurei constant
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Sandbox (push) Successful in 2m13s
Test / Hakurei (push) Successful in 3m3s
Test / Hpkg (push) Successful in 3m57s
Test / Sandbox (race detector) (push) Successful in 4m30s
Test / Hakurei (race detector) (push) Successful in 5m16s
Test / Flake checks (push) Successful in 1m20s
This provides disambiguation from fhs.AbsTmp.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 14:32:35 +09:00
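Judging from the diffs further down, hst.Tmp and hst.AbsTmp become hst.PrivateTmp and hst.AbsPrivateTmp; a sketch of the renamed pair, with the /.hakurei value assumed from the commit title and usage elsewhere in this comparison:

// PrivateTmp was previously named Tmp; the new name avoids confusion with fhs.AbsTmp.
const PrivateTmp = "/.hakurei"

// AbsPrivateTmp was previously named AbsTmp.
var AbsPrivateTmp = check.MustAbs(PrivateTmp)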
8874aaf81b
hst: remove template bind nix store
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Sandbox (push) Successful in 2m20s
Test / Hakurei (push) Successful in 3m5s
Test / Hpkg (push) Successful in 3m59s
Test / Sandbox (race detector) (push) Successful in 4m35s
Test / Hakurei (race detector) (push) Successful in 5m25s
Test / Flake checks (push) Successful in 1m28s
This does not add anything meaningful to the template, since prior examples already show src-only bind ops. Remove it, as it causes confusion by covering the preceding mount point targeting /nix/store.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 13:59:10 +09:00
04a27c8e47
hst: use plausible overlay template
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Sandbox (push) Successful in 2m11s
Test / Hakurei (push) Successful in 3m6s
Test / Hpkg (push) Successful in 3m57s
Test / Hakurei (race detector) (push) Successful in 5m19s
Test / Sandbox (race detector) (push) Successful in 2m7s
Test / Flake checks (push) Successful in 1m39s
The current value is copied from a test case, and does not resemble its intended use case.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 13:51:08 +09:00
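The reworked template entry can be reassembled from the updated test data further down; in JSON form it reads roughly as follows (paths taken verbatim from those diffs):

{
  "type": "overlay",
  "dst": "/nix/store",
  "lower": ["/var/lib/hakurei/base/org.nixos/ro-store"],
  "upper": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper",
  "work": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work"
}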
9e3df0905b
internal/app/spcontainer: check params init behaviour
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Hakurei (push) Successful in 3m6s
Test / Hpkg (push) Successful in 4m4s
Test / Sandbox (push) Successful in 1m21s
Test / Hakurei (race detector) (push) Successful in 5m23s
Test / Sandbox (race detector) (push) Successful in 2m8s
Test / Flake checks (push) Successful in 1m31s
This change also significantly reduces duplicate information in the test cases.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 02:44:02 +09:00
9290748761
internal/app/spaccount: check behaviour
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Hakurei (push) Successful in 3m7s
Test / Hpkg (push) Successful in 4m2s
Test / Sandbox (race detector) (push) Successful in 4m27s
Test / Hakurei (race detector) (push) Successful in 5m19s
Test / Sandbox (push) Successful in 1m18s
Test / Flake checks (push) Successful in 1m30s
This begins the effort of fully covering internal/app.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 00:54:04 +09:00
23084888a0
internal/app/spaccount: apply default in shim
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m19s
Test / Hpkg (push) Successful in 4m6s
Test / Hakurei (race detector) (push) Successful in 5m20s
Test / Sandbox (race detector) (push) Successful in 2m10s
Test / Hakurei (push) Successful in 2m13s
Test / Flake checks (push) Successful in 1m37s
The original code clobbered hst.Config and was not changed when it was ported over.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-11 00:38:06 +09:00
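The fix follows a general rule: derive the effective value inside the shim instead of writing the default back into the shared hst.Config. A generic sketch of that approach (the field and default value below are hypothetical, not the spaccount code):

// shellOrDefault returns the effective value without mutating the caller's config.
func shellOrDefault(c *hst.Config) string {
	if c.Shell != "" { // hypothetical field
		return c.Shell
	}
	return "/bin/sh" // hypothetical default, applied locally only
}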
50f6fcb326
container/stub: mark test overrides as helper
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Hpkg (push) Successful in 4m2s
Test / Sandbox (race detector) (push) Successful in 4m35s
Test / Sandbox (push) Successful in 1m24s
Test / Hakurei (race detector) (push) Successful in 5m23s
Test / Hakurei (push) Successful in 2m16s
Test / Flake checks (push) Successful in 1m21s
This fixes line information in test reporting messages.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 22:15:20 +09:00
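Marking a test override as a helper is the standard Go mechanism for keeping failure locations pointed at the caller; a minimal sketch, not the container/stub code:

// failf is a hypothetical helper: because of t.Helper, a failure reported here
// is attributed to the line in the test that called failf, not to this function.
func failf(t *testing.T, format string, v ...any) {
	t.Helper()
	t.Errorf(format, v...)
}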
070e346587
internal/app: relocate params state initialisation
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m13s
Test / Hakurei (push) Successful in 3m5s
Test / Hpkg (push) Successful in 4m9s
Test / Hakurei (race detector) (push) Successful in 5m18s
Test / Sandbox (race detector) (push) Successful in 2m9s
Test / Flake checks (push) Successful in 1m40s
This is useful for testing.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 22:00:49 +09:00
24de7c50a0
internal/app: relocate state initialisation
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Sandbox (push) Successful in 2m9s
Test / Hakurei (push) Successful in 3m4s
Test / Hpkg (push) Successful in 4m4s
Test / Sandbox (race detector) (push) Successful in 4m37s
Test / Hakurei (race detector) (push) Successful in 5m18s
Test / Flake checks (push) Successful in 1m28s
This is useful for testing.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 20:15:58 +09:00
f6dd9dab6a
internal/app: hold path hiding in op
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m20s
Test / Hakurei (push) Successful in 3m8s
Test / Hpkg (push) Successful in 4m12s
Test / Sandbox (race detector) (push) Successful in 4m37s
Test / Hakurei (race detector) (push) Successful in 5m21s
Test / Flake checks (push) Successful in 1m34s
It makes no sense for this to be part of the global state.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 19:56:30 +09:00
776650af01
hst/config: negative WaitDelay bypasses default
All checks were successful
Test / Create distribution (push) Successful in 34s
Test / Sandbox (push) Successful in 2m19s
Test / Hpkg (push) Successful in 4m4s
Test / Sandbox (race detector) (push) Successful in 4m44s
Test / Hakurei (race detector) (push) Successful in 5m25s
Test / Hakurei (push) Successful in 2m16s
Test / Flake checks (push) Successful in 1m30s
This behaviour might be useful, so do not lock it out. This change also fixes an oversight where the unchecked value is used to determine ForwardCancel.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 05:11:32 +09:00
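The described semantics amount to a three-way switch on the configured value; the sketch below is an illustrative reading, assuming a WaitDelay field of type time.Duration and a package default whose name is made up here:

// effectiveWaitDelay illustrates the documented behaviour; it is not the hst code.
func effectiveWaitDelay(c *hst.Config) time.Duration {
	switch {
	case c.WaitDelay < 0:
		return 0 // negative: bypass the default entirely (here read as "no delay")
	case c.WaitDelay == 0:
		return defaultWaitDelay // zero: fall back to the default (hypothetical constant)
	default:
		return c.WaitDelay
	}
}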
109aaee659
internal/app: copy parts of config to state
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m12s
Test / Hakurei (push) Successful in 3m7s
Test / Hpkg (push) Successful in 4m4s
Test / Sandbox (race detector) (push) Successful in 4m30s
Test / Hakurei (race detector) (push) Successful in 5m20s
Test / Flake checks (push) Successful in 1m34s
This is less error-prone than passing the address of the entire hst.Config struct, and reduces the likelihood of accidentally clobbering hst.Config. This also makes testing easier.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 03:19:09 +09:00
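A generic sketch of the approach: the state carries copies of the few fields it needs rather than a pointer to the whole hst.Config (struct and field names below are placeholders, not the internal/app types):

// copiedState illustrates the idea only.
type copiedState struct {
	identity  int           // copied from the config at creation
	waitDelay time.Duration // copied, so later writes cannot clobber hst.Config
}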
22ee5ae151
internal/app: filter ops in implementation
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m18s
Test / Hpkg (push) Successful in 4m1s
Test / Sandbox (race detector) (push) Successful in 4m28s
Test / Hakurei (race detector) (push) Successful in 5m19s
Test / Hakurei (push) Successful in 2m14s
Test / Flake checks (push) Successful in 1m33s
This is cleaner and less error-prone, and should also result in negligibly less memory allocation.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 02:23:34 +09:00
4246256d78
internal/app: hold config address in state
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m13s
Test / Hakurei (push) Successful in 3m6s
Test / Hpkg (push) Successful in 4m9s
Test / Sandbox (race detector) (push) Successful in 4m32s
Test / Hakurei (race detector) (push) Successful in 5m22s
Test / Flake checks (push) Successful in 1m34s
This can be removed eventually as it is barely used.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-10 01:21:01 +09:00
a941ac025f
container/init: unwrap descriptive fatal error
All checks were successful
Test / Create distribution (push) Successful in 33s
Test / Sandbox (push) Successful in 2m12s
Test / Hakurei (push) Successful in 3m6s
Test / Hpkg (push) Successful in 4m0s
Test / Hakurei (race detector) (push) Successful in 5m20s
Test / Sandbox (race detector) (push) Successful in 2m3s
Test / Flake checks (push) Successful in 1m27s
These errors are printed with a descriptive message prefixed to them, so it is more readable to expose the underlying errno.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-09 22:04:35 +09:00
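The helper added for this, optionalErrorUnwrap, appears in the container error diff further down; a brief usage sketch of the intended effect (the format string mirrors the init diff, the before/after pairing is illustrative):

// Both lines carry the same descriptive prefix; the second drops the redundant
// wrapper text and prints only the underlying errno when one is present.
log.Printf("cannot make / rslave: %v", err)
log.Printf("cannot make / rslave: %v", optionalErrorUnwrap(err))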
87b5c30ef6
message: relocate from container
All checks were successful
Test / Create distribution (push) Successful in 35s
Test / Sandbox (push) Successful in 2m22s
Test / Hpkg (push) Successful in 4m2s
Test / Sandbox (race detector) (push) Successful in 4m28s
Test / Hakurei (race detector) (push) Successful in 5m21s
Test / Hakurei (push) Successful in 2m9s
Test / Flake checks (push) Successful in 1m29s
This package is quite useful. This change allows it to be imported without importing container.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-09 05:18:19 +09:00
df9b77b077
internal/app: do not encode config early
All checks were successful
Test / Create distribution (push) Successful in 36s
Test / Sandbox (push) Successful in 2m11s
Test / Hpkg (push) Successful in 4m10s
Test / Sandbox (race detector) (push) Successful in 4m40s
Test / Hakurei (race detector) (push) Successful in 5m21s
Test / Hakurei (push) Successful in 2m18s
Test / Flake checks (push) Successful in 1m32s
Finalise no longer clobbers hst.Config.

Signed-off-by: Ophestra <cat@gensokyo.uk>
2025-10-09 04:38:54 +09:00
138 changed files with 1975 additions and 1089 deletions

View File

@ -3,7 +3,6 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log"
@ -13,19 +12,23 @@ import (
"strconv"
"sync"
"time"
_ "unsafe"
"hakurei.app/command"
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/hst"
"hakurei.app/internal"
"hakurei.app/internal/app"
"hakurei.app/internal/app/state"
"hakurei.app/message"
"hakurei.app/system/dbus"
)
func buildCommand(ctx context.Context, msg container.Msg, early *earlyHardeningErrs, out io.Writer) command.Command {
//go:linkname optionalErrorUnwrap hakurei.app/container.optionalErrorUnwrap
func optionalErrorUnwrap(_ error) error
func buildCommand(ctx context.Context, msg message.Msg, early *earlyHardeningErrs, out io.Writer) command.Command {
var (
flagVerbose bool
flagJSON bool
@ -115,7 +118,7 @@ func buildCommand(ctx context.Context, msg container.Msg, early *earlyHardeningE
progPath := shell
if len(args) > 0 {
if p, err := exec.LookPath(args[0]); err != nil {
log.Fatal(errors.Unwrap(err))
log.Fatal(optionalErrorUnwrap(err))
return err
} else if progPath, err = check.NewAbs(p); err != nil {
log.Fatal(err.Error())

View File

@ -7,10 +7,12 @@ import (
"testing"
"hakurei.app/command"
"hakurei.app/container"
"hakurei.app/message"
)
func TestHelp(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
args []string
@ -68,8 +70,10 @@ Flags:
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
out := new(bytes.Buffer)
c := buildCommand(t.Context(), container.NewMsg(nil), new(earlyHardeningErrs), out)
c := buildCommand(t.Context(), message.NewMsg(nil), new(earlyHardeningErrs), out)
if err := c.Parse(tc.args); !errors.Is(err, command.ErrHelp) && !errors.Is(err, flag.ErrHelp) {
t.Errorf("Parse: error = %v; want %v",
err, command.ErrHelp)

View File

@ -13,6 +13,7 @@ import (
"syscall"
"hakurei.app/container"
"hakurei.app/message"
)
var (
@ -31,7 +32,7 @@ func main() {
log.SetPrefix("hakurei: ")
log.SetFlags(0)
msg := container.NewMsg(log.Default())
msg := message.NewMsg(log.Default())
early := earlyHardeningErrs{
yamaLSM: container.SetPtracer(0),

View File

@ -10,13 +10,13 @@ import (
"strings"
"syscall"
"hakurei.app/container"
"hakurei.app/hst"
"hakurei.app/internal/app"
"hakurei.app/internal/app/state"
"hakurei.app/message"
)
func tryPath(msg container.Msg, name string) (config *hst.Config) {
func tryPath(msg message.Msg, name string) (config *hst.Config) {
var r io.Reader
config = new(hst.Config)
@ -49,7 +49,7 @@ func tryPath(msg container.Msg, name string) (config *hst.Config) {
return
}
func tryFd(msg container.Msg, name string) io.ReadCloser {
func tryFd(msg message.Msg, name string) io.ReadCloser {
if v, err := strconv.Atoi(name); err != nil {
if !errors.Is(err, strconv.ErrSyntax) {
msg.Verbosef("name cannot be interpreted as int64: %v", err)
@ -68,7 +68,7 @@ func tryFd(msg container.Msg, name string) io.ReadCloser {
}
}
func tryShort(msg container.Msg, name string) (config *hst.Config, entry *state.State) {
func tryShort(msg message.Msg, name string) (config *hst.Config, entry *state.State) {
likePrefix := false
if len(name) <= 32 {
likePrefix = true

View File

@ -11,10 +11,10 @@ import (
"text/tabwriter"
"time"
"hakurei.app/container"
"hakurei.app/hst"
"hakurei.app/internal/app"
"hakurei.app/internal/app/state"
"hakurei.app/message"
)
func printShowSystem(output io.Writer, short, flagJSON bool) {
@ -56,7 +56,7 @@ func printShowInstance(
if err := config.Validate(); err != nil {
valid = false
if m, ok := container.GetErrorMessage(err); ok {
if m, ok := message.GetMessage(err); ok {
mustPrint(output, "Error: "+m+"!\n\n")
}
}

View File

@ -27,6 +27,8 @@ var (
)
func TestPrintShowInstance(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
instance *state.State
@ -49,8 +51,7 @@ Filesystem
autoroot:w:/var/lib/hakurei/base/org.debian
autoetc:/etc/
w+ephemeral(-rwxr-xr-x):/tmp/
w*/nix/store:/mnt-root/nix/.rw-store/upper:/mnt-root/nix/.rw-store/work:/mnt-root/nix/.ro-store
*/nix/store
w*/nix/store:/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper:/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work:/var/lib/hakurei/base/org.nixos/ro-store
/run/current-system@
/run/opengl-driver@
w-/var/lib/hakurei/u0/org.chromium.Chromium:/data/data/org.chromium.Chromium
@ -130,8 +131,7 @@ Filesystem
autoroot:w:/var/lib/hakurei/base/org.debian
autoetc:/etc/
w+ephemeral(-rwxr-xr-x):/tmp/
w*/nix/store:/mnt-root/nix/.rw-store/upper:/mnt-root/nix/.rw-store/work:/mnt-root/nix/.ro-store
*/nix/store
w*/nix/store:/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper:/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work:/var/lib/hakurei/base/org.nixos/ro-store
/run/current-system@
/run/opengl-driver@
w-/var/lib/hakurei/u0/org.chromium.Chromium:/data/data/org.chromium.Chromium
@ -290,14 +290,10 @@ App
"type": "overlay",
"dst": "/nix/store",
"lower": [
"/mnt-root/nix/.ro-store"
"/var/lib/hakurei/base/org.nixos/ro-store"
],
"upper": "/mnt-root/nix/.rw-store/upper",
"work": "/mnt-root/nix/.rw-store/work"
},
{
"type": "bind",
"src": "/nix/store"
"upper": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper",
"work": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work"
},
{
"type": "link",
@ -444,14 +440,10 @@ App
"type": "overlay",
"dst": "/nix/store",
"lower": [
"/mnt-root/nix/.ro-store"
"/var/lib/hakurei/base/org.nixos/ro-store"
],
"upper": "/mnt-root/nix/.rw-store/upper",
"work": "/mnt-root/nix/.rw-store/work"
},
{
"type": "bind",
"src": "/nix/store"
"upper": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper",
"work": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work"
},
{
"type": "link",
@ -497,6 +489,8 @@ App
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
output := new(strings.Builder)
gotValid := printShowInstance(output, testTime, tc.instance, tc.config, tc.short, tc.json)
if got := output.String(); got != tc.want {
@ -511,6 +505,8 @@ App
}
func TestPrintPs(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
entries state.Entries
@ -654,14 +650,10 @@ func TestPrintPs(t *testing.T) {
"type": "overlay",
"dst": "/nix/store",
"lower": [
"/mnt-root/nix/.ro-store"
"/var/lib/hakurei/base/org.nixos/ro-store"
],
"upper": "/mnt-root/nix/.rw-store/upper",
"work": "/mnt-root/nix/.rw-store/work"
},
{
"type": "bind",
"src": "/nix/store"
"upper": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper",
"work": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work"
},
{
"type": "link",
@ -712,6 +704,8 @@ func TestPrintPs(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
output := new(strings.Builder)
printPs(output, testTime, stubStore(tc.entries), tc.short, tc.json)
if got := output.String(); got != tc.want {

View File

@ -91,7 +91,7 @@ func (app *appInfo) toHst(pathSet *appPathSet, pathname *check.Absolute, argv []
{FilesystemConfig: &hst.FSLink{Target: pathCurrentSystem, Linkname: app.CurrentSystem.String()}},
{FilesystemConfig: &hst.FSLink{Target: pathBin, Linkname: pathSwBin.String()}},
{FilesystemConfig: &hst.FSLink{Target: fhs.AbsUsrBin, Linkname: pathSwBin.String()}},
{FilesystemConfig: &hst.FSBind{Source: pathSet.metaPath, Target: hst.AbsTmp.Append("app")}},
{FilesystemConfig: &hst.FSBind{Source: pathSet.metaPath, Target: hst.AbsPrivateTmp.Append("app")}},
{FilesystemConfig: &hst.FSBind{Source: fhs.AbsEtc.Append("resolv.conf"), Optional: true}},
{FilesystemConfig: &hst.FSBind{Source: fhs.AbsSys.Append("block"), Optional: true}},
{FilesystemConfig: &hst.FSBind{Source: fhs.AbsSys.Append("bus"), Optional: true}},

View File

@ -11,10 +11,10 @@ import (
"syscall"
"hakurei.app/command"
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/hst"
"hakurei.app/message"
)
var (
@ -24,7 +24,7 @@ var (
func main() {
log.SetPrefix("hpkg: ")
log.SetFlags(0)
msg := container.NewMsg(log.Default())
msg := message.NewMsg(log.Default())
if err := os.Setenv("SHELL", pathShell.String()); err != nil {
log.Fatalf("cannot set $SHELL: %v", err)
@ -162,7 +162,7 @@ func main() {
withCacheDir(ctx, msg, "install", []string{
// export inner bundle path in the environment
"export BUNDLE=" + hst.Tmp + "/bundle",
"export BUNDLE=" + hst.PrivateTmp + "/bundle",
// replace inner /etc
"mkdir -p etc",
"chmod -R +w etc",
@ -309,7 +309,7 @@ func main() {
if a.GPU {
config.Container.Filesystem = append(config.Container.Filesystem,
hst.FilesystemConfigJSON{FilesystemConfig: &hst.FSBind{Source: pathSet.nixPath.Append(".nixGL"), Target: hst.AbsTmp.Append("nixGL")}})
hst.FilesystemConfigJSON{FilesystemConfig: &hst.FSBind{Source: pathSet.nixPath.Append(".nixGL"), Target: hst.AbsPrivateTmp.Append("nixGL")}})
appendGPUFilesystem(config)
}

View File

@ -7,10 +7,10 @@ import (
"strconv"
"sync/atomic"
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/hst"
"hakurei.app/message"
)
const bash = "bash"
@ -52,7 +52,7 @@ func lookPath(file string) string {
var beforeRunFail = new(atomic.Pointer[func()])
func mustRun(msg container.Msg, name string, arg ...string) {
func mustRun(msg message.Msg, name string, arg ...string) {
msg.Verbosef("spawning process: %q %q", name, arg)
cmd := exec.Command(name, arg...)
cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

View File

@ -9,14 +9,14 @@ import (
"os"
"os/exec"
"hakurei.app/container"
"hakurei.app/hst"
"hakurei.app/internal"
"hakurei.app/message"
)
var hakureiPath = internal.MustHakureiPath()
func mustRunApp(ctx context.Context, msg container.Msg, config *hst.Config, beforeFail func()) {
func mustRunApp(ctx context.Context, msg message.Msg, config *hst.Config, beforeFail func()) {
var (
cmd *exec.Cmd
st io.WriteCloser

View File

@ -5,15 +5,15 @@ import (
"os"
"strings"
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/hst"
"hakurei.app/message"
)
func withNixDaemon(
ctx context.Context,
msg container.Msg,
msg message.Msg,
action string, command []string, net bool, updateConfig func(config *hst.Config) *hst.Config,
app *appInfo, pathSet *appPathSet, dropShell bool, beforeFail func(),
) {
@ -64,7 +64,7 @@ func withNixDaemon(
func withCacheDir(
ctx context.Context,
msg container.Msg,
msg message.Msg,
action string, command []string, workDir *check.Absolute,
app *appInfo, pathSet *appPathSet, dropShell bool, beforeFail func()) {
mustRunAppDropShell(ctx, msg, &hst.Config{
@ -88,7 +88,7 @@ func withCacheDir(
{FilesystemConfig: &hst.FSLink{Target: pathCurrentSystem, Linkname: app.CurrentSystem.String()}},
{FilesystemConfig: &hst.FSLink{Target: pathBin, Linkname: pathSwBin.String()}},
{FilesystemConfig: &hst.FSLink{Target: fhs.AbsUsrBin, Linkname: pathSwBin.String()}},
{FilesystemConfig: &hst.FSBind{Source: workDir, Target: hst.AbsTmp.Append("bundle")}},
{FilesystemConfig: &hst.FSBind{Source: workDir, Target: hst.AbsPrivateTmp.Append("bundle")}},
{FilesystemConfig: &hst.FSBind{Target: pathDataData.Append(app.ID, "cache"), Source: pathSet.cacheDir, Write: true, Ensure: true}},
},
@ -102,7 +102,7 @@ func withCacheDir(
}, dropShell, beforeFail)
}
func mustRunAppDropShell(ctx context.Context, msg container.Msg, config *hst.Config, dropShell bool, beforeFail func()) {
func mustRunAppDropShell(ctx context.Context, msg message.Msg, config *hst.Config, dropShell bool, beforeFail func()) {
if dropShell {
if config.Container != nil {
config.Container.Args = []string{bash, "-l"}

View File

@ -6,32 +6,46 @@ import (
"testing"
)
func Test_parseUint32Fast(t *testing.T) {
func TestParseUint32Fast(t *testing.T) {
t.Parallel()
t.Run("zero-length", func(t *testing.T) {
t.Parallel()
if _, err := parseUint32Fast(""); err == nil || err.Error() != "zero length string" {
t.Errorf(`parseUint32Fast(""): error = %v`, err)
return
}
})
t.Run("overflow", func(t *testing.T) {
t.Parallel()
if _, err := parseUint32Fast("10000000000"); err == nil || err.Error() != "string too long" {
t.Errorf("parseUint32Fast: error = %v", err)
return
}
})
t.Run("invalid byte", func(t *testing.T) {
t.Parallel()
if _, err := parseUint32Fast("meow"); err == nil || err.Error() != "invalid character 'm' at index 0" {
t.Errorf(`parseUint32Fast("meow"): error = %v`, err)
return
}
})
t.Run("full range", func(t *testing.T) {
t.Parallel()
testRange := func(i, end int) {
for ; i < end; i++ {
s := strconv.Itoa(i)
w := i
t.Run("parse "+s, func(t *testing.T) {
t.Parallel()
v, err := parseUint32Fast(s)
if err != nil {
t.Errorf("parseUint32Fast(%q): error = %v",
@ -55,7 +69,9 @@ func Test_parseUint32Fast(t *testing.T) {
})
}
func Test_parseConfig(t *testing.T) {
func TestParseConfig(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
puid, want int
@ -71,6 +87,8 @@ func Test_parseConfig(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
fid, ok, err := parseConfig(bytes.NewBufferString(tc.rc), tc.puid)
if err == nil && tc.wantErr != "" {
t.Errorf("parseConfig: error = %v; wantErr %q",

View File

@ -7,6 +7,7 @@ import (
)
func TestBuild(t *testing.T) {
t.Parallel()
c := command.New(nil, nil, "test", nil)
stubHandler := func([]string) error { panic("unreachable") }

View File

@ -14,6 +14,8 @@ import (
)
func TestParse(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
buildTree func(wout, wlog io.Writer) command.Command
@ -251,6 +253,7 @@ Commands:
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
wout, wlog := new(bytes.Buffer), new(bytes.Buffer)
c := tc.buildTree(wout, wlog)

View File

@ -6,15 +6,19 @@ import (
)
func TestParseUnreachable(t *testing.T) {
t.Parallel()
// top level bypasses name matching and recursive calls to Parse
// returns when encountering zero-length args
t.Run("zero-length args", func(t *testing.T) {
t.Parallel()
defer checkRecover(t, "Parse", "attempted to parse with zero length args")
_ = newNode(panicWriter{}, nil, " ", " ").Parse(nil)
})
// top level must not have siblings
t.Run("toplevel siblings", func(t *testing.T) {
t.Parallel()
defer checkRecover(t, "Parse", "invalid toplevel state")
n := newNode(panicWriter{}, nil, " ", "")
n.append(newNode(panicWriter{}, nil, " ", " "))
@ -23,6 +27,7 @@ func TestParseUnreachable(t *testing.T) {
// a node with descendents must not have a direct handler
t.Run("sub handle conflict", func(t *testing.T) {
t.Parallel()
defer checkRecover(t, "Parse", "invalid subcommand tree state")
n := newNode(panicWriter{}, nil, " ", " ")
n.adopt(newNode(panicWriter{}, nil, " ", " "))
@ -32,6 +37,7 @@ func TestParseUnreachable(t *testing.T) {
// this would only happen if a node was matched twice
t.Run("parsed flag set", func(t *testing.T) {
t.Parallel()
defer checkRecover(t, "Parse", "invalid set state")
n := newNode(panicWriter{}, nil, " ", "")
set := flag.NewFlagSet("parsed", flag.ContinueOnError)

View File

@ -10,7 +10,10 @@ import (
)
func TestAutoEtcOp(t *testing.T) {
t.Parallel()
t.Run("nonrepeatable", func(t *testing.T) {
t.Parallel()
wantErr := OpRepeatError("autoetc")
if err := (&AutoEtcOp{Prefix: "81ceabb30d37bbdb3868004629cb84e9"}).apply(&setupState{nonrepeatable: nrAutoEtc}, nil); !errors.Is(err, wantErr) {
t.Errorf("apply: error = %v, want %v", err, wantErr)
@ -280,6 +283,7 @@ func TestAutoEtcOp(t *testing.T) {
})
t.Run("host path rel", func(t *testing.T) {
t.Parallel()
op := &AutoEtcOp{Prefix: "048090b6ed8f9ebb10e275ff5d8c0659"}
wantHostPath := "/etc/.host/048090b6ed8f9ebb10e275ff5d8c0659"
wantHostRel := ".host/048090b6ed8f9ebb10e275ff5d8c0659"

View File

@ -6,6 +6,7 @@ import (
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/message"
)
func init() { gob.Register(new(AutoRootOp)) }
@ -81,7 +82,7 @@ func (r *AutoRootOp) String() string {
}
// IsAutoRootBindable returns whether a dir entry name is selected for AutoRoot.
func IsAutoRootBindable(msg Msg, name string) bool {
func IsAutoRootBindable(msg message.Msg, name string) bool {
switch name {
case "proc", "dev", "tmp", "mnt", "etc":

View File

@ -8,10 +8,12 @@ import (
"hakurei.app/container/bits"
"hakurei.app/container/check"
"hakurei.app/container/stub"
"hakurei.app/message"
)
func TestAutoRootOp(t *testing.T) {
t.Run("nonrepeatable", func(t *testing.T) {
t.Parallel()
wantErr := OpRepeatError("autoroot")
if err := new(AutoRootOp).apply(&setupState{nonrepeatable: nrAutoRoot}, nil); !errors.Is(err, wantErr) {
t.Errorf("apply: error = %v, want %v", err, wantErr)
@ -179,6 +181,8 @@ func TestAutoRootOp(t *testing.T) {
}
func TestIsAutoRootBindable(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
want bool
@ -195,7 +199,8 @@ func TestIsAutoRootBindable(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
var msg Msg
t.Parallel()
var msg message.Msg
if tc.log {
msg = &kstub{nil, stub.New(t, func(s *stub.Stub[syscallDispatcher]) syscallDispatcher { panic("unreachable") }, stub.Expect{Calls: []stub.Call{
call("verbose", stub.ExpectArgs{[]any{"got unexpected root entry"}}, nil, nil),

View File

@ -3,6 +3,8 @@ package container
import "testing"
func TestCapToIndex(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
cap uintptr
@ -14,6 +16,7 @@ func TestCapToIndex(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := capToIndex(tc.cap); got != tc.want {
t.Errorf("capToIndex: %#x, want %#x", got, tc.want)
}
@ -22,6 +25,8 @@ func TestCapToIndex(t *testing.T) {
}
func TestCapToMask(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
cap uintptr
@ -33,6 +38,7 @@ func TestCapToMask(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := capToMask(tc.cap); got != tc.want {
t.Errorf("capToMask: %#x, want %#x", got, tc.want)
}

View File

@ -18,6 +18,8 @@ import (
func unsafeAbs(_ string) *Absolute
func TestAbsoluteError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
@ -27,8 +29,8 @@ func TestAbsoluteError(t *testing.T) {
}{
{"EINVAL", new(AbsoluteError), syscall.EINVAL, true},
{"not EINVAL", new(AbsoluteError), syscall.EBADE, false},
{"ne val", new(AbsoluteError), &AbsoluteError{"etc"}, false},
{"equals", &AbsoluteError{"etc"}, &AbsoluteError{"etc"}, true},
{"ne val", new(AbsoluteError), &AbsoluteError{Pathname: "etc"}, false},
{"equals", &AbsoluteError{Pathname: "etc"}, &AbsoluteError{Pathname: "etc"}, true},
}
for _, tc := range testCases {
@ -38,14 +40,18 @@ func TestAbsoluteError(t *testing.T) {
}
t.Run("string", func(t *testing.T) {
t.Parallel()
want := `path "etc" is not absolute`
if got := (&AbsoluteError{"etc"}).Error(); got != want {
if got := (&AbsoluteError{Pathname: "etc"}).Error(); got != want {
t.Errorf("Error: %q, want %q", got, want)
}
})
}
func TestNewAbs(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
@ -54,12 +60,14 @@ func TestNewAbs(t *testing.T) {
wantErr error
}{
{"good", "/etc", MustAbs("/etc"), nil},
{"not absolute", "etc", nil, &AbsoluteError{"etc"}},
{"zero", "", nil, &AbsoluteError{""}},
{"not absolute", "etc", nil, &AbsoluteError{Pathname: "etc"}},
{"zero", "", nil, &AbsoluteError{Pathname: ""}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
got, err := NewAbs(tc.pathname)
if !reflect.DeepEqual(got, tc.want) {
t.Errorf("NewAbs: %#v, want %#v", got, tc.want)
@ -71,6 +79,8 @@ func TestNewAbs(t *testing.T) {
}
t.Run("must", func(t *testing.T) {
t.Parallel()
defer func() {
wantPanic := `path "etc" is not absolute`
@ -85,6 +95,8 @@ func TestNewAbs(t *testing.T) {
func TestAbsoluteString(t *testing.T) {
t.Run("passthrough", func(t *testing.T) {
t.Parallel()
pathname := "/etc"
if got := unsafeAbs(pathname).String(); got != pathname {
t.Errorf("String: %q, want %q", got, pathname)
@ -92,6 +104,8 @@ func TestAbsoluteString(t *testing.T) {
})
t.Run("zero", func(t *testing.T) {
t.Parallel()
defer func() {
wantPanic := "attempted use of zero Absolute"
@ -105,6 +119,8 @@ func TestAbsoluteString(t *testing.T) {
}
func TestAbsoluteIs(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
a, v *Absolute
@ -120,6 +136,8 @@ func TestAbsoluteIs(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := tc.a.Is(tc.v); got != tc.want {
t.Errorf("Is: %v, want %v", got, tc.want)
}
@ -133,6 +151,8 @@ type sCheck struct {
}
func TestCodecAbsolute(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
a *Absolute
@ -153,7 +173,7 @@ func TestCodecAbsolute(t *testing.T) {
`"/etc"`, `{"val":"/etc","magic":3236757504}`},
{"not absolute", nil,
&AbsoluteError{"etc"},
&AbsoluteError{Pathname: "etc"},
"\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\a\xff\x80\x00\x03etc",
",\xff\x83\x03\x01\x01\x06sCheck\x01\xff\x84\x00\x01\x02\x01\bPathname\x01\xff\x80\x00\x01\x05Magic\x01\x04\x00\x00\x00\t\x7f\x05\x01\x02\xff\x82\x00\x00\x00\x0f\xff\x84\x01\x03etc\x01\xfb\x01\x81\xda\x00\x00\x00",
@ -167,13 +187,18 @@ func TestCodecAbsolute(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("gob", func(t *testing.T) {
if tc.gob == "\x00" && tc.sGob == "\x00" {
// these values mark the current test to skip gob
return
}
t.Parallel()
t.Run("encode", func(t *testing.T) {
t.Parallel()
// encode is unchecked
if errors.Is(tc.wantErr, syscall.EINVAL) {
return
@ -210,6 +235,8 @@ func TestCodecAbsolute(t *testing.T) {
})
t.Run("decode", func(t *testing.T) {
t.Parallel()
{
var gotA *Absolute
err := gob.NewDecoder(strings.NewReader(tc.gob)).Decode(&gotA)
@ -244,7 +271,11 @@ func TestCodecAbsolute(t *testing.T) {
})
t.Run("json", func(t *testing.T) {
t.Parallel()
t.Run("marshal", func(t *testing.T) {
t.Parallel()
// marshal is unchecked
if errors.Is(tc.wantErr, syscall.EINVAL) {
return
@ -279,6 +310,8 @@ func TestCodecAbsolute(t *testing.T) {
})
t.Run("unmarshal", func(t *testing.T) {
t.Parallel()
{
var gotA *Absolute
err := json.Unmarshal([]byte(tc.json), &gotA)
@ -314,6 +347,8 @@ func TestCodecAbsolute(t *testing.T) {
}
t.Run("json passthrough", func(t *testing.T) {
t.Parallel()
wantErr := "invalid character ':' looking for beginning of value"
if err := new(Absolute).UnmarshalJSON([]byte(":3")); err == nil || err.Error() != wantErr {
t.Errorf("UnmarshalJSON: error = %v, want %s", err, wantErr)
@ -322,7 +357,11 @@ func TestCodecAbsolute(t *testing.T) {
}
func TestAbsoluteWrap(t *testing.T) {
t.Parallel()
t.Run("join", func(t *testing.T) {
t.Parallel()
want := "/etc/nix/nix.conf"
if got := MustAbs("/etc").Append("nix", "nix.conf"); got.String() != want {
t.Errorf("Append: %q, want %q", got, want)
@ -330,6 +369,8 @@ func TestAbsoluteWrap(t *testing.T) {
})
t.Run("dir", func(t *testing.T) {
t.Parallel()
want := "/"
if got := MustAbs("/etc").Dir(); got.String() != want {
t.Errorf("Dir: %q, want %q", got, want)
@ -337,6 +378,8 @@ func TestAbsoluteWrap(t *testing.T) {
})
t.Run("sort", func(t *testing.T) {
t.Parallel()
want := []*Absolute{MustAbs("/etc"), MustAbs("/proc"), MustAbs("/sys")}
got := []*Absolute{MustAbs("/proc"), MustAbs("/sys"), MustAbs("/etc")}
SortAbs(got)
@ -346,6 +389,8 @@ func TestAbsoluteWrap(t *testing.T) {
})
t.Run("compact", func(t *testing.T) {
t.Parallel()
want := []*Absolute{MustAbs("/etc"), MustAbs("/proc"), MustAbs("/sys")}
if got := CompactAbs([]*Absolute{MustAbs("/etc"), MustAbs("/proc"), MustAbs("/proc"), MustAbs("/sys")}); !reflect.DeepEqual(got, want) {
t.Errorf("CompactAbs: %#v, want %#v", got, want)

View File

@ -7,6 +7,8 @@ import (
)
func TestEscapeOverlayDataSegment(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
s string
@ -19,6 +21,8 @@ func TestEscapeOverlayDataSegment(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := check.EscapeOverlayDataSegment(tc.s); got != tc.want {
t.Errorf("escapeOverlayDataSegment: %s, want %s", got, tc.want)
}

View File

@ -18,6 +18,7 @@ import (
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/container/seccomp"
"hakurei.app/message"
)
const (
@ -52,7 +53,7 @@ type (
cmd *exec.Cmd
ctx context.Context
msg Msg
msg message.Msg
Params
}
@ -396,9 +397,9 @@ func (p *Container) ProcessState() *os.ProcessState {
}
// New returns the address to a new instance of [Container] that requires further initialisation before use.
func New(ctx context.Context, msg Msg) *Container {
func New(ctx context.Context, msg message.Msg) *Container {
if msg == nil {
msg = NewMsg(nil)
msg = message.NewMsg(nil)
}
p := &Container{ctx: ctx, msg: msg, Params: Params{Ops: new(Ops)}}
@ -409,7 +410,7 @@ func New(ctx context.Context, msg Msg) *Container {
}
// NewCommand calls [New] and initialises the [Params.Path] and [Params.Args] fields.
func NewCommand(ctx context.Context, msg Msg, pathname *check.Absolute, name string, args ...string) *Container {
func NewCommand(ctx context.Context, msg message.Msg, pathname *check.Absolute, name string, args ...string) *Container {
z := New(ctx, msg)
z.Path = pathname
z.Args = append([]string{name}, args...)

View File

@ -26,9 +26,12 @@ import (
"hakurei.app/container/vfs"
"hakurei.app/hst"
"hakurei.app/ldd"
"hakurei.app/message"
)
func TestStartError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err error
@ -136,6 +139,8 @@ func TestStartError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("error", func(t *testing.T) {
if got := tc.err.Error(); got != tc.s {
t.Errorf("Error: %q, want %q", got, tc.s)
@ -152,13 +157,13 @@ func TestStartError(t *testing.T) {
})
t.Run("msg", func(t *testing.T) {
if got, ok := container.GetErrorMessage(tc.err); !ok {
if got, ok := message.GetMessage(tc.err); !ok {
if tc.msg != "" {
t.Errorf("GetErrorMessage: err does not implement MessageError")
t.Errorf("GetMessage: err does not implement MessageError")
}
return
} else if got != tc.msg {
t.Errorf("GetErrorMessage: %q, want %q", got, tc.msg)
t.Errorf("GetMessage: %q, want %q", got, tc.msg)
}
})
})
@ -218,10 +223,10 @@ var containerTestCases = []struct {
{"tmpfs", true, false, false, true,
earlyOps(new(container.Ops).
Tmpfs(hst.AbsTmp, 0, 0755),
Tmpfs(hst.AbsPrivateTmp, 0, 0755),
),
earlyMnt(
ent("/", hst.Tmp, "rw,nosuid,nodev,relatime", "tmpfs", "ephemeral", ignore),
ent("/", hst.PrivateTmp, "rw,nosuid,nodev,relatime", "tmpfs", "ephemeral", ignore),
),
9, 9, nil, 0, bits.PresetStrict},
@ -275,7 +280,7 @@ var containerTestCases = []struct {
}
return new(container.Ops).
Overlay(hst.AbsTmp, upper, work, lower0, lower1),
Overlay(hst.AbsPrivateTmp, upper, work, lower0, lower1),
context.WithValue(context.WithValue(context.WithValue(context.WithValue(t.Context(),
testVal("lower1"), lower1),
testVal("lower0"), lower0),
@ -284,7 +289,7 @@ var containerTestCases = []struct {
},
func(t *testing.T, ctx context.Context) []*vfs.MountInfoEntry {
return []*vfs.MountInfoEntry{
ent("/", hst.Tmp, "rw", "overlay", "overlay",
ent("/", hst.PrivateTmp, "rw", "overlay", "overlay",
"rw,lowerdir="+
container.InternalToHostOvlEscape(ctx.Value(testVal("lower0")).(*check.Absolute).String())+":"+
container.InternalToHostOvlEscape(ctx.Value(testVal("lower1")).(*check.Absolute).String())+
@ -310,13 +315,13 @@ var containerTestCases = []struct {
}
return new(container.Ops).
OverlayEphemeral(hst.AbsTmp, lower0, lower1),
OverlayEphemeral(hst.AbsPrivateTmp, lower0, lower1),
t.Context()
},
func(t *testing.T, ctx context.Context) []*vfs.MountInfoEntry {
return []*vfs.MountInfoEntry{
// contains random suffix
ent("/", hst.Tmp, "rw", "overlay", "overlay", ignore),
ent("/", hst.PrivateTmp, "rw", "overlay", "overlay", ignore),
}
},
1 << 3, 1 << 14, nil, 0, bits.PresetStrict},
@ -333,14 +338,14 @@ var containerTestCases = []struct {
}
}
return new(container.Ops).
OverlayReadonly(hst.AbsTmp, lower0, lower1),
OverlayReadonly(hst.AbsPrivateTmp, lower0, lower1),
context.WithValue(context.WithValue(t.Context(),
testVal("lower1"), lower1),
testVal("lower0"), lower0)
},
func(t *testing.T, ctx context.Context) []*vfs.MountInfoEntry {
return []*vfs.MountInfoEntry{
ent("/", hst.Tmp, "rw", "overlay", "overlay",
ent("/", hst.PrivateTmp, "rw", "overlay", "overlay",
"ro,lowerdir="+
container.InternalToHostOvlEscape(ctx.Value(testVal("lower0")).(*check.Absolute).String())+":"+
container.InternalToHostOvlEscape(ctx.Value(testVal("lower1")).(*check.Absolute).String())+
@ -351,6 +356,8 @@ var containerTestCases = []struct {
}
func TestContainer(t *testing.T) {
t.Parallel()
t.Run("cancel", testContainerCancel(nil, func(t *testing.T, c *container.Container) {
wantErr := context.Canceled
wantExitCode := 0
@ -384,6 +391,8 @@ func TestContainer(t *testing.T) {
for i, tc := range containerTestCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
wantOps, wantOpsCtx := tc.ops(t)
wantMnt := tc.mnt(t, wantOpsCtx)
@ -503,6 +512,7 @@ func testContainerCancel(
waitCheck func(t *testing.T, c *container.Container),
) func(t *testing.T) {
return func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(t.Context(), helperDefaultTimeout)
c := helperNewContainer(ctx, "block")
@ -545,7 +555,8 @@ func testContainerCancel(
}
func TestContainerString(t *testing.T) {
msg := container.NewMsg(nil)
t.Parallel()
msg := message.NewMsg(nil)
c := container.NewCommand(t.Context(), msg, check.MustAbs("/run/current-system/sw/bin/ldd"), "ldd", "/usr/bin/env")
c.SeccompFlags |= seccomp.AllowMultiarch
c.SeccompRules = seccomp.Preset(
@ -711,7 +722,7 @@ func TestMain(m *testing.M) {
}
func helperNewContainerLibPaths(ctx context.Context, libPaths *[]*check.Absolute, args ...string) (c *container.Container) {
msg := container.NewMsg(nil)
msg := message.NewMsg(nil)
c = container.NewCommand(ctx, msg, absHelperInnerPath, "helper", args...)
c.Env = append(c.Env, envDoCheck+"=1")
c.Bind(check.MustAbs(os.Args[0]), absHelperInnerPath, 0)

View File

@ -11,6 +11,7 @@ import (
"syscall"
"hakurei.app/container/seccomp"
"hakurei.app/message"
)
type osFile interface {
@ -37,7 +38,7 @@ type syscallDispatcher interface {
setNoNewPrivs() error
// lastcap provides [LastCap].
lastcap(msg Msg) uintptr
lastcap(msg message.Msg) uintptr
// capset provides capset.
capset(hdrp *capHeader, datap *[2]capData) error
// capBoundingSetDrop provides capBoundingSetDrop.
@ -52,9 +53,9 @@ type syscallDispatcher interface {
receive(key string, e any, fdp *uintptr) (closeFunc func() error, err error)
// bindMount provides procPaths.bindMount.
bindMount(msg Msg, source, target string, flags uintptr) error
bindMount(msg message.Msg, source, target string, flags uintptr) error
// remount provides procPaths.remount.
remount(msg Msg, target string, flags uintptr) error
remount(msg message.Msg, target string, flags uintptr) error
// mountTmpfs provides mountTmpfs.
mountTmpfs(fsname, target string, flags uintptr, size int, perm os.FileMode) error
// ensureFile provides ensureFile.
@ -122,11 +123,11 @@ type syscallDispatcher interface {
wait4(pid int, wstatus *syscall.WaitStatus, options int, rusage *syscall.Rusage) (wpid int, err error)
// printf provides the Printf method of [log.Logger].
printf(msg Msg, format string, v ...any)
printf(msg message.Msg, format string, v ...any)
// fatal provides the Fatal method of [log.Logger]
fatal(msg Msg, v ...any)
fatal(msg message.Msg, v ...any)
// fatalf provides the Fatalf method of [log.Logger]
fatalf(msg Msg, format string, v ...any)
fatalf(msg message.Msg, format string, v ...any)
}
// direct implements syscallDispatcher on the current kernel.
@ -140,7 +141,7 @@ func (direct) setPtracer(pid uintptr) error { return SetPtracer(pid) }
func (direct) setDumpable(dumpable uintptr) error { return SetDumpable(dumpable) }
func (direct) setNoNewPrivs() error { return SetNoNewPrivs() }
func (direct) lastcap(msg Msg) uintptr { return LastCap(msg) }
func (direct) lastcap(msg message.Msg) uintptr { return LastCap(msg) }
func (direct) capset(hdrp *capHeader, datap *[2]capData) error { return capset(hdrp, datap) }
func (direct) capBoundingSetDrop(cap uintptr) error { return capBoundingSetDrop(cap) }
func (direct) capAmbientClearAll() error { return capAmbientClearAll() }
@ -150,10 +151,10 @@ func (direct) receive(key string, e any, fdp *uintptr) (func() error, error) {
return Receive(key, e, fdp)
}
func (direct) bindMount(msg Msg, source, target string, flags uintptr) error {
func (direct) bindMount(msg message.Msg, source, target string, flags uintptr) error {
return hostProc.bindMount(msg, source, target, flags)
}
func (direct) remount(msg Msg, target string, flags uintptr) error {
func (direct) remount(msg message.Msg, target string, flags uintptr) error {
return hostProc.remount(msg, target, flags)
}
func (k direct) mountTmpfs(fsname, target string, flags uintptr, size int, perm os.FileMode) error {
@ -221,6 +222,6 @@ func (direct) wait4(pid int, wstatus *syscall.WaitStatus, options int, rusage *s
return syscall.Wait4(pid, wstatus, options, rusage)
}
func (direct) printf(msg Msg, format string, v ...any) { msg.GetLogger().Printf(format, v...) }
func (direct) fatal(msg Msg, v ...any) { msg.GetLogger().Fatal(v...) }
func (direct) fatalf(msg Msg, format string, v ...any) { msg.GetLogger().Fatalf(format, v...) }
func (direct) printf(msg message.Msg, format string, v ...any) { msg.GetLogger().Printf(format, v...) }
func (direct) fatal(msg message.Msg, v ...any) { msg.GetLogger().Fatal(v...) }
func (direct) fatalf(msg message.Msg, format string, v ...any) { msg.GetLogger().Fatalf(format, v...) }

View File

@ -18,6 +18,7 @@ import (
"hakurei.app/container/seccomp"
"hakurei.app/container/stub"
"hakurei.app/message"
)
type opValidTestCase struct {
@ -31,10 +32,12 @@ func checkOpsValid(t *testing.T, testCases []opValidTestCase) {
t.Run("valid", func(t *testing.T) {
t.Helper()
t.Parallel()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
if got := tc.op.Valid(); got != tc.want {
t.Errorf("Valid: %v, want %v", got, tc.want)
@ -55,10 +58,12 @@ func checkOpsBuilder(t *testing.T, testCases []opsBuilderTestCase) {
t.Run("build", func(t *testing.T) {
t.Helper()
t.Parallel()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
if !slices.EqualFunc(*tc.ops, tc.want, func(op Op, v Op) bool { return op.Is(v) }) {
t.Errorf("Ops: %#v, want %#v", tc.ops, tc.want)
@ -79,10 +84,12 @@ func checkOpIs(t *testing.T, testCases []opIsTestCase) {
t.Run("is", func(t *testing.T) {
t.Helper()
t.Parallel()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
if got := tc.op.Is(tc.v); got != tc.want {
t.Errorf("Is: %v, want %v", got, tc.want)
@ -105,10 +112,12 @@ func checkOpMeta(t *testing.T, testCases []opMetaTestCase) {
t.Run("meta", func(t *testing.T) {
t.Helper()
t.Parallel()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
t.Run("prefix", func(t *testing.T) {
t.Helper()
@ -149,6 +158,7 @@ func checkSimple(t *testing.T, fname string, testCases []simpleTestCase) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
wait4signal := make(chan struct{})
k := &kstub{wait4signal, stub.New(t, func(s *stub.Stub[syscallDispatcher]) syscallDispatcher { return &kstub{wait4signal, s} }, tc.want)}
@ -182,10 +192,12 @@ func checkOpBehaviour(t *testing.T, testCases []opBehaviourTestCase) {
t.Run("behaviour", func(t *testing.T) {
t.Helper()
t.Parallel()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
k := &kstub{nil, stub.New(t,
func(s *stub.Stub[syscallDispatcher]) syscallDispatcher { return &kstub{nil, s} },
@ -329,7 +341,7 @@ func (k *kstub) setDumpable(dumpable uintptr) error {
}
func (k *kstub) setNoNewPrivs() error { k.Helper(); return k.Expects("setNoNewPrivs").Err }
func (k *kstub) lastcap(msg Msg) uintptr {
func (k *kstub) lastcap(msg message.Msg) uintptr {
k.Helper()
k.checkMsg(msg)
return k.Expects("lastcap").Ret.(uintptr)
@ -409,7 +421,7 @@ func (k *kstub) receive(key string, e any, fdp *uintptr) (closeFunc func() error
return
}
func (k *kstub) bindMount(msg Msg, source, target string, flags uintptr) error {
func (k *kstub) bindMount(msg message.Msg, source, target string, flags uintptr) error {
k.Helper()
k.checkMsg(msg)
return k.Expects("bindMount").Error(
@ -418,7 +430,7 @@ func (k *kstub) bindMount(msg Msg, source, target string, flags uintptr) error {
stub.CheckArg(k.Stub, "flags", flags, 2))
}
func (k *kstub) remount(msg Msg, target string, flags uintptr) error {
func (k *kstub) remount(msg message.Msg, target string, flags uintptr) error {
k.Helper()
k.checkMsg(msg)
return k.Expects("remount").Error(
@ -702,7 +714,7 @@ func (k *kstub) wait4(pid int, wstatus *syscall.WaitStatus, options int, rusage
return
}
func (k *kstub) printf(_ Msg, format string, v ...any) {
func (k *kstub) printf(_ message.Msg, format string, v ...any) {
k.Helper()
if k.Expects("printf").Error(
stub.CheckArg(k.Stub, "format", format, 0),
@ -711,7 +723,7 @@ func (k *kstub) printf(_ Msg, format string, v ...any) {
}
}
func (k *kstub) fatal(_ Msg, v ...any) {
func (k *kstub) fatal(_ message.Msg, v ...any) {
k.Helper()
if k.Expects("fatal").Error(
stub.CheckArgReflect(k.Stub, "v", v, 0)) != nil {
@ -720,7 +732,7 @@ func (k *kstub) fatal(_ Msg, v ...any) {
panic(stub.PanicExit)
}
func (k *kstub) fatalf(_ Msg, format string, v ...any) {
func (k *kstub) fatalf(_ message.Msg, format string, v ...any) {
k.Helper()
if k.Expects("fatalf").Error(
stub.CheckArg(k.Stub, "format", format, 0),
@ -730,7 +742,7 @@ func (k *kstub) fatalf(_ Msg, format string, v ...any) {
panic(stub.PanicExit)
}
func (k *kstub) checkMsg(msg Msg) {
func (k *kstub) checkMsg(msg message.Msg) {
k.Helper()
var target *kstub

View File

@ -59,6 +59,7 @@ func messagePrefixP[V any, T interface {
return zeroString, false
}
// MountError wraps errors returned by syscall.Mount.
type MountError struct {
Source, Target, Fstype string
@ -74,6 +75,7 @@ func (e *MountError) Unwrap() error {
return e.Errno
}
func (e *MountError) Message() string { return "cannot " + e.Error() }
func (e *MountError) Error() string {
if e.Flags&syscall.MS_BIND != 0 {
if e.Flags&syscall.MS_REMOUNT != 0 {
@ -90,6 +92,15 @@ func (e *MountError) Error() string {
return "mount " + e.Target + ": " + e.Errno.Error()
}
// optionalErrorUnwrap calls [errors.Unwrap] and returns the resulting value
// if it is not nil, or the original value if it is.
func optionalErrorUnwrap(err error) error {
if underlyingErr := errors.Unwrap(err); underlyingErr != nil {
return underlyingErr
}
return err
}
// errnoFallback returns the concrete errno from an error, or a [os.PathError] fallback.
func errnoFallback(op, path string, err error) (syscall.Errno, *os.PathError) {
var errno syscall.Errno

View File

@ -14,6 +14,8 @@ import (
)
func TestMessageFromError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err error
@ -54,6 +56,7 @@ func TestMessageFromError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
got, ok := messageFromError(tc.err)
if got != tc.want {
t.Errorf("messageFromError: %q, want %q", got, tc.want)
@ -66,6 +69,8 @@ func TestMessageFromError(t *testing.T) {
}
func TestMountError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err error
@ -111,6 +116,7 @@ func TestMountError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("is", func(t *testing.T) {
if !errors.Is(tc.err, tc.errno) {
t.Errorf("Is: %#v is not %v", tc.err, tc.errno)
@ -125,6 +131,7 @@ func TestMountError(t *testing.T) {
}
t.Run("zero", func(t *testing.T) {
t.Parallel()
if errors.Is(new(MountError), syscall.Errno(0)) {
t.Errorf("Is: zero MountError unexpected true")
}
@ -132,6 +139,8 @@ func TestMountError(t *testing.T) {
}
func TestErrnoFallback(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err error
@ -154,6 +163,7 @@ func TestErrnoFallback(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
errno, err := errnoFallback(tc.name, Nonexistent, tc.err)
if errno != tc.wantErrno {
t.Errorf("errnoFallback: errno = %v, want %v", errno, tc.wantErrno)

View File

@ -3,6 +3,8 @@ package container
import (
"os"
"sync"
"hakurei.app/message"
)
var (
@ -10,7 +12,7 @@ var (
executableOnce sync.Once
)
func copyExecutable(msg Msg) {
func copyExecutable(msg message.Msg) {
if name, err := os.Executable(); err != nil {
msg.BeforeExit()
msg.GetLogger().Fatalf("cannot read executable path: %v", err)
@ -19,7 +21,7 @@ func copyExecutable(msg Msg) {
}
}
func MustExecutable(msg Msg) string {
func MustExecutable(msg message.Msg) string {
executableOnce.Do(func() { copyExecutable(msg) })
return executable
}

View File

@ -5,13 +5,14 @@ import (
"testing"
"hakurei.app/container"
"hakurei.app/message"
)
func TestExecutable(t *testing.T) {
t.Parallel()
for i := 0; i < 16; i++ {
if got := container.MustExecutable(container.NewMsg(nil)); got != os.Args[0] {
t.Errorf("MustExecutable: %q, want %q",
got, os.Args[0])
if got := container.MustExecutable(message.NewMsg(nil)); got != os.Args[0] {
t.Errorf("MustExecutable: %q, want %q", got, os.Args[0])
}
}
}

View File

@ -14,6 +14,7 @@ import (
"hakurei.app/container/fhs"
"hakurei.app/container/seccomp"
"hakurei.app/message"
)
const (
@ -61,7 +62,7 @@ type (
setupState struct {
nonrepeatable uintptr
*Params
Msg
message.Msg
}
)
@ -95,14 +96,14 @@ type initParams struct {
}
// Init is called by [TryArgv0] if the current process is the container init.
func Init(msg Msg) {
func Init(msg message.Msg) {
if msg == nil {
panic("attempting to call initEntrypoint with nil msg")
}
initEntrypoint(direct{}, msg)
}
func initEntrypoint(k syscallDispatcher, msg Msg) {
func initEntrypoint(k syscallDispatcher, msg message.Msg) {
k.lockOSThread()
if k.getpid() != 1 {
@ -125,7 +126,7 @@ func initEntrypoint(k syscallDispatcher, msg Msg) {
k.fatal(msg, "invalid setup descriptor")
}
if errors.Is(err, ErrReceiveEnv) {
k.fatal(msg, "HAKUREI_SETUP not set")
k.fatal(msg, setupEnv+" not set")
}
k.fatalf(msg, "cannot decode init setup payload: %v", err)
@ -177,7 +178,7 @@ func initEntrypoint(k syscallDispatcher, msg Msg) {
lastcap := k.lastcap(msg)
if err := k.mount(zeroString, fhs.Root, zeroString, MS_SILENT|MS_SLAVE|MS_REC, zeroString); err != nil {
k.fatalf(msg, "cannot make / rslave: %v", err)
k.fatalf(msg, "cannot make / rslave: %v", optionalErrorUnwrap(err))
}
state := &setupState{Params: &params.Params, Msg: msg}
@ -201,7 +202,7 @@ func initEntrypoint(k syscallDispatcher, msg Msg) {
}
if err := k.mount(SourceTmpfsRootfs, intermediateHostPath, FstypeTmpfs, MS_NODEV|MS_NOSUID, zeroString); err != nil {
k.fatalf(msg, "cannot mount intermediate root: %v", err)
k.fatalf(msg, "cannot mount intermediate root: %v", optionalErrorUnwrap(err))
}
if err := k.chdir(intermediateHostPath); err != nil {
k.fatalf(msg, "cannot enter intermediate host path: %v", err)
@ -211,7 +212,7 @@ func initEntrypoint(k syscallDispatcher, msg Msg) {
k.fatalf(msg, "%v", err)
}
if err := k.mount(sysrootDir, sysrootDir, zeroString, MS_SILENT|MS_BIND|MS_REC, zeroString); err != nil {
k.fatalf(msg, "cannot bind sysroot: %v", err)
k.fatalf(msg, "cannot bind sysroot: %v", optionalErrorUnwrap(err))
}
if err := k.mkdir(hostDir, 0755); err != nil {
@ -245,7 +246,7 @@ func initEntrypoint(k syscallDispatcher, msg Msg) {
// setup requiring host root complete at this point
if err := k.mount(hostDir, hostDir, zeroString, MS_SILENT|MS_REC|MS_PRIVATE, zeroString); err != nil {
k.fatalf(msg, "cannot make host root rprivate: %v", err)
k.fatalf(msg, "cannot make host root rprivate: %v", optionalErrorUnwrap(err))
}
if err := k.unmount(hostDir, MNT_DETACH); err != nil {
k.fatalf(msg, "cannot unmount host root: %v", err)
@ -448,11 +449,11 @@ const initName = "init"
// TryArgv0 calls [Init] if the last element of argv0 is "init".
// If a nil msg is passed, the system logger is used instead.
func TryArgv0(msg Msg) {
func TryArgv0(msg message.Msg) {
if msg == nil {
log.SetPrefix(initName + ": ")
log.SetFlags(0)
msg = NewMsg(log.Default())
msg = message.NewMsg(log.Default())
}
if len(os.Args) > 0 && path.Base(os.Args[0]) == initName {

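The two doc comments above describe the re-exec entry point. A minimal sketch of how a program might wire this up, assuming the usual pattern of calling it at the top of main (the surrounding program logic is hypothetical):

package main

import "hakurei.app/container"

func main() {
	// When the last element of argv0 is "init", this presumably takes over as the
	// container init; otherwise it is a no-op and normal startup continues below.
	// A nil msg falls back to the default logger, per the doc comment on TryArgv0.
	container.TryArgv0(nil)

	// ... rest of the program
}
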
View File

@ -13,6 +13,7 @@ import (
)
func TestInitEntrypoint(t *testing.T) {
t.Parallel()
checkSimple(t, "initEntrypoint", []simpleTestCase{
{"getpid", func(k *kstub) error { initEntrypoint(k, k); return nil }, stub.Expect{
@ -2649,6 +2650,7 @@ func TestInitEntrypoint(t *testing.T) {
}
func TestOpsGrow(t *testing.T) {
t.Parallel()
ops := new(Ops)
ops.Grow(1 << 4)
if got := cap(*ops); got == 0 {

View File

@ -12,6 +12,8 @@ import (
)
func TestBindMountOp(t *testing.T) {
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"ENOENT not optional", new(Params), &BindMountOp{
Source: check.MustAbs("/bin/"),
@ -164,7 +166,10 @@ func TestBindMountOp(t *testing.T) {
})
t.Run("unreachable", func(t *testing.T) {
t.Parallel()
t.Run("nil sourceFinal not optional", func(t *testing.T) {
t.Parallel()
wantErr := OpStateError("bind")
if err := new(BindMountOp).apply(nil, nil); !errors.Is(err, wantErr) {
t.Errorf("apply: error = %v, want %v", err, wantErr)

View File

@ -9,6 +9,8 @@ import (
)
func TestMountDevOp(t *testing.T) {
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"mountTmpfs", &Params{ParentPerm: 0750, RetainSession: true}, &MountDevOp{
Target: check.MustAbs("/dev/"),

View File

@ -9,6 +9,8 @@ import (
)
func TestMkdirOp(t *testing.T) {
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"success", new(Params), &MkdirOp{
Path: check.MustAbs("/.hakurei"),

View File

@ -10,7 +10,11 @@ import (
)
func TestMountOverlayOp(t *testing.T) {
t.Parallel()
t.Run("argument error", func(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err *OverlayArgumentError
@ -30,6 +34,7 @@ func TestMountOverlayOp(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := tc.err.Error(); got != tc.want {
t.Errorf("Error: %q, want %q", got, tc.want)
}
@ -270,7 +275,10 @@ func TestMountOverlayOp(t *testing.T) {
})
t.Run("unreachable", func(t *testing.T) {
t.Parallel()
t.Run("nil Upper non-nil Work not ephemeral", func(t *testing.T) {
t.Parallel()
wantErr := OpStateError("overlay")
if err := (&MountOverlayOp{
Work: check.MustAbs("/"),

View File

@ -22,15 +22,6 @@ func (f *Ops) Place(name *check.Absolute, data []byte) *Ops {
return f
}
// PlaceP is like Place but writes the address of [TmpfileOp.Data] to the pointer dataP points to.
func (f *Ops) PlaceP(name *check.Absolute, dataP **[]byte) *Ops {
t := &TmpfileOp{Path: name}
*dataP = &t.Data
*f = append(*f, t)
return f
}
// TmpfileOp places a file on container Path containing Data.
type TmpfileOp struct {
Path *check.Absolute

View File

@ -14,6 +14,7 @@ func TestTmpfileOp(t *testing.T) {
samplePath = check.MustAbs("/etc/passwd")
sampleData = []byte(sampleDataString)
)
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"createTemp", &Params{ParentPerm: 0700}, &TmpfileOp{
@ -82,18 +83,8 @@ func TestTmpfileOp(t *testing.T) {
})
checkOpsBuilder(t, []opsBuilderTestCase{
{"noref", new(Ops).Place(samplePath, sampleData), Ops{
&TmpfileOp{
Path: samplePath,
Data: sampleData,
},
}},
{"ref", new(Ops).PlaceP(samplePath, new(*[]byte)), Ops{
&TmpfileOp{
Path: samplePath,
Data: []byte{},
},
{"full", new(Ops).Place(samplePath, sampleData), Ops{
&TmpfileOp{Path: samplePath, Data: sampleData},
}},
})

View File

@ -9,6 +9,8 @@ import (
)
func TestMountProcOp(t *testing.T) {
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"mkdir", &Params{ParentPerm: 0755},
&MountProcOp{

View File

@ -9,6 +9,8 @@ import (
)
func TestRemountOp(t *testing.T) {
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"success", new(Params), &RemountOp{
Target: check.MustAbs("/"),

View File

@ -9,6 +9,8 @@ import (
)
func TestSymlinkOp(t *testing.T) {
t.Parallel()
checkOpBehaviour(t, []opBehaviourTestCase{
{"mkdir", &Params{ParentPerm: 0700}, &SymlinkOp{
Target: check.MustAbs("/etc/nixos"),

View File

@ -10,7 +10,10 @@ import (
)
func TestMountTmpfsOp(t *testing.T) {
t.Parallel()
t.Run("size error", func(t *testing.T) {
t.Parallel()
tmpfsSizeError := TmpfsSizeError(-1)
want := "tmpfs size -1 out of bounds"
if got := tmpfsSizeError.Error(); got != want {

View File

@ -8,6 +8,8 @@ import (
)
func TestLandlockString(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
rulesetAttr *container.RulesetAttr
@ -46,6 +48,7 @@ func TestLandlockString(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := tc.rulesetAttr.String(); got != tc.want {
t.Errorf("String: %s, want %s", got, tc.want)
}
@ -54,6 +57,7 @@ func TestLandlockString(t *testing.T) {
}
func TestLandlockAttrSize(t *testing.T) {
t.Parallel()
want := 24
if got := unsafe.Sizeof(container.RulesetAttr{}); got != uintptr(want) {
t.Errorf("Sizeof: %d, want %d", got, want)

View File

@ -7,6 +7,7 @@ import (
. "syscall"
"hakurei.app/container/vfs"
"hakurei.app/message"
)
/*
@ -87,7 +88,7 @@ const (
)
// bindMount mounts source on target and recursively applies flags if MS_REC is set.
func (p *procPaths) bindMount(msg Msg, source, target string, flags uintptr) error {
func (p *procPaths) bindMount(msg message.Msg, source, target string, flags uintptr) error {
// syscallDispatcher.bindMount and procPaths.remount must not be called from this function
if err := p.k.mount(source, target, FstypeNULL, MS_SILENT|MS_BIND|flags&MS_REC, zeroString); err != nil {
@ -97,7 +98,7 @@ func (p *procPaths) bindMount(msg Msg, source, target string, flags uintptr) err
}
// remount applies flags on target, recursively if MS_REC is set.
func (p *procPaths) remount(msg Msg, target string, flags uintptr) error {
func (p *procPaths) remount(msg message.Msg, target string, flags uintptr) error {
// syscallDispatcher methods bindMount, remount must not be called from this function
var targetFinal string
@ -159,7 +160,7 @@ func (p *procPaths) remount(msg Msg, target string, flags uintptr) error {
}
// remountWithFlags remounts mount point described by [vfs.MountInfoNode].
func remountWithFlags(k syscallDispatcher, msg Msg, n *vfs.MountInfoNode, mf uintptr) error {
func remountWithFlags(k syscallDispatcher, msg message.Msg, n *vfs.MountInfoNode, mf uintptr) error {
// syscallDispatcher methods bindMount, remount must not be called from this function
kf, unmatched := n.Flags()

View File

@ -10,6 +10,8 @@ import (
)
func TestBindMount(t *testing.T) {
t.Parallel()
checkSimple(t, "bindMount", []simpleTestCase{
{"mount", func(k *kstub) error {
return newProcPaths(k, hostPath).bindMount(nil, "/host/nix", "/sysroot/nix", syscall.MS_RDONLY)
@ -34,6 +36,8 @@ func TestBindMount(t *testing.T) {
}
func TestRemount(t *testing.T) {
t.Parallel()
const sampleMountinfoNix = `254 407 253:0 / /host rw,relatime master:1 - ext4 /dev/disk/by-label/nixos rw
255 254 0:28 / /host/mnt/.ro-cwd ro,noatime master:2 - 9p cwd ro,access=client,msize=16384,trans=virtio
256 254 0:29 / /host/nix/.ro-store rw,relatime master:3 - 9p nix-store rw,cache=f,access=client,msize=16384,trans=virtio
@ -216,6 +220,8 @@ func TestRemount(t *testing.T) {
}
func TestRemountWithFlags(t *testing.T) {
t.Parallel()
checkSimple(t, "remountWithFlags", []simpleTestCase{
{"noop unmatched", func(k *kstub) error {
return remountWithFlags(k, k, &vfs.MountInfoNode{MountInfoEntry: &vfs.MountInfoEntry{VfsOptstr: "rw,relatime,cat"}}, 0)
@ -236,6 +242,8 @@ func TestRemountWithFlags(t *testing.T) {
}
func TestMountTmpfs(t *testing.T) {
t.Parallel()
checkSimple(t, "mountTmpfs", []simpleTestCase{
{"mkdirAll", func(k *kstub) error {
return mountTmpfs(k, "ephemeral", "/sysroot/run/user/1000", 0, 1<<10, 0700)
@ -260,6 +268,8 @@ func TestMountTmpfs(t *testing.T) {
}
func TestParentPerm(t *testing.T) {
t.Parallel()
testCases := []struct {
perm os.FileMode
want os.FileMode
@ -275,6 +285,7 @@ func TestParentPerm(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.perm.String(), func(t *testing.T) {
t.Parallel()
if got := parentPerm(tc.perm); got != tc.want {
t.Errorf("parentPerm: %#o, want %#o", got, tc.want)
}

View File

@ -31,7 +31,7 @@ func Receive(key string, e any, fdp *uintptr) (func() error, error) {
return nil, ErrReceiveEnv
} else {
if fd, err := strconv.Atoi(s); err != nil {
return nil, errors.Unwrap(err)
return nil, optionalErrorUnwrap(err)
} else {
setup = os.NewFile(uintptr(fd), "setup")
if setup == nil {

View File

@ -117,7 +117,7 @@ func Export(fd int, rules []NativeRule, flags ExportFlag) error {
var ret C.int
rulesPinner := new(runtime.Pinner)
var rulesPinner runtime.Pinner
for i := range rules {
rule := &rules[i]
rulesPinner.Pin(rule)
@ -189,6 +189,5 @@ func syscallResolveName(s string) (trap int) {
v := C.CString(s)
trap = int(C.seccomp_syscall_resolve_name(v))
C.free(unsafe.Pointer(v))
return
}

View File

@ -13,6 +13,8 @@ import (
)
func TestExport(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
flags ExportFlag
@ -32,14 +34,15 @@ func TestExport(t *testing.T) {
{"hakurei tty", 0, PresetExt | PresetDenyNS | PresetDenyDevel, false},
}
buf := make([]byte, 8)
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
e := New(Preset(tc.presets, tc.flags), tc.flags)
want := bpfExpected[bpfPreset{tc.flags, tc.presets}]
digest := sha512.New()
if _, err := io.CopyBuffer(digest, e, buf); (err != nil) != tc.wantErr {
if _, err := io.Copy(digest, e); (err != nil) != tc.wantErr {
t.Errorf("Exporter: error = %v, wantErr %v", err, tc.wantErr)
return
}
@ -47,7 +50,7 @@ func TestExport(t *testing.T) {
t.Errorf("Close: error = %v", err)
}
if got := digest.Sum(nil); !slices.Equal(got, want) {
t.Fatalf("Export() hash = %x, want %x",
t.Fatalf("Export: hash = %x, want %x",
got, want)
return
}

View File

@ -21,9 +21,7 @@ Methods of Encoder are not safe for concurrent use.
An Encoder must not be copied after first use.
*/
type Encoder struct {
*exporter
}
type Encoder struct{ *exporter }
func (e *Encoder) Read(p []byte) (n int, err error) {
if err = e.prepare(); err != nil {

View File

@ -10,6 +10,8 @@ import (
)
func TestLibraryError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
sample *seccomp.LibraryError
@ -41,6 +43,8 @@ func TestLibraryError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if errors.Is(tc.sample, tc.compare) != tc.wantIs {
t.Errorf("errors.Is(%#v, %#v) did not return %v",
tc.sample, tc.compare, tc.wantIs)
@ -54,6 +58,8 @@ func TestLibraryError(t *testing.T) {
}
t.Run("invalid", func(t *testing.T) {
t.Parallel()
wantPanic := "invalid libseccomp error"
defer func() {
if r := recover(); r != wantPanic {

View File

@ -5,15 +5,17 @@ import (
)
func TestSyscallResolveName(t *testing.T) {
t.Parallel()
for name, want := range Syscalls() {
t.Run(name, func(t *testing.T) {
t.Parallel()
if got := syscallResolveName(name); got != want {
t.Errorf("syscallResolveName(%q) = %d, want %d",
name, got, want)
t.Errorf("syscallResolveName(%q) = %d, want %d", name, got, want)
}
if got, ok := SyscallResolveName(name); !ok || got != want {
t.Errorf("SyscallResolveName(%q) = %d, want %d",
name, got, want)
t.Errorf("SyscallResolveName(%q) = %d, want %d", name, got, want)
}
})
}

View File

@ -8,13 +8,17 @@ import (
)
func TestCallError(t *testing.T) {
t.Parallel()
t.Run("contains false", func(t *testing.T) {
t.Parallel()
if err := new(stub.Call).Error(true, false, true); !reflect.DeepEqual(err, stub.ErrCheck) {
t.Errorf("Error: %#v, want %#v", err, stub.ErrCheck)
}
})
t.Run("passthrough", func(t *testing.T) {
t.Parallel()
wantErr := stub.UniqueError(0xbabe)
if err := (&stub.Call{Err: wantErr}).Error(true); !reflect.DeepEqual(err, wantErr) {
t.Errorf("Error: %#v, want %#v", err, wantErr)

View File

@ -9,7 +9,10 @@ import (
)
func TestUniqueError(t *testing.T) {
t.Parallel()
t.Run("format", func(t *testing.T) {
t.Parallel()
want := "unique error 2989 injected by the test suite"
if got := stub.UniqueError(0xbad).Error(); got != want {
t.Errorf("Error: %q, want %q", got, want)
@ -17,13 +20,17 @@ func TestUniqueError(t *testing.T) {
})
t.Run("is", func(t *testing.T) {
t.Parallel()
t.Run("type", func(t *testing.T) {
t.Parallel()
if errors.Is(stub.UniqueError(0), syscall.ENOTRECOVERABLE) {
t.Error("Is: unexpected true")
}
})
t.Run("val", func(t *testing.T) {
t.Parallel()
if errors.Is(stub.UniqueError(0), stub.UniqueError(1)) {
t.Error("Is: unexpected true")
}

View File

@ -32,13 +32,20 @@ func (o *overrideTFailNow) Fail() {
}
func TestHandleExit(t *testing.T) {
t.Parallel()
t.Run("exit", func(t *testing.T) {
t.Parallel()
defer stub.HandleExit(t)
panic(stub.PanicExit)
})
t.Run("goexit", func(t *testing.T) {
t.Parallel()
t.Run("FailNow", func(t *testing.T) {
t.Parallel()
ot := &overrideTFailNow{T: t}
defer func() {
if !ot.failNow {
@ -50,6 +57,8 @@ func TestHandleExit(t *testing.T) {
})
t.Run("Fail", func(t *testing.T) {
t.Parallel()
ot := &overrideTFailNow{T: t}
defer func() {
if !ot.fail {
@ -62,11 +71,16 @@ func TestHandleExit(t *testing.T) {
})
t.Run("nil", func(t *testing.T) {
t.Parallel()
defer stub.HandleExit(t)
})
t.Run("passthrough", func(t *testing.T) {
t.Parallel()
t.Run("toplevel", func(t *testing.T) {
t.Parallel()
defer func() {
want := 0xcafebabe
if r := recover(); r != want {
@ -79,6 +93,8 @@ func TestHandleExit(t *testing.T) {
})
t.Run("new", func(t *testing.T) {
t.Parallel()
defer func() {
want := 0xcafe
if r := recover(); r != want {

View File

@ -45,12 +45,17 @@ func New[K any](tb testing.TB, makeK func(s *Stub[K]) K, want Expect) *Stub[K] {
return &Stub[K]{TB: tb, makeK: makeK, want: want, wg: new(sync.WaitGroup)}
}
func (s *Stub[K]) FailNow() { panic(panicFailNow) }
func (s *Stub[K]) Fatal(args ...any) { s.Error(args...); panic(panicFatal) }
func (s *Stub[K]) Fatalf(format string, args ...any) { s.Errorf(format, args...); panic(panicFatalf) }
func (s *Stub[K]) SkipNow() { panic("invalid call to SkipNow") }
func (s *Stub[K]) Skip(...any) { panic("invalid call to Skip") }
func (s *Stub[K]) Skipf(string, ...any) { panic("invalid call to Skipf") }
func (s *Stub[K]) FailNow() { s.Helper(); panic(panicFailNow) }
func (s *Stub[K]) Fatal(args ...any) { s.Helper(); s.Error(args...); panic(panicFatal) }
func (s *Stub[K]) Fatalf(format string, args ...any) {
s.Helper()
s.Errorf(format, args...)
panic(panicFatalf)
}
func (s *Stub[K]) SkipNow() { s.Helper(); panic("invalid call to SkipNow") }
func (s *Stub[K]) Skip(...any) { s.Helper(); panic("invalid call to Skip") }
func (s *Stub[K]) Skipf(string, ...any) { s.Helper(); panic("invalid call to Skipf") }
// New calls f in a new goroutine
func (s *Stub[K]) New(f func(k K)) {

View File

@ -36,49 +36,65 @@ func (t *overrideT) Errorf(format string, args ...any) {
}
func TestStub(t *testing.T) {
t.Parallel()
t.Run("goexit", func(t *testing.T) {
t.Parallel()
t.Run("FailNow", func(t *testing.T) {
t.Parallel()
defer func() {
if r := recover(); r != panicFailNow {
t.Errorf("recover: %v", r)
}
}()
new(stubHolder).FailNow()
stubHolder{&Stub[stubHolder]{TB: t}}.FailNow()
})
t.Run("SkipNow", func(t *testing.T) {
t.Parallel()
defer func() {
want := "invalid call to SkipNow"
if r := recover(); r != want {
t.Errorf("recover: %v, want %v", r, want)
}
}()
new(stubHolder).SkipNow()
stubHolder{&Stub[stubHolder]{TB: t}}.SkipNow()
})
t.Run("Skip", func(t *testing.T) {
t.Parallel()
defer func() {
want := "invalid call to Skip"
if r := recover(); r != want {
t.Errorf("recover: %v, want %v", r, want)
}
}()
new(stubHolder).Skip()
stubHolder{&Stub[stubHolder]{TB: t}}.Skip()
})
t.Run("Skipf", func(t *testing.T) {
t.Parallel()
defer func() {
want := "invalid call to Skipf"
if r := recover(); r != want {
t.Errorf("recover: %v, want %v", r, want)
}
}()
new(stubHolder).Skipf("")
stubHolder{&Stub[stubHolder]{TB: t}}.Skipf("")
})
})
t.Run("new", func(t *testing.T) {
t.Parallel()
t.Run("success", func(t *testing.T) {
t.Parallel()
s := New(t, func(s *Stub[stubHolder]) stubHolder { return stubHolder{s} }, Expect{Calls: []Call{
{"New", ExpectArgs{}, nil, nil},
}, Tracks: []Expect{{Calls: []Call{
@ -112,6 +128,8 @@ func TestStub(t *testing.T) {
})
t.Run("overrun", func(t *testing.T) {
t.Parallel()
ot := &overrideT{T: t}
ot.error.Store(checkError(t, "New: track overrun"))
s := New(ot, func(s *Stub[stubHolder]) stubHolder { return stubHolder{s} }, Expect{Calls: []Call{
@ -135,7 +153,11 @@ func TestStub(t *testing.T) {
})
t.Run("expects", func(t *testing.T) {
t.Parallel()
t.Run("overrun", func(t *testing.T) {
t.Parallel()
ot := &overrideT{T: t}
ot.error.Store(checkError(t, "Expects: advancing beyond expected calls"))
s := New(ot, func(s *Stub[stubHolder]) stubHolder { return stubHolder{s} }, Expect{})
@ -143,7 +165,11 @@ func TestStub(t *testing.T) {
})
t.Run("separator", func(t *testing.T) {
t.Parallel()
t.Run("overrun", func(t *testing.T) {
t.Parallel()
ot := &overrideT{T: t}
ot.errorf.Store(checkErrorf(t, "Expects: func = %s, separator overrun", "meow"))
s := New(ot, func(s *Stub[stubHolder]) stubHolder { return stubHolder{s} }, Expect{Calls: []Call{
@ -153,6 +179,8 @@ func TestStub(t *testing.T) {
})
t.Run("mismatch", func(t *testing.T) {
t.Parallel()
ot := &overrideT{T: t}
ot.errorf.Store(checkErrorf(t, "Expects: separator, want %s", "panic"))
s := New(ot, func(s *Stub[stubHolder]) stubHolder { return stubHolder{s} }, Expect{Calls: []Call{
@ -163,6 +191,8 @@ func TestStub(t *testing.T) {
})
t.Run("mismatch", func(t *testing.T) {
t.Parallel()
ot := &overrideT{T: t}
ot.errorf.Store(checkErrorf(t, "Expects: func = %s, want %s", "meow", "nya"))
s := New(ot, func(s *Stub[stubHolder]) stubHolder { return stubHolder{s} }, Expect{Calls: []Call{
@ -176,6 +206,8 @@ func TestStub(t *testing.T) {
func TestCheckArg(t *testing.T) {
t.Run("oob negative", func(t *testing.T) {
t.Parallel()
defer func() {
want := "invalid call to CheckArg"
if r := recover(); r != want {
@ -191,12 +223,14 @@ func TestCheckArg(t *testing.T) {
{"panic", ExpectArgs{PanicExit}, nil, nil},
{"meow", ExpectArgs{-1}, nil, nil},
}})
t.Run("match", func(t *testing.T) {
s.Expects("panic")
if !CheckArg(s, "v", PanicExit, 0) {
t.Errorf("CheckArg: unexpected false")
}
})
t.Run("mismatch", func(t *testing.T) {
defer HandleExit(t)
s.Expects("meow")
@ -205,6 +239,7 @@ func TestCheckArg(t *testing.T) {
t.Errorf("CheckArg: unexpected true")
}
})
t.Run("oob", func(t *testing.T) {
s.pos++
defer func() {
@ -218,7 +253,11 @@ func TestCheckArg(t *testing.T) {
}
func TestCheckArgReflect(t *testing.T) {
t.Parallel()
t.Run("oob lower", func(t *testing.T) {
t.Parallel()
defer func() {
want := "invalid call to CheckArgReflect"
if r := recover(); r != want {

View File

@ -7,6 +7,7 @@ import (
"sync"
"hakurei.app/container/fhs"
"hakurei.app/message"
)
var (
@ -23,7 +24,7 @@ const (
kernelCapLastCapPath = fhs.ProcSys + "kernel/cap_last_cap"
)
func mustReadSysctl(msg Msg) {
func mustReadSysctl(msg message.Msg) {
sysctlOnce.Do(func() {
if v, err := os.ReadFile(kernelOverflowuidPath); err != nil {
msg.GetLogger().Fatalf("cannot read %q: %v", kernelOverflowuidPath, err)
@ -45,6 +46,6 @@ func mustReadSysctl(msg Msg) {
})
}
func OverflowUid(msg Msg) int { mustReadSysctl(msg); return kernelOverflowuid }
func OverflowGid(msg Msg) int { mustReadSysctl(msg); return kernelOverflowgid }
func LastCap(msg Msg) uintptr { mustReadSysctl(msg); return uintptr(kernelCapLastCap) }
func OverflowUid(msg message.Msg) int { mustReadSysctl(msg); return kernelOverflowuid }
func OverflowGid(msg message.Msg) int { mustReadSysctl(msg); return kernelOverflowgid }
func LastCap(msg message.Msg) uintptr { mustReadSysctl(msg); return uintptr(kernelCapLastCap) }

View File

@ -7,6 +7,8 @@ import (
)
func TestUnmangle(t *testing.T) {
t.Parallel()
testCases := []struct {
want string
sample string
@ -17,6 +19,7 @@ func TestUnmangle(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.want, func(t *testing.T) {
t.Parallel()
got := vfs.Unmangle(tc.sample)
if got != tc.want {
t.Errorf("Unmangle: %q, want %q",

View File

@ -17,6 +17,8 @@ import (
)
func TestDecoderError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err *vfs.DecoderError
@ -35,13 +37,17 @@ func TestDecoderError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("error", func(t *testing.T) {
t.Parallel()
if got := tc.err.Error(); got != tc.want {
t.Errorf("Error: %s, want %s", got, tc.want)
}
})
t.Run("is", func(t *testing.T) {
t.Parallel()
if !errors.Is(tc.err, tc.target) {
t.Errorf("Is: unexpected false")
}
@ -54,6 +60,8 @@ func TestDecoderError(t *testing.T) {
}
func TestMountInfo(t *testing.T) {
t.Parallel()
testCases := []mountInfoTest{
{"count", sampleMountinfoBase + `
21 20 0:53/ /mnt/test rw,relatime - tmpfs rw
@ -187,7 +195,10 @@ id 20 0:53 / /mnt/test rw,relatime shared:212 - tmpfs rw
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("decode", func(t *testing.T) {
t.Parallel()
var got *vfs.MountInfo
d := vfs.NewMountInfoDecoder(strings.NewReader(tc.sample))
err := d.Decode(&got)
@ -206,6 +217,7 @@ id 20 0:53 / /mnt/test rw,relatime shared:212 - tmpfs rw
})
t.Run("iter", func(t *testing.T) {
t.Parallel()
d := vfs.NewMountInfoDecoder(strings.NewReader(tc.sample))
tc.check(t, d, "Entries",
d.Entries(), d.Err)
@ -217,6 +229,7 @@ id 20 0:53 / /mnt/test rw,relatime shared:212 - tmpfs rw
})
t.Run("yield", func(t *testing.T) {
t.Parallel()
d := vfs.NewMountInfoDecoder(strings.NewReader(tc.sample))
v := false
d.Entries()(func(entry *vfs.MountInfoEntry) bool { v = !v; return v })

View File

@ -10,6 +10,8 @@ import (
)
func TestUnfold(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
sample string
@ -50,6 +52,8 @@ func TestUnfold(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
d := vfs.NewMountInfoDecoder(strings.NewReader(tc.sample))
got, err := d.Unfold(tc.target)

View File

@ -11,27 +11,29 @@ import (
)
func TestArgsString(t *testing.T) {
t.Parallel()
wantString := strings.Join(wantArgs, " ")
if got := argsWt.(fmt.Stringer).String(); got != wantString {
t.Errorf("String: %q, want %q",
got, wantString)
t.Errorf("String: %q, want %q", got, wantString)
}
}
func TestNewCheckedArgs(t *testing.T) {
t.Parallel()
args := []string{"\x00"}
if _, err := helper.NewCheckedArgs(args); !errors.Is(err, syscall.EINVAL) {
t.Errorf("NewCheckedArgs: error = %v, wantErr %v",
err, syscall.EINVAL)
t.Errorf("NewCheckedArgs: error = %v, wantErr %v", err, syscall.EINVAL)
}
t.Run("must panic", func(t *testing.T) {
t.Parallel()
badPayload := []string{"\x00"}
defer func() {
wantPanic := "invalid argument"
if r := recover(); r != wantPanic {
t.Errorf("MustNewCheckedArgs: panic = %v, wantPanic %v",
r, wantPanic)
t.Errorf("MustNewCheckedArgs: panic = %v, wantPanic %v", r, wantPanic)
}
}()
helper.MustNewCheckedArgs(badPayload)

View File

@ -12,12 +12,13 @@ import (
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/helper/proc"
"hakurei.app/message"
)
// New initialises a Helper instance with wt as the null-terminated argument writer.
func New(
ctx context.Context,
msg container.Msg,
msg message.Msg,
pathname *check.Absolute, name string,
wt io.WriterTo,
stat bool,

View File

@ -68,15 +68,8 @@ func genericStub(argsFile, statFile *os.File) {
}
}
// simulate status pipe behaviour
if statFile != nil {
if _, err := statFile.Write([]byte{'x'}); err != nil {
panic("cannot write to status pipe: " + err.Error())
}
done := make(chan struct{})
go func() {
// wait for status pipe close
// simulate status pipe behaviour
var epoll int
if fd, err := syscall.EpollCreate1(0); err != nil {
panic("cannot open epoll fd: " + err.Error())
@ -91,6 +84,12 @@ func genericStub(argsFile, statFile *os.File) {
if err := syscall.EpollCtl(epoll, syscall.EPOLL_CTL_ADD, int(statFile.Fd()), &syscall.EpollEvent{}); err != nil {
panic("cannot add status pipe to epoll: " + err.Error())
}
if _, err := statFile.Write([]byte{'x'}); err != nil {
panic("cannot write to status pipe: " + err.Error())
}
// wait for status pipe close
events := make([]syscall.EpollEvent, 1)
if _, err := syscall.EpollWait(epoll, events, -1); err != nil {
panic("cannot poll status pipe: " + err.Error())
@ -99,8 +98,5 @@ func genericStub(argsFile, statFile *os.File) {
panic(strconv.Itoa(int(events[0].Events)))
}
close(done)
}()
<-done
}
}

View File

@ -8,8 +8,4 @@ import (
"hakurei.app/helper"
)
func TestMain(m *testing.M) {
container.TryArgv0(nil)
helper.InternalHelperStub()
os.Exit(m.Run())
}
func TestMain(m *testing.M) { container.TryArgv0(nil); helper.InternalHelperStub(); os.Exit(m.Run()) }

View File

@ -8,9 +8,11 @@ import (
"hakurei.app/container/check"
)
const Tmp = "/.hakurei"
// PrivateTmp is a private writable path in a hakurei container.
const PrivateTmp = "/.hakurei"
var AbsTmp = check.MustAbs(Tmp)
// AbsPrivateTmp is a [check.Absolute] representation of [PrivateTmp].
var AbsPrivateTmp = check.MustAbs(PrivateTmp)
const (
// WaitDelayDefault is used when WaitDelay has its zero value.
@ -57,18 +59,18 @@ type (
// Init user namespace supplementary groups inherited by all container processes.
Groups []string `json:"groups"`
// High level configuration applied to the underlying [container.Params].
// High level configuration applied to the underlying [container].
Container *ContainerConfig `json:"container"`
}
// ContainerConfig describes the container configuration to be applied to an underlying [container.Params].
// ContainerConfig describes the container configuration to be applied to an underlying [container].
ContainerConfig struct {
// Container UTS namespace hostname.
Hostname string `json:"hostname,omitempty"`
// Duration in nanoseconds to wait for after interrupting the initial process.
// Defaults to [WaitDelayDefault] if less than or equals to zero,
// or [WaitDelayMax] if greater than [WaitDelayMax].
// Defaults to [WaitDelayDefault] if zero, or [WaitDelayMax] if greater than [WaitDelayMax].
// Values less than zero are equivalent to zero, bypassing [WaitDelayDefault].
WaitDelay time.Duration `json:"wait_delay,omitempty"`
// Emit Flatpak-compatible seccomp filter programs.

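For reference, a sketch of the clamping rules the WaitDelay comment above describes; effectiveWaitDelay is a hypothetical helper illustrating the documented behaviour, not the actual implementation:

package hst_test

import (
	"time"

	"hakurei.app/hst"
)

func effectiveWaitDelay(d time.Duration) time.Duration {
	switch {
	case d < 0:
		return 0 // negative values behave as zero, bypassing WaitDelayDefault
	case d == 0:
		return hst.WaitDelayDefault
	case d > hst.WaitDelayMax:
		return hst.WaitDelayMax
	default:
		return d
	}
}
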
View File

@ -9,6 +9,8 @@ import (
)
func TestConfigValidate(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
config *hst.Config
@ -45,6 +47,7 @@ func TestConfigValidate(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if err := tc.config.Validate(); !reflect.DeepEqual(err, tc.wantErr) {
t.Errorf("Validate: error = %#v, want %#v", err, tc.wantErr)
}
@ -53,6 +56,8 @@ func TestConfigValidate(t *testing.T) {
}
func TestExtraPermConfig(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
config *hst.ExtraPermConfig
@ -62,8 +67,8 @@ func TestExtraPermConfig(t *testing.T) {
{"nil path", &hst.ExtraPermConfig{Path: nil}, "<invalid>"},
{"r", &hst.ExtraPermConfig{Path: fhs.AbsRoot, Read: true}, "r--:/"},
{"r+", &hst.ExtraPermConfig{Ensure: true, Path: fhs.AbsRoot, Read: true}, "r--+:/"},
{"w", &hst.ExtraPermConfig{Path: hst.AbsTmp, Write: true}, "-w-:/.hakurei"},
{"w+", &hst.ExtraPermConfig{Ensure: true, Path: hst.AbsTmp, Write: true}, "-w-+:/.hakurei"},
{"w", &hst.ExtraPermConfig{Path: hst.AbsPrivateTmp, Write: true}, "-w-:/.hakurei"},
{"w+", &hst.ExtraPermConfig{Ensure: true, Path: hst.AbsPrivateTmp, Write: true}, "-w-+:/.hakurei"},
{"x", &hst.ExtraPermConfig{Path: fhs.AbsRunUser, Execute: true}, "--x:/run/user/"},
{"x+", &hst.ExtraPermConfig{Ensure: true, Path: fhs.AbsRunUser, Execute: true}, "--x+:/run/user/"},
{"rwx", &hst.ExtraPermConfig{Path: fhs.AbsTmp, Read: true, Write: true, Execute: true}, "rwx:/tmp/"},
@ -72,6 +77,7 @@ func TestExtraPermConfig(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := tc.config.String(); got != tc.want {
t.Errorf("String: %q, want %q", got, tc.want)
}

View File

@ -5,11 +5,13 @@ import (
"slices"
"testing"
"hakurei.app/container"
"hakurei.app/hst"
"hakurei.app/message"
)
func TestBadInterfaceError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err error
@ -23,19 +25,22 @@ func TestBadInterfaceError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if gotError := tc.err.Error(); gotError != tc.want {
t.Errorf("Error: %s, want %s", gotError, tc.want)
}
if gotMessage, ok := container.GetErrorMessage(tc.err); !ok {
t.Error("GetErrorMessage: ok = false")
if gotMessage, ok := message.GetMessage(tc.err); !ok {
t.Error("GetMessage: ok = false")
} else if gotMessage != tc.want {
t.Errorf("GetErrorMessage: %s, want %s", gotMessage, tc.want)
t.Errorf("GetMessage: %s, want %s", gotMessage, tc.want)
}
})
}
}
func TestBusConfigInterfaces(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
c *hst.BusConfig
@ -63,6 +68,7 @@ func TestBusConfigInterfaces(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var got []string
if tc.cutoff > 0 {
var i int
@ -86,6 +92,8 @@ func TestBusConfigInterfaces(t *testing.T) {
}
func TestBusConfigCheckInterfaces(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
c *hst.BusConfig
@ -101,6 +109,7 @@ func TestBusConfigCheckInterfaces(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if err := tc.c.CheckInterfaces(tc.name); !reflect.DeepEqual(err, tc.err) {
t.Errorf("CheckInterfaces: error = %#v, want %#v", err, tc.err)
}

View File

@ -11,14 +11,20 @@ import (
type Enablement byte
const (
// EWayland exposes a wayland pathname socket via security-context-v1.
EWayland Enablement = 1 << iota
// EX11 adds the target user via X11 ChangeHosts and exposes the X11 pathname socket.
EX11
// EDBus enables the per-container xdg-dbus-proxy daemon.
EDBus
// EPulse copies the PulseAudio cookie to [hst.PrivateTmp] and exposes the PulseAudio socket.
EPulse
// EM is a noop.
EM
)
// String returns a string representation of the flags set on [Enablement].
func (e Enablement) String() string {
switch e {
case 0:
@ -51,17 +57,17 @@ func (e Enablement) String() string {
// NewEnablements returns the address of [Enablement] as [Enablements].
func NewEnablements(e Enablement) *Enablements { return (*Enablements)(&e) }
// enablementsJSON is the [json] representation of the [Enablement] bit field.
type enablementsJSON struct {
// Enablements is the [json] adapter for [Enablement].
type Enablements Enablement
// enablementsJSON is the [json] representation of [Enablements].
type enablementsJSON = struct {
Wayland bool `json:"wayland,omitempty"`
X11 bool `json:"x11,omitempty"`
DBus bool `json:"dbus,omitempty"`
Pulse bool `json:"pulse,omitempty"`
}
// Enablements is the [json] adapter for [Enablement].
type Enablements Enablement
// Unwrap returns the underlying [Enablement].
func (e *Enablements) Unwrap() Enablement {
if e == nil {

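Given the field names in enablementsJSON above, a hedged sketch of how the adapter should round-trip through encoding/json; the expected output assumes omitempty drops unset flags, as the struct tags suggest:

package hst_test

import (
	"encoding/json"
	"fmt"

	"hakurei.app/hst"
)

func exampleEnablements() {
	e := hst.NewEnablements(hst.EWayland | hst.EDBus)
	data, err := json.Marshal(e)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // expected: {"wayland":true,"dbus":true}
}
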
View File

@ -10,6 +10,8 @@ import (
)
func TestEnablementString(t *testing.T) {
t.Parallel()
testCases := []struct {
flags hst.Enablement
want string
@ -38,6 +40,7 @@ func TestEnablementString(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.want, func(t *testing.T) {
t.Parallel()
if got := tc.flags.String(); got != tc.want {
t.Errorf("String: %q, want %q", got, tc.want)
}
@ -46,6 +49,8 @@ func TestEnablementString(t *testing.T) {
}
func TestEnablements(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
e *hst.Enablements
@ -63,7 +68,10 @@ func TestEnablements(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("marshal", func(t *testing.T) {
t.Parallel()
if got, err := json.Marshal(tc.e); err != nil {
t.Fatalf("Marshal: error = %v", err)
} else if string(got) != tc.data {
@ -81,6 +89,8 @@ func TestEnablements(t *testing.T) {
})
t.Run("unmarshal", func(t *testing.T) {
t.Parallel()
{
got := new(hst.Enablements)
if err := json.Unmarshal([]byte(tc.data), &got); err != nil {
@ -116,6 +126,8 @@ func TestEnablements(t *testing.T) {
}
t.Run("unwrap", func(t *testing.T) {
t.Parallel()
t.Run("nil", func(t *testing.T) {
if got := (*hst.Enablements)(nil).Unwrap(); got != 0 {
t.Errorf("Unwrap: %v", got)
@ -130,6 +142,8 @@ func TestEnablements(t *testing.T) {
})
t.Run("passthrough", func(t *testing.T) {
t.Parallel()
if _, err := (*hst.Enablements)(nil).MarshalJSON(); !errors.Is(err, syscall.EINVAL) {
t.Errorf("MarshalJSON: error = %v", err)
}

View File

@ -47,17 +47,16 @@ type Ops interface {
Etc(host *check.Absolute, prefix string) Ops
}
// ApplyState holds the address of [container.Ops] and any relevant application state.
// ApplyState holds the address of [Ops] and any relevant application state.
type ApplyState struct {
// AutoEtcPrefix is the prefix for [container.AutoEtcOp].
// AutoEtcPrefix is the prefix used by [FSBind] in the autoetc [FSBind.Special] condition.
AutoEtcPrefix string
Ops
}
var (
ErrFSNull = errors.New("unexpected null in mount point")
)
// ErrFSNull is returned by [json] on encountering a null [FilesystemConfig] value.
var ErrFSNull = errors.New("unexpected null in mount point")
// FSTypeError is returned when [ContainerConfig.Filesystem] contains an entry with invalid type.
type FSTypeError string
@ -90,7 +89,7 @@ func (f *FilesystemConfigJSON) Valid() bool {
return f != nil && f.FilesystemConfig != nil && f.FilesystemConfig.Valid()
}
// fsType holds the string representation of a [FilesystemConfig]'s concrete type.
// fsType holds the string representation of the concrete type of [FilesystemConfig].
type fsType struct {
Type string `json:"type"`
}
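
The fsType probe above is presumably what lets the adapter select a concrete [FilesystemConfig] implementation from the "type" field. A sketch of decoding a bind entry under that assumption, reusing the json tags shown for FSBind further down in this change:

package hst_test

import (
	"encoding/json"

	"hakurei.app/hst"
)

func decodeBind() (*hst.FilesystemConfigJSON, error) {
	const sample = `{"type":"bind","src":"/etc","optional":true}`
	fs := new(hst.FilesystemConfigJSON)
	if err := json.Unmarshal([]byte(sample), fs); err != nil {
		return nil, err
	}
	// If dispatch behaves as described, fs.Valid() reports true and
	// fs.FilesystemConfig holds a *hst.FSBind value here.
	return fs, nil
}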

View File

@ -15,6 +15,8 @@ import (
)
func TestFilesystemConfigJSON(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
want hst.FilesystemConfigJSON
@ -86,7 +88,10 @@ func TestFilesystemConfigJSON(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("marshal", func(t *testing.T) {
t.Parallel()
wantErr := tc.wantErr
if errors.As(wantErr, new(hst.FSTypeError)) {
// for unsupported implementation tc
@ -122,6 +127,7 @@ func TestFilesystemConfigJSON(t *testing.T) {
})
t.Run("unmarshal", func(t *testing.T) {
t.Parallel()
if tc.data == "\x00" && tc.sData == "\x00" {
if errors.As(tc.wantErr, new(hst.FSImplError)) {
// this error is only returned on marshal
@ -163,6 +169,8 @@ func TestFilesystemConfigJSON(t *testing.T) {
}
t.Run("valid", func(t *testing.T) {
t.Parallel()
if got := (*hst.FilesystemConfigJSON).Valid(nil); got {
t.Errorf("Valid: %v, want false", got)
}
@ -177,6 +185,7 @@ func TestFilesystemConfigJSON(t *testing.T) {
})
t.Run("passthrough", func(t *testing.T) {
t.Parallel()
if err := new(hst.FilesystemConfigJSON).UnmarshalJSON(make([]byte, 0)); err == nil {
t.Errorf("UnmarshalJSON: error = %v", err)
}
@ -184,7 +193,10 @@ func TestFilesystemConfigJSON(t *testing.T) {
}
func TestFSErrors(t *testing.T) {
t.Parallel()
t.Run("type", func(t *testing.T) {
t.Parallel()
want := `invalid filesystem type "cat"`
if got := hst.FSTypeError("cat").Error(); got != want {
t.Errorf("Error: %q, want %q", got, want)
@ -192,6 +204,8 @@ func TestFSErrors(t *testing.T) {
})
t.Run("impl", func(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
val hst.FilesystemConfig
@ -205,6 +219,7 @@ func TestFSErrors(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
err := hst.FSImplError{Value: tc.val}
if got := err.Error(); got != tc.want {
t.Errorf("Error: %q, want %q", got, tc.want)
@ -242,13 +257,17 @@ type fsTestCase struct {
func checkFs(t *testing.T, testCases []fsTestCase) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("valid", func(t *testing.T) {
t.Parallel()
if got := tc.fs.Valid(); got != tc.valid {
t.Errorf("Valid: %v, want %v", got, tc.valid)
}
})
t.Run("ops", func(t *testing.T) {
t.Parallel()
ops := new(container.Ops)
tc.fs.Apply(&hst.ApplyState{AutoEtcPrefix: ":3", Ops: opsAdapter{ops}})
if !reflect.DeepEqual(ops, &tc.ops) {
@ -265,18 +284,21 @@ func checkFs(t *testing.T, testCases []fsTestCase) {
})
t.Run("path", func(t *testing.T) {
t.Parallel()
if got := tc.fs.Path(); !reflect.DeepEqual(got, tc.path) {
t.Errorf("Target: %q, want %q", got, tc.path)
}
})
t.Run("host", func(t *testing.T) {
t.Parallel()
if got := tc.fs.Host(); !reflect.DeepEqual(got, tc.host) {
t.Errorf("Host: %q, want %q", got, tc.host)
}
})
t.Run("string", func(t *testing.T) {
t.Parallel()
if tc.str == "\x00" {
return
}

View File

@ -16,22 +16,23 @@ const FilesystemBind = "bind"
// FSBind represents a host to container bind mount.
type FSBind struct {
// mount point in container, same as Source if empty
// Pathname in the container mount namespace. Same as Source if nil.
Target *check.Absolute `json:"dst,omitempty"`
// host filesystem path to make available to the container
// Pathname in the init mount namespace. Must not be nil.
Source *check.Absolute `json:"src"`
// do not mount Target read-only
// Do not remount Target read-only.
// This has no effect if Source is mounted read-only in the init mount namespace.
Write bool `json:"write,omitempty"`
// do not disable device files on Target, implies Write
// Allow access to devices (special files) on Target, implies Write.
Device bool `json:"dev,omitempty"`
// create Source as a directory if it does not exist
// Create Source as a directory in the init mount namespace if it does not exist.
Ensure bool `json:"ensure,omitempty"`
// skip this mount point if Source does not exist
// Silently skip this mount point if Source does not exist in the init mount namespace.
Optional bool `json:"optional,omitempty"`
// enable special behaviour:
// for autoroot, Target must be set to [fhs.AbsRoot];
// for autoetc, Target must be set to [fhs.AbsEtc]
/* Enable special behaviour:
For autoroot: Target must be [fhs.Root].
For autoetc: Target must be [fhs.Etc]. */
Special bool `json:"special,omitempty"`
}
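
A sketch of the special behaviour spelled out in the comments above: an autoroot bind whose Target is the container root. The Source path here is purely illustrative:

package hst_test

import (
	"hakurei.app/container/check"
	"hakurei.app/container/fhs"
	"hakurei.app/hst"
)

// autoroot binds a host directory over the container root with special handling enabled.
var autoroot = &hst.FSBind{
	Target:  fhs.AbsRoot,                               // autoroot requires Target to be the root
	Source:  check.MustAbs("/var/lib/hakurei/root/u0"), // hypothetical host path
	Write:   true,
	Special: true,
}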

View File

@ -9,6 +9,8 @@ import (
)
func TestFSBind(t *testing.T) {
t.Parallel()
checkFs(t, []fsTestCase{
{"nil", (*hst.FSBind)(nil), false, nil, nil, nil, "<invalid>"},
{"ensure optional", &hst.FSBind{Source: m("/"), Ensure: true, Optional: true},

View File

@ -15,13 +15,13 @@ const FilesystemEphemeral = "ephemeral"
// FSEphemeral represents an ephemeral container mount point.
type FSEphemeral struct {
// mount point in container
Target *check.Absolute `json:"dst,omitempty"`
// do not mount filesystem read-only
// Pathname in the container mount namespace.
Target *check.Absolute `json:"dst"`
// Do not mount filesystem read-only.
Write bool `json:"write,omitempty"`
// upper limit on the size of the filesystem
// Upper limit on the size of the filesystem.
Size int `json:"size,omitempty"`
// initial permission bits of the new filesystem
// Initial permission bits of the new filesystem.
Perm os.FileMode `json:"perm,omitempty"`
}

View File

@ -9,6 +9,8 @@ import (
)
func TestFSEphemeral(t *testing.T) {
t.Parallel()
checkFs(t, []fsTestCase{
{"nil", (*hst.FSEphemeral)(nil), false, nil, nil, nil, "<invalid>"},
@ -36,15 +38,15 @@ func TestFSEphemeral(t *testing.T) {
"+ephemeral(-rwxr-xr-x):/run/nscd"},
{"negative size", &hst.FSEphemeral{
Target: hst.AbsTmp,
Target: hst.AbsPrivateTmp,
Write: true,
Size: -1,
}, true, container.Ops{&container.MountTmpfsOp{
FSName: "ephemeral",
Path: hst.AbsTmp,
Path: hst.AbsPrivateTmp,
Flags: syscall.MS_NOSUID | syscall.MS_NODEV,
Perm: 0755,
}}, hst.AbsTmp, nil,
}}, hst.AbsPrivateTmp, nil,
"w+ephemeral(-rwxr-xr-x):/.hakurei"},
})
}

View File

@ -14,11 +14,11 @@ const FilesystemLink = "link"
// FSLink represents a symlink in the container filesystem.
type FSLink struct {
// link path in container
// Pathname in the container mount namespace.
Target *check.Absolute `json:"dst"`
// linkname the symlink points to
// Arbitrary linkname value stored in the symlink.
Linkname string `json:"linkname"`
// whether to dereference linkname before creating the link
// Whether to treat Linkname as an absolute pathname and dereference before creating the link.
Dereference bool `json:"dereference,omitempty"`
}

View File

@ -8,6 +8,8 @@ import (
)
func TestFSLink(t *testing.T) {
t.Parallel()
checkFs(t, []fsTestCase{
{"nil", (*hst.FSLink)(nil), false, nil, nil, nil, "<invalid>"},
{"zero", new(hst.FSLink), false, nil, nil, nil, "<invalid>"},

View File

@ -14,14 +14,14 @@ const FilesystemOverlay = "overlay"
// FSOverlay represents an overlay mount point.
type FSOverlay struct {
// mount point in container
// Pathname in the container mount namespace.
Target *check.Absolute `json:"dst"`
// any filesystem, does not need to be on a writable filesystem, must not be nil
// Any filesystem; it does not need to be on a writable filesystem, and must not be nil.
Lower []*check.Absolute `json:"lower"`
// the upperdir is normally on a writable filesystem, leave as nil to mount Lower readonly
// The upperdir is normally on a writable filesystem, leave as nil to mount Lower readonly.
Upper *check.Absolute `json:"upper,omitempty"`
// the workdir needs to be an empty directory on the same filesystem as Upper, must not be nil if Upper is populated
// The workdir needs to be an empty directory on the same filesystem as Upper, must not be nil if Upper is populated.
Work *check.Absolute `json:"work,omitempty"`
}
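
Following the comments above, leaving Upper and Work nil should mount the lower layers read-only. A minimal sketch with illustrative paths:

package hst_test

import (
	"hakurei.app/container/check"
	"hakurei.app/hst"
)

// roStore overlays a read-only store directory; Upper and Work stay nil,
// so per the comment above the Lower layers are mounted readonly.
var roStore = &hst.FSOverlay{
	Target: check.MustAbs("/nix/store"),
	Lower:  []*check.Absolute{check.MustAbs("/mnt-root/nix/.ro-store")},
}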

View File

@ -9,6 +9,8 @@ import (
)
func TestFSOverlay(t *testing.T) {
t.Parallel()
checkFs(t, []fsTestCase{
{"nil", (*hst.FSOverlay)(nil), false, nil, nil, nil, "<invalid>"},
{"nil lower", &hst.FSOverlay{Target: m("/etc"), Lower: []*check.Absolute{nil}}, false, nil, nil, nil, "<invalid>"},

View File

@ -12,9 +12,12 @@ import (
// An AppError is returned while starting an app according to [hst.Config].
type AppError struct {
Step string
// A user-facing description of where the error occurred.
Step string `json:"step"`
// The underlying error value.
Err error
Msg string
// An arbitrary error message, overriding the return value of Message if not empty.
Msg string `json:"message,omitempty"`
}
func (e *AppError) Error() string { return e.Err.Error() }
@ -38,13 +41,13 @@ func (e *AppError) Message() string {
// Paths contains environment-dependent paths used by hakurei.
type Paths struct {
// temporary directory returned by [os.TempDir] (usually `/tmp`)
// Temporary directory returned by [os.TempDir], usually equivalent to [fhs.AbsTmp].
TempDir *check.Absolute `json:"temp_dir"`
// path to shared directory (usually `/tmp/hakurei.%d`, [Info.User])
// Shared directory specific to the hsu userid, usually (`/tmp/hakurei.%d`, [Info.User]).
SharePath *check.Absolute `json:"share_path"`
// XDG_RUNTIME_DIR value (usually `/run/user/%d`, uid)
// Checked XDG_RUNTIME_DIR value, usually (`/run/user/%d`, uid).
RuntimePath *check.Absolute `json:"runtime_path"`
// application runtime directory (usually `/run/user/%d/hakurei`)
// Shared directory specific to the hsu userid located in RuntimePath, usually (`/run/user/%d/hakurei`, uid).
RunDirPath *check.Absolute `json:"run_dir_path"`
}
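
Putting the "usually" hints in the comments above together, a Paths value for uid 1000 and hsu userid 0 would presumably look like this (values are illustrative only):

package hst_test

import (
	"hakurei.app/container/check"
	"hakurei.app/container/fhs"
	"hakurei.app/hst"
)

var examplePaths = hst.Paths{
	TempDir:     fhs.AbsTmp,                              // os.TempDir
	SharePath:   check.MustAbs("/tmp/hakurei.0"),         // /tmp/hakurei.%d with Info.User = 0
	RuntimePath: check.MustAbs("/run/user/1000"),         // checked XDG_RUNTIME_DIR
	RunDirPath:  check.MustAbs("/run/user/1000/hakurei"), // RuntimePath/hakurei
}
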
@ -117,11 +120,10 @@ func Template() *Config {
{&FSEphemeral{Target: fhs.AbsTmp, Write: true, Perm: 0755}},
{&FSOverlay{
Target: check.MustAbs("/nix/store"),
Lower: []*check.Absolute{check.MustAbs("/mnt-root/nix/.ro-store")},
Upper: check.MustAbs("/mnt-root/nix/.rw-store/upper"),
Work: check.MustAbs("/mnt-root/nix/.rw-store/work"),
Lower: []*check.Absolute{fhs.AbsVarLib.Append("hakurei/base/org.nixos/ro-store")},
Upper: fhs.AbsVarLib.Append("hakurei/nix/u0/org.chromium.Chromium/rw-store/upper"),
Work: fhs.AbsVarLib.Append("hakurei/nix/u0/org.chromium.Chromium/rw-store/work"),
}},
{&FSBind{Source: check.MustAbs("/nix/store")}},
{&FSLink{Target: fhs.AbsRun.Append("current-system"), Linkname: "/run/current-system", Dereference: true}},
{&FSLink{Target: fhs.AbsRun.Append("opengl-driver"), Linkname: "/run/opengl-driver", Dereference: true}},
{&FSBind{Source: fhs.AbsVarLib.Append("hakurei/u0/org.chromium.Chromium"),

View File

@ -8,12 +8,14 @@ import (
"syscall"
"testing"
"hakurei.app/container"
"hakurei.app/container/stub"
"hakurei.app/hst"
"hakurei.app/message"
)
func TestAppError(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
err error
@ -58,26 +60,31 @@ func TestAppError(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
t.Run("error", func(t *testing.T) {
t.Parallel()
if got := tc.err.Error(); got != tc.s {
t.Errorf("Error: %s, want %s", got, tc.s)
}
})
t.Run("message", func(t *testing.T) {
gotMessage, gotMessageOk := container.GetErrorMessage(tc.err)
t.Parallel()
gotMessage, gotMessageOk := message.GetMessage(tc.err)
if want := tc.message != "\x00"; gotMessageOk != want {
t.Errorf("GetErrorMessage: ok = %v, want %v", gotMessage, want)
t.Errorf("GetMessage: ok = %v, want %v", gotMessage, want)
}
if gotMessageOk {
if gotMessage != tc.message {
t.Errorf("GetErrorMessage: %s, want %s", gotMessage, tc.message)
t.Errorf("GetMessage: %s, want %s", gotMessage, tc.message)
}
}
})
t.Run("is", func(t *testing.T) {
t.Parallel()
if !errors.Is(tc.err, tc.is) {
t.Errorf("Is: unexpected false for %v", tc.is)
}
@ -90,6 +97,8 @@ func TestAppError(t *testing.T) {
}
func TestTemplate(t *testing.T) {
t.Parallel()
const want = `{
"id": "org.chromium.Chromium",
"enablements": {
@ -193,14 +202,10 @@ func TestTemplate(t *testing.T) {
"type": "overlay",
"dst": "/nix/store",
"lower": [
"/mnt-root/nix/.ro-store"
"/var/lib/hakurei/base/org.nixos/ro-store"
],
"upper": "/mnt-root/nix/.rw-store/upper",
"work": "/mnt-root/nix/.rw-store/work"
},
{
"type": "bind",
"src": "/nix/store"
"upper": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper",
"work": "/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work"
},
{
"type": "link",

View File

@ -6,13 +6,13 @@ import (
"log"
"os"
"hakurei.app/container"
"hakurei.app/hst"
"hakurei.app/internal/app/state"
"hakurei.app/message"
)
// Main runs an app according to [hst.Config] and terminates. Main does not return.
func Main(ctx context.Context, msg container.Msg, config *hst.Config) {
func Main(ctx context.Context, msg message.Msg, config *hst.Config) {
var id state.ID
if err := state.NewAppID(&id); err != nil {
log.Fatal(err)

View File

@ -9,7 +9,6 @@ import (
"io"
"io/fs"
"log"
"maps"
"os/exec"
"os/user"
"reflect"
@ -23,12 +22,14 @@ import (
"hakurei.app/container/fhs"
"hakurei.app/hst"
"hakurei.app/internal/app/state"
"hakurei.app/message"
"hakurei.app/system"
"hakurei.app/system/acl"
)
func TestApp(t *testing.T) {
msg := container.NewMsg(nil)
t.Parallel()
msg := message.NewMsg(nil)
msg.SwapVerbose(testing.Verbose())
testCases := []struct {
@ -98,7 +99,7 @@ func TestApp(t *testing.T) {
Ops: new(container.Ops).
Root(m("/"), bits.BindWritable).
Proc(m("/proc/")).
Tmpfs(hst.AbsTmp, 4096, 0755).
Tmpfs(hst.AbsPrivateTmp, 4096, 0755).
DevWritable(m("/dev/"), true).
Tmpfs(m("/dev/shm"), 0, 01777).
Bind(m("/dev/kvm"), m("/dev/kvm"), bits.BindWritable|bits.BindDevice|bits.BindOptional).
@ -250,9 +251,9 @@ func TestApp(t *testing.T) {
Args: []string{"zsh", "-c", "exec chromium "},
Env: []string{
"DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/65534/bus",
"DBUS_SYSTEM_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket",
"DBUS_SYSTEM_BUS_ADDRESS=unix:path=/var/run/dbus/system_bus_socket",
"HOME=/home/chronos",
"PULSE_COOKIE=" + hst.Tmp + "/pulse-cookie",
"PULSE_COOKIE=" + hst.PrivateTmp + "/pulse-cookie",
"PULSE_SERVER=unix:/run/user/65534/pulse/native",
"SHELL=/run/current-system/sw/bin/zsh",
"TERM=xterm-256color",
@ -265,7 +266,7 @@ func TestApp(t *testing.T) {
Ops: new(container.Ops).
Root(m("/"), bits.BindWritable).
Proc(m("/proc/")).
Tmpfs(hst.AbsTmp, 4096, 0755).
Tmpfs(hst.AbsPrivateTmp, 4096, 0755).
DevWritable(m("/dev/"), true).
Tmpfs(m("/dev/shm"), 0, 01777).
Bind(m("/dev/dri"), m("/dev/dri"), bits.BindWritable|bits.BindDevice|bits.BindOptional).
@ -282,9 +283,9 @@ func TestApp(t *testing.T) {
Place(m("/etc/group"), []byte("hakurei:x:65534:\n")).
Bind(m("/tmp/hakurei.0/ebf083d1b175911782d413369b64ce7c/wayland"), m("/run/user/65534/wayland-0"), 0).
Bind(m("/run/user/1971/hakurei/ebf083d1b175911782d413369b64ce7c/pulse"), m("/run/user/65534/pulse/native"), 0).
Place(m(hst.Tmp+"/pulse-cookie"), bytes.Repeat([]byte{0}, pulseCookieSizeMax)).
Place(m(hst.PrivateTmp+"/pulse-cookie"), bytes.Repeat([]byte{0}, pulseCookieSizeMax)).
Bind(m("/tmp/hakurei.0/ebf083d1b175911782d413369b64ce7c/bus"), m("/run/user/65534/bus"), 0).
Bind(m("/tmp/hakurei.0/ebf083d1b175911782d413369b64ce7c/system_bus_socket"), m("/run/dbus/system_bus_socket"), 0).
Bind(m("/tmp/hakurei.0/ebf083d1b175911782d413369b64ce7c/system_bus_socket"), m("/var/run/dbus/system_bus_socket"), 0).
Remount(m("/"), syscall.MS_RDONLY),
SeccompPresets: bits.PresetExt | bits.PresetDenyDevel,
HostNet: true,
@ -394,9 +395,9 @@ func TestApp(t *testing.T) {
Args: []string{"/nix/store/yqivzpzzn7z5x0lq9hmbzygh45d8rhqd-chromium-start"},
Env: []string{
"DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1971/bus",
"DBUS_SYSTEM_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket",
"DBUS_SYSTEM_BUS_ADDRESS=unix:path=/var/run/dbus/system_bus_socket",
"HOME=/var/lib/persist/module/hakurei/0/1",
"PULSE_COOKIE=" + hst.Tmp + "/pulse-cookie",
"PULSE_COOKIE=" + hst.PrivateTmp + "/pulse-cookie",
"PULSE_SERVER=unix:/run/user/1971/pulse/native",
"SHELL=/run/current-system/sw/bin/zsh",
"TERM=xterm-256color",
@ -408,7 +409,7 @@ func TestApp(t *testing.T) {
},
Ops: new(container.Ops).
Proc(m("/proc/")).
Tmpfs(hst.AbsTmp, 4096, 0755).
Tmpfs(hst.AbsPrivateTmp, 4096, 0755).
DevWritable(m("/dev/"), true).
Tmpfs(m("/dev/shm"), 0, 01777).
Bind(m("/bin"), m("/bin"), 0).
@ -432,9 +433,9 @@ func TestApp(t *testing.T) {
Place(m("/etc/group"), []byte("hakurei:x:100:\n")).
Bind(m("/run/user/1971/wayland-0"), m("/run/user/1971/wayland-0"), 0).
Bind(m("/run/user/1971/hakurei/8e2c76b066dabe574cf073bdb46eb5c1/pulse"), m("/run/user/1971/pulse/native"), 0).
Place(m(hst.Tmp+"/pulse-cookie"), bytes.Repeat([]byte{0}, pulseCookieSizeMax)).
Place(m(hst.PrivateTmp+"/pulse-cookie"), bytes.Repeat([]byte{0}, pulseCookieSizeMax)).
Bind(m("/tmp/hakurei.0/8e2c76b066dabe574cf073bdb46eb5c1/bus"), m("/run/user/1971/bus"), 0).
Bind(m("/tmp/hakurei.0/8e2c76b066dabe574cf073bdb46eb5c1/system_bus_socket"), m("/run/dbus/system_bus_socket"), 0).
Bind(m("/tmp/hakurei.0/8e2c76b066dabe574cf073bdb46eb5c1/system_bus_socket"), m("/var/run/dbus/system_bus_socket"), 0).
Remount(m("/"), syscall.MS_RDONLY),
SeccompPresets: bits.PresetExt | bits.PresetDenyTTY | bits.PresetDenyDevel,
HostNet: true,
@ -445,30 +446,20 @@ func TestApp(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
gr, gw := io.Pipe()
var gotSys *system.I
{
sPriv := outcomeState{
ID: &tc.id,
Identity: tc.config.Identity,
UserID: (&Hsu{k: tc.k}).MustIDMsg(msg),
EnvPaths: copyPaths(tc.k),
Container: tc.config.Container,
}
sPriv.populateEarly(tc.k, msg, tc.config)
sPriv := newOutcomeState(tc.k, msg, &tc.id, tc.config, &Hsu{k: tc.k})
if err := sPriv.populateLocal(tc.k, msg); err != nil {
t.Fatalf("populateLocal: error = %#v", err)
}
gotSys = system.New(t.Context(), msg, sPriv.uid.unwrap())
stateSys := outcomeStateSys{sys: gotSys, outcomeState: &sPriv}
for _, op := range sPriv.Shim.Ops {
if err := op.toSystem(&stateSys, tc.config); err != nil {
if err := sPriv.newSys(tc.config, gotSys).toSystem(); err != nil {
t.Fatalf("toSystem: error = %#v", err)
}
}
go func() {
e := gob.NewEncoder(gw)
@ -479,7 +470,7 @@ func TestApp(t *testing.T) {
}()
}
var gotParams container.Params
var gotParams *container.Params
{
var sShim outcomeState
@ -491,17 +482,13 @@ func TestApp(t *testing.T) {
t.Fatalf("populateLocal: error = %#v", err)
}
stateParams := outcomeStateParams{params: &gotParams, outcomeState: &sShim}
if sShim.Container.Env == nil {
stateParams.env = make(map[string]string, envAllocSize)
} else {
stateParams.env = maps.Clone(sShim.Container.Env)
}
stateParams := sShim.newParams()
for _, op := range sShim.Shim.Ops {
if err := op.toContainer(&stateParams); err != nil {
if err := op.toContainer(stateParams); err != nil {
t.Fatalf("toContainer: error = %#v", err)
}
}
gotParams = stateParams.params
}
t.Run("sys", func(t *testing.T) {
@ -511,8 +498,8 @@ func TestApp(t *testing.T) {
})
t.Run("params", func(t *testing.T) {
if !reflect.DeepEqual(&gotParams, tc.wantParams) {
t.Errorf("toContainer: params =\n%s\n, want\n%s", mustMarshal(&gotParams), mustMarshal(tc.wantParams))
if !reflect.DeepEqual(gotParams, tc.wantParams) {
t.Errorf("toContainer: params =\n%s\n, want\n%s", mustMarshal(gotParams), mustMarshal(tc.wantParams))
}
})
})
@ -576,6 +563,7 @@ type stubNixOS struct {
func (k *stubNixOS) new(func(k syscallDispatcher)) { panic("not implemented") }
func (k *stubNixOS) getpid() int { return 0xdeadbeef }
func (k *stubNixOS) getuid() int { return 1971 }
func (k *stubNixOS) getgid() int { return 100 }
@ -595,6 +583,8 @@ func (k *stubNixOS) lookupEnv(key string) (string, bool) {
return "/run/user/1971", true
case "XDG_CONFIG_HOME":
return "/home/ophestra/xdg/config", true
case "DBUS_SYSTEM_BUS_ADDRESS":
return "", false
default:
panic(fmt.Sprintf("attempted to access unexpected environment variable %q", key))
}
@ -669,7 +659,7 @@ func (k *stubNixOS) evalSymlinks(path string) (string, error) {
return "/run/user/1971", nil
case "/tmp/hakurei.0":
return "/tmp/hakurei.0", nil
case "/run/dbus":
case "/var/run/dbus":
return "/run/dbus", nil
case "/dev/kvm":
return "/dev/kvm", nil
@ -744,8 +734,8 @@ func (k *stubNixOS) cmdOutput(cmd *exec.Cmd) ([]byte, error) {
}
}
func (k *stubNixOS) overflowUid(container.Msg) int { return 65534 }
func (k *stubNixOS) overflowGid(container.Msg) int { return 65534 }
func (k *stubNixOS) overflowUid(message.Msg) int { return 65534 }
func (k *stubNixOS) overflowGid(message.Msg) int { return 65534 }
func (k *stubNixOS) mustHsuPath() *check.Absolute { return m("/proc/nonexistent/hsu") }

View File

@ -12,6 +12,7 @@ import (
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/internal"
"hakurei.app/message"
)
// osFile represents [os.File].
@ -28,6 +29,8 @@ type syscallDispatcher interface {
// just synchronising access is not enough, as this is for test instrumentation.
new(f func(k syscallDispatcher))
// getpid provides [os.Getpid].
getpid() int
// getuid provides [os.Getuid].
getuid() int
// getgid provides [os.Getgid].
@ -53,9 +56,9 @@ type syscallDispatcher interface {
cmdOutput(cmd *exec.Cmd) ([]byte, error)
// overflowUid provides [container.OverflowUid].
overflowUid(msg container.Msg) int
overflowUid(msg message.Msg) int
// overflowGid provides [container.OverflowGid].
overflowGid(msg container.Msg) int
overflowGid(msg message.Msg) int
// mustHsuPath provides [internal.MustHsuPath].
mustHsuPath() *check.Absolute
@ -69,6 +72,7 @@ type direct struct{}
func (k direct) new(f func(k syscallDispatcher)) { go f(k) }
func (direct) getpid() int { return os.Getpid() }
func (direct) getuid() int { return os.Getuid() }
func (direct) getgid() int { return os.Getgid() }
func (direct) lookupEnv(key string) (string, bool) { return os.LookupEnv(key) }
@ -90,8 +94,8 @@ func (direct) lookupGroupId(name string) (gid string, err error) {
func (direct) cmdOutput(cmd *exec.Cmd) ([]byte, error) { return cmd.Output() }
func (direct) overflowUid(msg container.Msg) int { return container.OverflowUid(msg) }
func (direct) overflowGid(msg container.Msg) int { return container.OverflowGid(msg) }
func (direct) overflowUid(msg message.Msg) int { return container.OverflowUid(msg) }
func (direct) overflowGid(msg message.Msg) int { return container.OverflowGid(msg) }
func (direct) mustHsuPath() *check.Absolute { return internal.MustHsuPath() }

View File

@ -1,16 +1,312 @@
package app
import (
"bytes"
"io/fs"
"log"
"os"
"os/exec"
"reflect"
"slices"
"testing"
"time"
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/container/stub"
"hakurei.app/hst"
"hakurei.app/internal/app/state"
"hakurei.app/message"
"hakurei.app/system"
)
// call initialises a [stub.Call].
// This keeps the composites analysis (go vet) happy without making the test cases too verbose.
func call(name string, args stub.ExpectArgs, ret any, err error) stub.Call {
return stub.Call{Name: name, Args: args, Ret: ret, Err: err}
}
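
A minimal usage sketch, assuming the surrounding test helpers: expected call sequences are assembled from this helper, as in wantNewState inside checkOpBehaviour below.

wantCalls := []stub.Call{
	call("getuid", stub.ExpectArgs{}, 1000, nil),
	call("lookupEnv", stub.ExpectArgs{"XDG_RUNTIME_DIR"}, container.Nonexistent+"/xdg_runtime_dir", nil),
}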
// checkExpectUid is the uid value used by checkOpBehaviour to initialise [system.I].
const checkExpectUid = 0xcafebabe
// wantAutoEtcPrefix is the autoetc prefix corresponding to checkExpectInstanceId.
const wantAutoEtcPrefix = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
// checkExpectInstanceId is the [state.ID] value used by checkOpBehaviour to initialise outcomeState.
var checkExpectInstanceId = *(*state.ID)(bytes.Repeat([]byte{0xaa}, len(state.ID{})))
type opBehaviourTestCase struct {
name string
newOp func(isShim, clearUnexported bool) outcomeOp
newConfig func() *hst.Config
pStateSys func(state *outcomeStateSys)
toSystem []stub.Call
wantSys *system.I
extraCheckSys func(t *testing.T, state *outcomeStateSys)
wantErrSystem error
pStateContainer func(state *outcomeStateParams)
toContainer []stub.Call
wantParams *container.Params
extraCheckParams func(t *testing.T, state *outcomeStateParams)
wantErrContainer error
}
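
For orientation, a hedged field-named sketch of a single case; the tables below use positional literals in this same field order, and the values here are illustrative rather than a passing case.

checkOpBehaviour(t, []opBehaviourTestCase{{
	name:        "example",
	newOp:       func(isShim, clearUnexported bool) outcomeOp { return spAccountOp{} },
	newConfig:   hst.Template,
	toSystem:    []stub.Call{},
	wantSys:     newI(),
	toContainer: []stub.Call{},
	wantParams:  new(container.Params),
}})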
func checkOpBehaviour(t *testing.T, testCases []opBehaviourTestCase) {
t.Helper()
wantNewState := []stub.Call{
// newOutcomeState
call("getpid", stub.ExpectArgs{}, 0xdead, nil),
call("isVerbose", stub.ExpectArgs{}, true, nil),
call("mustHsuPath", stub.ExpectArgs{}, m(container.Nonexistent), nil),
call("cmdOutput", stub.ExpectArgs{container.Nonexistent, os.Stderr, []string{}, "/"}, []byte("0"), nil),
call("tempdir", stub.ExpectArgs{}, container.Nonexistent+"/tmp", nil),
call("lookupEnv", stub.ExpectArgs{"XDG_RUNTIME_DIR"}, container.Nonexistent+"/xdg_runtime_dir", nil),
call("getuid", stub.ExpectArgs{}, 1000, nil),
call("getgid", stub.ExpectArgs{}, 100, nil),
// populateLocal
call("verbosef", stub.ExpectArgs{"process share directory at %q, runtime directory at %q", []any{
m(container.Nonexistent + "/tmp/hakurei.0"),
m(container.Nonexistent + "/xdg_runtime_dir/hakurei"),
}}, nil, nil),
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Helper()
t.Parallel()
wantCallsFull := slices.Concat(wantNewState, tc.toSystem, []stub.Call{{Name: stub.CallSeparator}})
if tc.wantErrSystem == nil {
wantCallsFull = append(wantCallsFull, slices.Concat(wantNewState, tc.toContainer)...)
}
wantConfig := tc.newConfig()
k := &kstub{panicDispatcher{}, stub.New(t,
func(s *stub.Stub[syscallDispatcher]) syscallDispatcher { return &kstub{panicDispatcher{}, s} },
stub.Expect{Calls: wantCallsFull},
)}
defer stub.HandleExit(t)
{
config := tc.newConfig()
s := newOutcomeState(k, k, &checkExpectInstanceId, config, &Hsu{k: k})
if err := s.populateLocal(k, k); err != nil {
t.Fatalf("populateLocal: error = %v", err)
}
stateSys := s.newSys(config, newI())
if tc.pStateSys != nil {
tc.pStateSys(stateSys)
}
op := tc.newOp(false, true)
if err := op.toSystem(stateSys); !reflect.DeepEqual(err, tc.wantErrSystem) {
t.Fatalf("toSystem: error = %#v, want %#v", err, tc.wantErrSystem)
}
k.Expects(stub.CallSeparator)
if !reflect.DeepEqual(config, wantConfig) {
t.Errorf("toSystem clobbered config: %#v, want %#v", config, wantConfig)
}
if tc.wantErrSystem != nil {
goto out
}
if !stateSys.sys.Equal(tc.wantSys) {
t.Errorf("toSystem: %#v, want %#v", stateSys.sys, tc.wantSys)
}
if tc.extraCheckSys != nil {
tc.extraCheckSys(t, stateSys)
}
if wantOpSys := tc.newOp(true, false); !reflect.DeepEqual(op, wantOpSys) {
t.Errorf("toSystem: op = %#v, want %#v", op, wantOpSys)
}
}
{
config := tc.newConfig()
s := newOutcomeState(k, k, &checkExpectInstanceId, config, &Hsu{k: k})
stateParams := s.newParams()
if err := s.populateLocal(k, k); err != nil {
t.Fatalf("populateLocal: error = %v", err)
}
if tc.pStateContainer != nil {
tc.pStateContainer(stateParams)
}
op := tc.newOp(true, true)
if err := op.toContainer(stateParams); !reflect.DeepEqual(err, tc.wantErrContainer) {
t.Fatalf("toContainer: error = %#v, want %#v", err, tc.wantErrContainer)
}
if tc.wantErrContainer != nil {
goto out
}
if !reflect.DeepEqual(stateParams.params, tc.wantParams) {
t.Errorf("toContainer:\n%s\nwant\n%s", mustMarshal(stateParams.params), mustMarshal(tc.wantParams))
}
if tc.extraCheckParams != nil {
tc.extraCheckParams(t, stateParams)
}
}
out:
k.VisitIncomplete(func(s *stub.Stub[syscallDispatcher]) {
count := k.Pos() - 1 // separator
if count-len(wantNewState) < len(tc.toSystem) {
t.Errorf("toSystem: %d calls, want %d", count-len(wantNewState), len(tc.toSystem))
} else {
t.Errorf("toContainer: %d calls, want %d", count-len(tc.toSystem)-2*len(wantNewState), len(tc.toContainer))
}
})
})
}
}
func newI() *system.I { return system.New(panicMsgContext{}, panicMsgContext{}, checkExpectUid) }
type kstub struct {
panicDispatcher
*stub.Stub[syscallDispatcher]
}
func (k *kstub) getpid() int { k.Helper(); return k.Expects("getpid").Ret.(int) }
func (k *kstub) getuid() int { k.Helper(); return k.Expects("getuid").Ret.(int) }
func (k *kstub) getgid() int { k.Helper(); return k.Expects("getgid").Ret.(int) }
func (k *kstub) lookupEnv(key string) (string, bool) {
k.Helper()
expect := k.Expects("lookupEnv")
if expect.Error(
stub.CheckArg(k.Stub, "key", key, 0)) != nil {
k.FailNow()
}
if expect.Ret == nil {
return "\x00", false
}
return expect.Ret.(string), true
}
func (k *kstub) readdir(name string) ([]os.DirEntry, error) {
k.Helper()
expect := k.Expects("readdir")
return expect.Ret.([]os.DirEntry), expect.Error(
stub.CheckArg(k.Stub, "name", name, 0))
}
func (k *kstub) tempdir() string { k.Helper(); return k.Expects("tempdir").Ret.(string) }
func (k *kstub) evalSymlinks(path string) (string, error) {
k.Helper()
expect := k.Expects("evalSymlinks")
return expect.Ret.(string), expect.Error(
stub.CheckArg(k.Stub, "path", path, 0))
}
func (k *kstub) cmdOutput(cmd *exec.Cmd) ([]byte, error) {
k.Helper()
expect := k.Expects("cmdOutput")
return expect.Ret.([]byte), expect.Error(
stub.CheckArg(k.Stub, "cmd.Path", cmd.Path, 0),
stub.CheckArgReflect(k.Stub, "cmd.Stderr", cmd.Stderr, 1),
stub.CheckArgReflect(k.Stub, "cmd.Env", cmd.Env, 2),
stub.CheckArg(k.Stub, "cmd.Dir", cmd.Dir, 3))
}
func (k *kstub) mustHsuPath() *check.Absolute {
k.Helper()
return k.Expects("mustHsuPath").Ret.(*check.Absolute)
}
func (k *kstub) GetLogger() *log.Logger { panic("unreachable") }
func (k *kstub) IsVerbose() bool { k.Helper(); return k.Expects("isVerbose").Ret.(bool) }
func (k *kstub) SwapVerbose(verbose bool) bool {
k.Helper()
expect := k.Expects("swapVerbose")
if expect.Error(
stub.CheckArg(k.Stub, "verbose", verbose, 0)) != nil {
k.FailNow()
}
return expect.Ret.(bool)
}
// ignoreValue marks a value to be ignored by the test suite.
type ignoreValue struct{}
func (k *kstub) Verbose(v ...any) {
k.Helper()
expect := k.Expects("verbose")
// translate ignores in v
if want, ok := expect.Args[0].([]any); ok && len(v) == len(want) {
for i, a := range want {
if _, ok = a.(ignoreValue); ok {
v[i] = ignoreValue{}
}
}
}
if expect.Error(
stub.CheckArgReflect(k.Stub, "v", v, 0)) != nil {
k.FailNow()
}
}
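
A hedged sketch of the intended expectation shape, assuming a verbose message whose trailing argument is not worth asserting on (the literal strings are hypothetical):

call("verbose", stub.ExpectArgs{[]any{"instance id:", ignoreValue{}}}, nil, nil)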
func (k *kstub) Verbosef(format string, v ...any) {
k.Helper()
if k.Expects("verbosef").Error(
stub.CheckArg(k.Stub, "format", format, 0),
stub.CheckArgReflect(k.Stub, "v", v, 1)) != nil {
k.FailNow()
}
}
func (k *kstub) Suspend() bool { k.Helper(); return k.Expects("suspend").Ret.(bool) }
func (k *kstub) Resume() bool { k.Helper(); return k.Expects("resume").Ret.(bool) }
func (k *kstub) BeforeExit() { k.Helper(); k.Expects("beforeExit") }
// stubDir returns a slice of [os.DirEntry] with only the Name method implemented.
func stubDir(names ...string) []os.DirEntry {
d := make([]os.DirEntry, len(names))
for i, name := range names {
d[i] = nameDentry(name)
}
return d
}
// nameDentry implements [os.DirEntry] with only the Name method; all other methods panic.
type nameDentry string
func (e nameDentry) Name() string { return string(e) }
func (nameDentry) IsDir() bool { panic("unreachable") }
func (nameDentry) Type() fs.FileMode { panic("unreachable") }
func (nameDentry) Info() (fs.FileInfo, error) { panic("unreachable") }
// panicMsgContext implements [message.Msg] and [context.Context] with methods wrapping panic.
// It is assigned to test case values that are only compared against and never used.
type panicMsgContext struct{}
func (panicMsgContext) GetLogger() *log.Logger { panic("unreachable") }
func (panicMsgContext) IsVerbose() bool { panic("unreachable") }
func (panicMsgContext) SwapVerbose(bool) bool { panic("unreachable") }
func (panicMsgContext) Verbose(...any) { panic("unreachable") }
func (panicMsgContext) Verbosef(string, ...any) { panic("unreachable") }
func (panicMsgContext) Suspend() bool { panic("unreachable") }
func (panicMsgContext) Resume() bool { panic("unreachable") }
func (panicMsgContext) BeforeExit() { panic("unreachable") }
func (panicMsgContext) Deadline() (time.Time, bool) { panic("unreachable") }
func (panicMsgContext) Done() <-chan struct{} { panic("unreachable") }
func (panicMsgContext) Err() error { panic("unreachable") }
func (panicMsgContext) Value(any) any { panic("unreachable") }
// panicDispatcher implements syscallDispatcher with methods wrapping panic.
// This type is meant to be embedded in partial syscallDispatcher implementations.
type panicDispatcher struct{}
func (panicDispatcher) new(func(k syscallDispatcher)) { panic("unreachable") }
func (panicDispatcher) getpid() int { panic("unreachable") }
func (panicDispatcher) getuid() int { panic("unreachable") }
func (panicDispatcher) getgid() int { panic("unreachable") }
func (panicDispatcher) lookupEnv(string) (string, bool) { panic("unreachable") }
@ -21,7 +317,7 @@ func (panicDispatcher) tempdir() string { panic("unreachab
func (panicDispatcher) evalSymlinks(string) (string, error) { panic("unreachable") }
func (panicDispatcher) lookupGroupId(string) (string, error) { panic("unreachable") }
func (panicDispatcher) cmdOutput(*exec.Cmd) ([]byte, error) { panic("unreachable") }
func (panicDispatcher) overflowUid(container.Msg) int { panic("unreachable") }
func (panicDispatcher) overflowGid(container.Msg) int { panic("unreachable") }
func (panicDispatcher) overflowUid(message.Msg) int { panic("unreachable") }
func (panicDispatcher) overflowGid(message.Msg) int { panic("unreachable") }
func (panicDispatcher) mustHsuPath() *check.Absolute { panic("unreachable") }
func (panicDispatcher) fatalf(string, ...any) { panic("unreachable") }

View File

@ -13,6 +13,8 @@ import (
)
func TestEnvPaths(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
env *EnvPaths
@ -48,6 +50,7 @@ func TestEnvPaths(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if tc.wantPanic != "" {
defer func() {
if r := recover(); r != tc.wantPanic {
@ -66,6 +69,8 @@ func TestEnvPaths(t *testing.T) {
}
func TestCopyPaths(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
env map[string]string
@ -84,6 +89,7 @@ func TestCopyPaths(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if tc.fatal != "" {
defer stub.HandleExit(t)
}

View File

@ -1,19 +1,16 @@
package app
import (
"bytes"
"context"
"encoding/gob"
"errors"
"fmt"
"io"
"os"
"os/user"
"sync/atomic"
"hakurei.app/container"
"hakurei.app/hst"
"hakurei.app/internal/app/state"
"hakurei.app/message"
"hakurei.app/system"
)
@ -24,16 +21,14 @@ func newWithMessageError(msg string, err error) error {
// An outcome is the runnable state of a hakurei container via [hst.Config].
type outcome struct {
// initial [hst.Config] gob stream for state data;
// this is prepared ahead of time as config is clobbered during seal creation
ct io.WriterTo
// Supplementary group ids. Populated during finalise.
supp []string
// Resolved priv side operating system interactions. Populated during finalise.
sys *system.I
// Transmitted to shim. Populated during finalise.
state *outcomeState
// Kept for saving to [state].
config *hst.Config
// Whether the current process is in outcome.main.
active atomic.Bool
@ -42,7 +37,7 @@ type outcome struct {
syscallDispatcher
}
func (k *outcome) finalise(ctx context.Context, msg container.Msg, id *state.ID, config *hst.Config) error {
func (k *outcome) finalise(ctx context.Context, msg message.Msg, id *state.ID, config *hst.Config) error {
if ctx == nil || id == nil {
// unreachable
panic("invalid call to finalise")
@ -57,16 +52,6 @@ func (k *outcome) finalise(ctx context.Context, msg container.Msg, id *state.ID,
return err
}
// TODO(ophestra): do not clobber during finalise
{
// encode initial configuration for state tracking
ct := new(bytes.Buffer)
if err := gob.NewEncoder(ct).Encode(config); err != nil {
return &hst.AppError{Step: "encode initial config", Err: err}
}
k.ct = ct
}
// hsu expects numerical group ids
supp := make([]string, len(config.Groups))
for i, name := range config.Groups {
@ -83,28 +68,19 @@ func (k *outcome) finalise(ctx context.Context, msg container.Msg, id *state.ID,
}
// early validation complete at this point
s := outcomeState{
ID: id,
Identity: config.Identity,
UserID: (&Hsu{k: k}).MustIDMsg(msg),
EnvPaths: copyPaths(k.syscallDispatcher),
Container: config.Container,
}
s.populateEarly(k.syscallDispatcher, msg, config)
s := newOutcomeState(k.syscallDispatcher, msg, id, config, &Hsu{k: k})
if err := s.populateLocal(k.syscallDispatcher, msg); err != nil {
return err
}
sys := system.New(k.ctx, msg, s.uid.unwrap())
stateSys := outcomeStateSys{sys: sys, outcomeState: &s}
for _, op := range s.Shim.Ops {
if err := op.toSystem(&stateSys, config); err != nil {
if err := s.newSys(config, sys).toSystem(); err != nil {
return err
}
}
k.sys = sys
k.supp = supp
k.state = &s
k.state = s
k.config = config
return nil
}

View File

@ -9,9 +9,9 @@ import (
"strconv"
"sync"
"hakurei.app/container"
"hakurei.app/container/fhs"
"hakurei.app/hst"
"hakurei.app/message"
)
// Hsu caches responses from cmd/hsu.
@ -74,7 +74,7 @@ func (h *Hsu) ID() (int, error) {
func (h *Hsu) MustID() int { return h.MustIDMsg(nil) }
// MustIDMsg implements MustID with a custom [message.Msg].
func (h *Hsu) MustIDMsg(msg container.Msg) int {
func (h *Hsu) MustIDMsg(msg message.Msg) int {
id, err := h.ID()
if err == nil {
return id
@ -87,7 +87,7 @@ func (h *Hsu) MustIDMsg(msg container.Msg) int {
}
os.Exit(1)
return -0xdeadbeef
} else if m, ok := container.GetErrorMessage(err); ok {
} else if m, ok := message.GetMessage(err); ok {
log.Fatal(m)
return -0xdeadbeef
} else {

View File

@ -1,13 +1,15 @@
package app
import (
"os"
"errors"
"maps"
"strconv"
"hakurei.app/container"
"hakurei.app/container/check"
"hakurei.app/hst"
"hakurei.app/internal/app/state"
"hakurei.app/message"
"hakurei.app/system"
"hakurei.app/system/acl"
)
@ -55,13 +57,10 @@ type outcomeState struct {
sc hst.Paths
*EnvPaths
// Matched paths to cover. Populated by spFilesystemOp.
HidePaths []*check.Absolute
// Copied via populateLocal.
k syscallDispatcher
// Copied via populateLocal.
msg container.Msg
msg message.Msg
}
// valid checks outcomeState to be safe for use with outcomeOp.
@ -73,13 +72,21 @@ func (s *outcomeState) valid() bool {
s.EnvPaths != nil
}
// populateEarly populates exported fields via syscallDispatcher.
// This must only be called from the priv side.
func (s *outcomeState) populateEarly(k syscallDispatcher, msg container.Msg, config *hst.Config) {
s.Shim = &shimParams{PrivPID: os.Getpid(), Verbose: msg.IsVerbose(), Ops: fromConfig(config)}
// newOutcomeState returns the address of a new outcomeState with its exported fields populated via syscallDispatcher.
func newOutcomeState(k syscallDispatcher, msg message.Msg, id *state.ID, config *hst.Config, hsu *Hsu) *outcomeState {
s := outcomeState{
Shim: &shimParams{PrivPID: k.getpid(), Verbose: msg.IsVerbose()},
ID: id,
Identity: config.Identity,
UserID: hsu.MustIDMsg(msg),
EnvPaths: copyPaths(k),
Container: config.Container,
}
// enforce bounds and default early
if s.Container.WaitDelay <= 0 {
if s.Container.WaitDelay < 0 {
s.Shim.WaitDelay = 0
} else if s.Container.WaitDelay == 0 {
s.Shim.WaitDelay = hst.WaitDelayDefault
} else if s.Container.WaitDelay > hst.WaitDelayMax {
s.Shim.WaitDelay = hst.WaitDelayMax
@ -93,12 +100,12 @@ func (s *outcomeState) populateEarly(k syscallDispatcher, msg container.Msg, con
s.Mapuid, s.Mapgid = k.overflowUid(msg), k.overflowGid(msg)
}
return
return &s
}
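
The bounds enforcement above can be restated as a standalone sketch, assuming WaitDelay is a time.Duration; the pass-through branch for in-bounds values is outside the visible hunk and is an assumption here.

// clampWaitDelay is a hypothetical restatement of the bounds applied to Shim.WaitDelay.
func clampWaitDelay(d time.Duration) time.Duration {
	switch {
	case d < 0:
		return 0 // never wait; ForwardCancel later evaluates to false
	case d == 0:
		return hst.WaitDelayDefault
	case d > hst.WaitDelayMax:
		return hst.WaitDelayMax
	default:
		return d // assumed: in-bounds values are kept as configured
	}
}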
// populateLocal populates unexported fields from transmitted exported fields.
// These fields are cheaper to recompute in each process than to transmit.
func (s *outcomeState) populateLocal(k syscallDispatcher, msg container.Msg) error {
func (s *outcomeState) populateLocal(k syscallDispatcher, msg message.Msg) error {
if !s.valid() || k == nil || msg == nil {
return newWithMessage("impossible outcome state reached")
}
@ -141,10 +148,43 @@ type outcomeStateSys struct {
// Process-specific directory in XDG_RUNTIME_DIR, nil if unused.
runtimeSharePath *check.Absolute
// Copied from [hst.Config]. Safe for read by outcomeOp.toSystem.
appId string
// Copied from [hst.Config]. Safe for read by outcomeOp.toSystem.
et hst.Enablement
// Copied from [hst.Config]. Safe for read by spWaylandOp.toSystem only.
directWayland bool
// Copied slice header from [hst.Config]. Safe for read by spFinalOp.toSystem only.
extraPerms []*hst.ExtraPermConfig
// Copied address from [hst.Config]. Safe for read by spDBusOp.toSystem only.
sessionBus, systemBus *hst.BusConfig
sys *system.I
*outcomeState
}
// newSys returns the address of a new outcomeStateSys embedding the current outcomeState.
func (s *outcomeState) newSys(config *hst.Config, sys *system.I) *outcomeStateSys {
return &outcomeStateSys{
appId: config.ID, et: config.Enablements.Unwrap(),
directWayland: config.DirectWayland, extraPerms: config.ExtraPerms,
sessionBus: config.SessionBus, systemBus: config.SystemBus,
sys: sys, outcomeState: s,
}
}
// newParams returns the address of a new outcomeStateParams embedding the current outcomeState.
func (s *outcomeState) newParams() *outcomeStateParams {
stateParams := outcomeStateParams{params: new(container.Params), outcomeState: s}
if s.Container.Env == nil {
stateParams.env = make(map[string]string, envAllocSize)
} else {
stateParams.env = maps.Clone(s.Container.Env)
}
return &stateParams
}
// ensureRuntimeDir must be called if access to paths within XDG_RUNTIME_DIR is required.
func (state *outcomeStateSys) ensureRuntimeDir() {
if state.useRuntimeDir {
@ -200,12 +240,15 @@ type outcomeStateParams struct {
*outcomeState
}
// errNotEnabled is returned by outcomeOp.toSystem and used internally to exclude an outcomeOp from transmission.
var errNotEnabled = errors.New("op not enabled in the configuration")
// An outcomeOp inflicts an outcome on [system.I] and contains enough information to
// inflict it on [container.Params] in a separate process.
// An implementation of outcomeOp must store cross-process state in exported fields only.
type outcomeOp interface {
// toSystem inflicts the current outcome on [system.I] in the priv side process.
toSystem(state *outcomeStateSys, config *hst.Config) error
toSystem(state *outcomeStateSys) error
// toContainer inflicts the current outcome on [container.Params] in the shim process.
// The implementation must not write to the Env field of [container.Params] as it will be overwritten
@ -213,36 +256,45 @@ type outcomeOp interface {
toContainer(state *outcomeStateParams) error
}
// fromConfig returns a corresponding slice of outcomeOp for [hst.Config].
// toSystem calls the toSystem method on each outcomeOp implementation and populates shimParams.Ops,
// skipping ops excluded via errNotEnabled.
// This method assumes the caller has already called the Validate method on [hst.Config]
// and checked that it returns nil.
func fromConfig(config *hst.Config) (ops []outcomeOp) {
ops = []outcomeOp{
func (state *outcomeStateSys) toSystem() error {
if state.Shim == nil || state.Shim.Ops != nil {
return newWithMessage("invalid ops state reached")
}
ops := [...]outcomeOp{
// must run first
&spParamsOp{},
// TODO(ophestra): move this late for #8 and #9
spFilesystemOp{},
&spFilesystemOp{},
spRuntimeOp{},
spTmpdirOp{},
spAccountOp{},
// optional via enablements
&spWaylandOp{},
&spX11Op{},
&spPulseOp{},
&spDBusOp{},
spFinalOp{},
}
et := config.Enablements.Unwrap()
if et&hst.EWayland != 0 {
ops = append(ops, &spWaylandOp{})
}
if et&hst.EX11 != 0 {
ops = append(ops, &spX11Op{})
}
if et&hst.EPulse != 0 {
ops = append(ops, &spPulseOp{})
}
if et&hst.EDBus != 0 {
ops = append(ops, &spDBusOp{})
state.Shim.Ops = make([]outcomeOp, 0, len(ops))
for _, op := range ops {
if err := op.toSystem(state); err != nil {
// this error is used internally to exclude this outcomeOp from transmission
if errors.Is(err, errNotEnabled) {
continue
}
ops = append(ops, spFinal{})
return
return err
}
state.Shim.Ops = append(state.Shim.Ops, op)
}
return nil
}
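
To illustrate the op contract, a minimal hedged sketch of an outcomeOp implementation follows; the type name and the Wayland gate are hypothetical, cross-process state lives in exported fields only, and errNotEnabled keeps the op out of shimParams.Ops.

// spExampleOp is a hypothetical op; its name and behaviour are illustrative only.
type spExampleOp struct {
	// Exported so the value survives gob transmission to the shim process.
	Term string
}

func init() { gob.Register(new(spExampleOp)) }

func (s *spExampleOp) toSystem(state *outcomeStateSys) error {
	if state.et&hst.EWayland == 0 {
		return errNotEnabled // excluded from shimParams.Ops, never transmitted
	}
	s.Term, _ = state.k.lookupEnv("TERM")
	return nil
}

func (s *spExampleOp) toContainer(state *outcomeStateParams) error {
	// Write to the env map rather than params.Env, which is overwritten later.
	state.env["TERM"] = s.Term
	return nil
}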

View File

@ -1,7 +1,6 @@
package app
import (
"reflect"
"testing"
"hakurei.app/hst"
@ -9,6 +8,8 @@ import (
)
func TestOutcomeStateValid(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
s *outcomeState
@ -24,55 +25,10 @@ func TestOutcomeStateValid(t *testing.T) {
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got := tc.s.valid(); got != tc.want {
t.Errorf("valid: %v, want %v", got, tc.want)
}
})
}
}
func TestFromConfig(t *testing.T) {
testCases := []struct {
name string
config *hst.Config
want []outcomeOp
}{
{"ne", new(hst.Config), []outcomeOp{
&spParamsOp{},
spFilesystemOp{},
spRuntimeOp{},
spTmpdirOp{},
spAccountOp{},
spFinal{},
}},
{"wayland pulse", &hst.Config{Enablements: hst.NewEnablements(hst.EWayland | hst.EPulse)}, []outcomeOp{
&spParamsOp{},
spFilesystemOp{},
spRuntimeOp{},
spTmpdirOp{},
spAccountOp{},
&spWaylandOp{},
&spPulseOp{},
spFinal{},
}},
{"all", &hst.Config{Enablements: hst.NewEnablements(0xff)}, []outcomeOp{
&spParamsOp{},
spFilesystemOp{},
spRuntimeOp{},
spTmpdirOp{},
spAccountOp{},
&spWaylandOp{},
&spX11Op{},
&spPulseOp{},
&spDBusOp{},
spFinal{},
}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
if got := fromConfig(tc.config); !reflect.DeepEqual(got, tc.want) {
t.Errorf("fromConfig: %#v, want %#v", got, tc.want)
}
})
}
}

View File

@ -5,6 +5,8 @@ import (
)
func TestDeepContainsH(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
basepath string
@ -75,6 +77,7 @@ func TestDeepContainsH(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
if got, err := deepContainsH(tc.basepath, tc.targpath); (err != nil) != tc.wantErr {
t.Errorf("deepContainsH() error = %v, wantErr %v", err, tc.wantErr)
} else if got != tc.want {

View File

@ -17,6 +17,7 @@ import (
"hakurei.app/hst"
"hakurei.app/internal"
"hakurei.app/internal/app/state"
"hakurei.app/message"
"hakurei.app/system"
)
@ -40,7 +41,7 @@ type mainState struct {
cmdWait chan error
k *outcome
container.Msg
message.Msg
uintptr
}
@ -207,7 +208,7 @@ func (ms mainState) fatal(fallback string, ferr error) {
}
// main carries out outcome and terminates. main does not return.
func (k *outcome) main(msg container.Msg) {
func (k *outcome) main(msg message.Msg) {
if !k.active.CompareAndSwap(false, true) {
panic("outcome: attempted to run twice")
}
@ -291,8 +292,9 @@ func (k *outcome) main(msg container.Msg) {
if err := c.Save(&state.State{
ID: k.state.id.unwrap(),
PID: ms.cmd.Process.Pid,
Config: k.config,
Time: *ms.Time,
}, k.ct); err != nil {
}); err != nil {
ms.fatal("cannot save state entry:", err)
}
}); err != nil {
@ -311,10 +313,10 @@ func (k *outcome) main(msg container.Msg) {
os.Exit(0)
}
// printMessageError prints the error message according to [container.GetErrorMessage],
// printMessageError prints the error message according to [message.GetMessage],
// or fallback prepended to err if an error message is not available.
func printMessageError(fallback string, err error) {
m, ok := container.GetErrorMessage(err)
m, ok := message.GetMessage(err)
if !ok {
log.Println(fallback, err)
return

View File

@ -5,7 +5,6 @@ import (
"errors"
"io"
"log"
"maps"
"os"
"os/exec"
"os/signal"
@ -18,6 +17,7 @@ import (
"hakurei.app/container/bits"
"hakurei.app/container/seccomp"
"hakurei.app/hst"
"hakurei.app/message"
)
//#include "shim-signal.h"
@ -47,17 +47,13 @@ type shimParams struct {
}
// valid checks shimParams to be safe for use.
func (p *shimParams) valid() bool {
return p != nil &&
p.Ops != nil &&
p.PrivPID > 0
}
func (p *shimParams) valid() bool { return p != nil && p.PrivPID > 0 }
// ShimMain is the main function of the shim process and runs as the unconstrained target user.
func ShimMain() {
log.SetPrefix("shim: ")
log.SetFlags(0)
msg := container.NewMsg(log.Default())
msg := message.NewMsg(log.Default())
if err := container.SetDumpable(container.SUID_DUMP_DISABLE); err != nil {
log.Fatalf("cannot set SUID_DUMP_DISABLE: %s", err)
@ -81,7 +77,7 @@ func ShimMain() {
closeSetup = f
if err = state.populateLocal(direct{}, msg); err != nil {
if m, ok := container.GetErrorMessage(err); ok {
if m, ok := message.GetMessage(err); ok {
log.Fatal(m)
} else {
log.Fatalf("cannot populate local state: %v", err)
@ -105,16 +101,10 @@ func ShimMain() {
log.Fatalf("cannot set parent-death signal: %v", errno)
}
var params container.Params
stateParams := outcomeStateParams{params: &params, outcomeState: &state}
if state.Container.Env == nil {
stateParams.env = make(map[string]string, envAllocSize)
} else {
stateParams.env = maps.Clone(state.Container.Env)
}
stateParams := state.newParams()
for _, op := range state.Shim.Ops {
if err := op.toContainer(&stateParams); err != nil {
if m, ok := container.GetErrorMessage(err); ok {
if err := op.toContainer(stateParams); err != nil {
if m, ok := message.GetMessage(err); ok {
log.Fatal(m)
} else {
log.Fatalf("cannot create container state: %v", err)
@ -133,7 +123,7 @@ func ShimMain() {
switch buf[0] {
case 0: // got SIGCONT from monitor: shim exit requested
if fp := cancelContainer.Load(); params.ForwardCancel && fp != nil && *fp != nil {
if fp := cancelContainer.Load(); stateParams.params.ForwardCancel && fp != nil && *fp != nil {
(*fp)()
// shim now bound by ShimWaitDelay, implemented below
continue
@ -161,7 +151,7 @@ func ShimMain() {
}
}()
if params.Ops == nil {
if stateParams.params.Ops == nil {
log.Fatal("invalid container params")
}
@ -174,7 +164,7 @@ func ShimMain() {
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
cancelContainer.Store(&stop)
z := container.New(ctx, msg)
z.Params = params
z.Params = *stateParams.params
z.Stdin, z.Stdout, z.Stderr = os.Stdin, os.Stdout, os.Stderr
// bounds and default enforced in finalise.go

View File

@ -6,7 +6,6 @@ import (
"syscall"
"hakurei.app/container/fhs"
"hakurei.app/hst"
)
func init() { gob.Register(spAccountOp{}) }
@ -14,31 +13,36 @@ func init() { gob.Register(spAccountOp{}) }
// spAccountOp sets up user account emulation inside the container.
type spAccountOp struct{}
func (s spAccountOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
const fallbackUsername = "chronos"
func (s spAccountOp) toSystem(state *outcomeStateSys) error {
// do checks here to fail before fork/exec
if state.Container == nil || state.Container.Home == nil || state.Container.Shell == nil {
// unreachable
return syscall.ENOTRECOVERABLE
}
if state.Container.Username == "" {
state.Container.Username = fallbackUsername
} else if !isValidUsername(state.Container.Username) {
// default is applied in toContainer
if state.Container.Username != "" && !isValidUsername(state.Container.Username) {
return newWithMessage(fmt.Sprintf("invalid user name %q", state.Container.Username))
}
return nil
}
func (s spAccountOp) toContainer(state *outcomeStateParams) error {
const fallbackUsername = "chronos"
username := state.Container.Username
if username == "" {
username = fallbackUsername
}
state.params.Dir = state.Container.Home
state.env["HOME"] = state.Container.Home.String()
state.env["USER"] = state.Container.Username
state.env["USER"] = username
state.env["SHELL"] = state.Container.Shell.String()
state.params.
Place(fhs.AbsEtc.Append("passwd"),
[]byte(state.Container.Username+":x:"+
[]byte(username+":x:"+
state.mapuid.String()+":"+
state.mapgid.String()+
":Hakurei:"+

View File

@ -0,0 +1,89 @@
package app
import (
"maps"
"os"
"syscall"
"testing"
"hakurei.app/container"
"hakurei.app/container/stub"
"hakurei.app/hst"
)
func TestSpAccountOp(t *testing.T) {
t.Parallel()
config := hst.Template()
checkOpBehaviour(t, []opBehaviourTestCase{
{"invalid state", func(bool, bool) outcomeOp { return spAccountOp{} }, func() *hst.Config {
c := hst.Template()
c.Container.Shell = nil
return c
}, nil, []stub.Call{
// this op performs basic validation and does not make calls during toSystem
}, nil, nil, syscall.ENOTRECOVERABLE, nil, nil, nil, nil, nil},
{"invalid user name", func(bool, bool) outcomeOp { return spAccountOp{} }, func() *hst.Config {
c := hst.Template()
c.Container.Username = "9"
return c
}, nil, []stub.Call{
// this op performs basic validation and does not make calls during toSystem
}, nil, nil, &hst.AppError{
Step: "finalise",
Err: os.ErrInvalid,
Msg: `invalid user name "9"`,
}, nil, nil, nil, nil, nil},
{"success fallback username", func(bool, bool) outcomeOp { return spAccountOp{} }, func() *hst.Config {
c := hst.Template()
c.Container.Username = ""
return c
}, nil, []stub.Call{
// this op performs basic validation and does not make calls during toSystem
}, newI(), nil, nil, func(state *outcomeStateParams) {
state.params.Ops = new(container.Ops)
}, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, &container.Params{
Dir: config.Container.Home,
Ops: new(container.Ops).
Place(m("/etc/passwd"), []byte("chronos:x:1000:100:Hakurei:/data/data/org.chromium.Chromium:/run/current-system/sw/bin/zsh\n")).
Place(m("/etc/group"), []byte("hakurei:x:100:\n")),
}, func(t *testing.T, state *outcomeStateParams) {
wantEnv := map[string]string{
"HOME": config.Container.Home.String(),
"USER": config.Container.Username,
"SHELL": config.Container.Shell.String(),
}
maps.Copy(wantEnv, config.Container.Env)
if !maps.Equal(state.env, wantEnv) {
t.Errorf("toContainer: env = %#v, want %#v", state.env, wantEnv)
}
}, nil},
{"success", func(bool, bool) outcomeOp { return spAccountOp{} }, hst.Template, nil, []stub.Call{
// this op performs basic validation and does not make calls during toSystem
}, newI(), nil, nil, func(state *outcomeStateParams) {
state.params.Ops = new(container.Ops)
}, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, &container.Params{
Dir: config.Container.Home,
Ops: new(container.Ops).
Place(m("/etc/passwd"), []byte("chronos:x:1000:100:Hakurei:/data/data/org.chromium.Chromium:/run/current-system/sw/bin/zsh\n")).
Place(m("/etc/group"), []byte("hakurei:x:100:\n")),
}, func(t *testing.T, state *outcomeStateParams) {
wantEnv := map[string]string{
"HOME": config.Container.Home.String(),
"USER": config.Container.Username,
"SHELL": config.Container.Shell.String(),
}
maps.Copy(wantEnv, config.Container.Env)
if !maps.Equal(state.env, wantEnv) {
t.Errorf("toContainer: env = %#v, want %#v", state.env, wantEnv)
}
}, nil},
})
}

View File

@ -15,6 +15,7 @@ import (
"hakurei.app/container/fhs"
"hakurei.app/container/seccomp"
"hakurei.app/hst"
"hakurei.app/message"
"hakurei.app/system/dbus"
)
@ -31,7 +32,7 @@ type spParamsOp struct {
TermSet bool
}
func (s *spParamsOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
func (s *spParamsOp) toSystem(state *outcomeStateSys) error {
s.Term, s.TermSet = state.k.lookupEnv("TERM")
state.sys.Ensure(state.sc.SharePath, 0711)
return nil
@ -64,7 +65,7 @@ func (s *spParamsOp) toContainer(state *outcomeStateParams) error {
// the container is canceled when shim is requested to exit or receives an interrupt or termination signal;
// this behaviour is implemented in the shim
state.params.ForwardCancel = state.Container.WaitDelay >= 0
state.params.ForwardCancel = state.Shim.WaitDelay > 0
if state.Container.Multiarch {
state.params.SeccompFlags |= seccomp.AllowMultiarch
@ -104,7 +105,7 @@ func (s *spParamsOp) toContainer(state *outcomeStateParams) error {
// early mount points
state.params.
Proc(fhs.AbsProc).
Tmpfs(hst.AbsTmp, 1<<12, 0755)
Tmpfs(hst.AbsPrivateTmp, 1<<12, 0755)
if !state.Container.Device {
state.params.DevWritable(fhs.AbsDev, true)
} else {
@ -116,12 +117,15 @@ func (s *spParamsOp) toContainer(state *outcomeStateParams) error {
return nil
}
func init() { gob.Register(spFilesystemOp{}) }
func init() { gob.Register(new(spFilesystemOp)) }
// spFilesystemOp applies configured filesystems to [container.Params], excluding the optional root filesystem.
type spFilesystemOp struct{}
type spFilesystemOp struct {
// Matched paths to cover. Stored during toSystem.
HidePaths []*check.Absolute
}
func (s spFilesystemOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
func (s *spFilesystemOp) toSystem(state *outcomeStateSys) error {
/* retrieve paths and hide them if they're made available in the sandbox;
this feature tries to improve user experience of permissive defaults, and
@ -135,7 +139,12 @@ func (s spFilesystemOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
varRunNscd,
}
_, systemBusAddr := dbus.Address()
// dbus.Address does not go through syscallDispatcher, so resolve the system bus address here via lookupEnv
systemBusAddr := dbus.FallbackSystemBusAddress
if addr, ok := state.k.lookupEnv(dbus.SystemBusAddress); ok {
systemBusAddr = addr
}
if entries, err := dbus.Parse([]byte(systemBusAddr)); err != nil {
return &hst.AppError{Step: "parse dbus address", Err: err}
} else {
@ -243,16 +252,9 @@ func (s spFilesystemOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
for i, ok := range hidePathMatch {
if ok {
if a, err := check.NewAbs(hidePaths[i]); err != nil {
var absoluteError *check.AbsoluteError
if !errors.As(err, &absoluteError) {
return newWithMessageError(absoluteError.Error(), absoluteError)
}
if absoluteError == nil {
return newWithMessage("impossible path checking state reached")
}
return newWithMessage("invalid path hiding candidate " + strconv.Quote(absoluteError.Pathname))
return newWithMessage("invalid path hiding candidate " + strconv.Quote(hidePaths[i]))
} else {
state.HidePaths = append(state.HidePaths, a)
s.HidePaths = append(s.HidePaths, a)
}
}
}
@ -260,7 +262,7 @@ func (s spFilesystemOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
return nil
}
func (s spFilesystemOp) toContainer(state *outcomeStateParams) error {
func (s *spFilesystemOp) toContainer(state *outcomeStateParams) error {
for i, c := range state.filesystem {
if !c.Valid() {
return newWithMessage("invalid filesystem at index " + strconv.Itoa(i))
@ -268,7 +270,7 @@ func (s spFilesystemOp) toContainer(state *outcomeStateParams) error {
c.Apply(&state.as)
}
for _, a := range state.HidePaths {
for _, a := range s.HidePaths {
state.params.Tmpfs(a, 1<<13, 0755)
}
@ -299,7 +301,7 @@ func resolveRoot(c *hst.ContainerConfig) (rootfs hst.FilesystemConfig, filesyste
}
// evalSymlinks calls syscallDispatcher.evalSymlinks but discards errors unwrapping to [fs.ErrNotExist].
func evalSymlinks(msg container.Msg, k syscallDispatcher, v *string) error {
func evalSymlinks(msg message.Msg, k syscallDispatcher, v *string) error {
if p, err := k.evalSymlinks(*v); err != nil {
if !errors.Is(err, fs.ErrNotExist) {
return err

View File

@ -0,0 +1,427 @@
package app
import (
"errors"
"maps"
"os"
"reflect"
"syscall"
"testing"
"hakurei.app/container"
"hakurei.app/container/bits"
"hakurei.app/container/check"
"hakurei.app/container/fhs"
"hakurei.app/container/seccomp"
"hakurei.app/container/stub"
"hakurei.app/hst"
"hakurei.app/system/dbus"
)
func TestSpParamsOp(t *testing.T) {
t.Parallel()
config := hst.Template()
checkOpBehaviour(t, []opBehaviourTestCase{
{"invalid program path", func(isShim, _ bool) outcomeOp {
if !isShim {
return new(spParamsOp)
}
return &spParamsOp{Term: "xterm", TermSet: true}
}, func() *hst.Config {
c := hst.Template()
c.Container.Path = nil
return c
}, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{"TERM"}, "xterm", nil),
}, newI().
Ensure(m(container.Nonexistent+"/tmp/hakurei.0"), 0711), nil, nil, nil, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, nil, nil, &hst.AppError{
Step: "finalise",
Err: os.ErrInvalid,
Msg: "invalid program path",
}},
{"success defaultargs secure", func(isShim, _ bool) outcomeOp {
if !isShim {
return new(spParamsOp)
}
return &spParamsOp{Term: "xterm", TermSet: true}
}, func() *hst.Config {
c := hst.Template()
c.Container.Args = nil
c.Container.Multiarch = false
c.Container.SeccompCompat = false
c.Container.Devel = false
c.Container.Userns = false
c.Container.Tty = false
c.Container.Device = false
return c
}, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{"TERM"}, "xterm", nil),
}, newI().
Ensure(m(container.Nonexistent+"/tmp/hakurei.0"), 0711), nil, nil, nil, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, &container.Params{
Hostname: config.Container.Hostname,
HostNet: config.Container.HostNet,
HostAbstract: config.Container.HostAbstract,
Path: config.Container.Path,
Args: []string{config.Container.Path.String()},
SeccompPresets: bits.PresetExt | bits.PresetDenyDevel | bits.PresetDenyNS | bits.PresetDenyTTY,
Uid: 1000,
Gid: 100,
Ops: new(container.Ops).
Root(m("/var/lib/hakurei/base/org.debian"), bits.BindWritable).
Proc(fhs.AbsProc).Tmpfs(hst.AbsPrivateTmp, 1<<12, 0755).
DevWritable(fhs.AbsDev, true).
Tmpfs(fhs.AbsDev.Append("shm"), 0, 01777),
}, func(t *testing.T, state *outcomeStateParams) {
wantEnv := map[string]string{
"TERM": "xterm",
}
maps.Copy(wantEnv, config.Container.Env)
if !maps.Equal(state.env, wantEnv) {
t.Errorf("toContainer: env = %#v, want %#v", state.env, wantEnv)
}
const wantAutoEtcPrefix = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
if state.as.AutoEtcPrefix != wantAutoEtcPrefix {
t.Errorf("toContainer: as.AutoEtcPrefix = %q, want %q", state.as.AutoEtcPrefix, wantAutoEtcPrefix)
}
wantFilesystems := config.Container.Filesystem[1:]
if !reflect.DeepEqual(state.filesystem, wantFilesystems) {
t.Errorf("toContainer: filesystem = %#v, want %#v", state.filesystem, wantFilesystems)
}
}, nil},
{"success", func(isShim, _ bool) outcomeOp {
if !isShim {
return new(spParamsOp)
}
return &spParamsOp{Term: "xterm", TermSet: true}
}, hst.Template, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{"TERM"}, "xterm", nil),
}, newI().
Ensure(m(container.Nonexistent+"/tmp/hakurei.0"), 0711), nil, nil, nil, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, &container.Params{
Hostname: config.Container.Hostname,
RetainSession: config.Container.Tty,
HostNet: config.Container.HostNet,
HostAbstract: config.Container.HostAbstract,
Path: config.Container.Path,
Args: config.Container.Args,
SeccompFlags: seccomp.AllowMultiarch,
Uid: 1000,
Gid: 100,
Ops: new(container.Ops).
Root(m("/var/lib/hakurei/base/org.debian"), bits.BindWritable).
Proc(fhs.AbsProc).Tmpfs(hst.AbsPrivateTmp, 1<<12, 0755).
Bind(fhs.AbsDev, fhs.AbsDev, bits.BindWritable|bits.BindDevice).
Tmpfs(fhs.AbsDev.Append("shm"), 0, 01777),
}, func(t *testing.T, state *outcomeStateParams) {
wantEnv := map[string]string{
"TERM": "xterm",
}
maps.Copy(wantEnv, config.Container.Env)
if !maps.Equal(state.env, wantEnv) {
t.Errorf("toContainer: env = %#v, want %#v", state.env, wantEnv)
}
if state.as.AutoEtcPrefix != wantAutoEtcPrefix {
t.Errorf("toContainer: as.AutoEtcPrefix = %q, want %q", state.as.AutoEtcPrefix, wantAutoEtcPrefix)
}
wantFilesystems := config.Container.Filesystem[1:]
if !reflect.DeepEqual(state.filesystem, wantFilesystems) {
t.Errorf("toContainer: filesystem = %#v, want %#v", state.filesystem, wantFilesystems)
}
}, nil},
})
}
func TestSpFilesystemOp(t *testing.T) {
const nePrefix = container.Nonexistent + "/eval"
var stubDebianRoot = stubDir("bin", "dev", "etc", "home", "lib64", "lost+found",
"mnt", "nix", "proc", "root", "run", "srv", "sys", "tmp", "usr", "var")
config := hst.Template()
newConfigSmall := func() *hst.Config {
c := hst.Template()
c.Container.Filesystem = []hst.FilesystemConfigJSON{
{FilesystemConfig: &hst.FSBind{Target: fhs.AbsEtc, Source: fhs.AbsEtc, Special: true}},
{FilesystemConfig: &hst.FSOverlay{Target: m("/nix/store"), Lower: []*check.Absolute{
fhs.AbsVarLib.Append("hakurei/base/org.nixos/.ro-store"),
fhs.AbsVarLib.Append("hakurei/base/org.nixos/org.chromium.Chromium"),
}}},
{FilesystemConfig: &hst.FSEphemeral{Target: hst.AbsPrivateTmp}},
}
c.Container.Device = false
return c
}
configSmall := newConfigSmall()
checkOpBehaviour(t, []opBehaviourTestCase{
{"readdir", func(bool, bool) outcomeOp {
return new(spFilesystemOp)
}, hst.Template, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/dbus"}, nePrefix+"/run/dbus", nil),
call("readdir", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian"}, []os.DirEntry{}, stub.UniqueError(2)),
}, nil, nil, &hst.AppError{
Step: "access autoroot source",
Err: stub.UniqueError(2),
}, nil, nil, nil, nil, nil},
{"invalid dbus address", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, func() *hst.Config {
c := newConfigSmall()
c.Container.Filesystem = append(c.Container.Filesystem, hst.FilesystemConfigJSON{FilesystemConfig: invalidFSHost(false)})
return c
}, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid", nil),
}, nil, nil, &hst.AppError{
Step: "parse dbus address",
Err: &dbus.BadAddressError{
Type: dbus.ErrNoColon,
EntryVal: []byte("invalid"),
PairPos: -1,
},
}, nil, nil, nil, nil, nil},
{"invalid fs early", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, func() *hst.Config {
c := newConfigSmall()
c.Container.Filesystem = append(c.Container.Filesystem, hst.FilesystemConfigJSON{FilesystemConfig: invalidFSHost(false)})
return c
}, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, nePrefix+"/etc/dbus", nil), // to match hidePaths
}, nil, nil, &hst.AppError{
Step: "finalise",
Err: os.ErrInvalid,
Msg: "invalid filesystem at index 3",
}, nil, nil, nil, nil, nil},
{"evalSymlinks early", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, newConfigSmall, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, "", stub.UniqueError(0)),
}, nil, nil, &hst.AppError{
Step: "evaluate path hiding target",
Err: stub.UniqueError(0),
}, nil, nil, nil, nil, nil},
{"host nil abs", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, func() *hst.Config {
c := newConfigSmall()
c.Container.Filesystem = append(c.Container.Filesystem, hst.FilesystemConfigJSON{FilesystemConfig: invalidFSHost(true)})
return c
}, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, nePrefix+"/etc/dbus", nil), // to match hidePaths
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, nePrefix+"/etc", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/.ro-store"}, nePrefix+"/var/lib/hakurei/base/org.nixos/.ro-store", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium"}, "var/lib/hakurei/base/org.nixos/org.chromium.Chromium", nil),
}, nil, nil, &hst.AppError{
Step: "finalise",
Err: os.ErrInvalid,
Msg: "impossible path hiding state reached",
}, nil, nil, nil, nil, nil},
{"evalSymlinks late", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, newConfigSmall, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, nePrefix+"/etc/dbus", nil), // to match hidePaths
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, nePrefix+"/etc", stub.UniqueError(1)),
}, nil, nil, &hst.AppError{
Step: "evaluate path hiding source",
Err: stub.UniqueError(1),
}, nil, nil, nil, nil, nil},
{"invalid contains", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, newConfigSmall, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, nePrefix+"/etc/dbus", nil), // to match hidePaths
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, nePrefix+"/etc", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/.ro-store"}, nePrefix+"/var/lib/hakurei/base/org.nixos/.ro-store", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium"}, "var/lib/hakurei/base/org.nixos/org.chromium.Chromium", nil),
call("verbosef", stub.ExpectArgs{"hiding path %q from %q", []any{"/proc/nonexistent/eval/etc/dbus", "/etc/"}}, nil, nil),
}, nil, nil, &hst.AppError{
Step: "determine path hiding outcome",
Err: errors.New("Rel: can't make /proc/nonexistent/eval/xdg_runtime_dir relative to var/lib/hakurei/base/org.nixos/org.chromium.Chromium"),
}, nil, nil, nil, nil, nil},
{"invalid hide", func(bool, bool) outcomeOp { return new(spFilesystemOp) }, newConfigSmall, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, "xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, "tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "nscd", nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, "nonexistent/dbus", nil),
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, "nonexistent", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/.ro-store"}, ".ro-store", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium"}, "org.chromium.Chromium", nil),
call("verbosef", stub.ExpectArgs{"hiding path %q from %q", []any{"nonexistent/dbus", "/etc/"}}, nil, nil),
}, nil, nil, &hst.AppError{
Step: "finalise",
Err: os.ErrInvalid,
Msg: `invalid path hiding candidate "nonexistent/dbus"`,
}, nil, nil, nil, nil, nil},
{"invalid fs", func(isShim, clearUnexported bool) outcomeOp {
if !isShim {
return new(spFilesystemOp)
}
return &spFilesystemOp{HidePaths: []*check.Absolute{m("/proc/nonexistent/eval/etc/dbus")}}
}, newConfigSmall, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, nePrefix+"/etc/dbus", nil), // to match hidePaths
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, nePrefix+"/etc", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/.ro-store"}, nePrefix+"/var/lib/hakurei/base/org.nixos/.ro-store", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium"}, nePrefix+"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium", nil),
call("verbosef", stub.ExpectArgs{"hiding path %q from %q", []any{"/proc/nonexistent/eval/etc/dbus", "/etc/"}}, nil, nil),
}, newI(), nil, nil, func(state *outcomeStateParams) {
state.filesystem = configSmall.Container.Filesystem
state.params.Ops = new(container.Ops)
state.as = hst.ApplyState{AutoEtcPrefix: wantAutoEtcPrefix, Ops: opsAdapter{state.params.Ops}}
state.filesystem = append(state.filesystem, hst.FilesystemConfigJSON{})
}, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, nil, nil, &hst.AppError{
Step: "finalise",
Err: os.ErrInvalid,
Msg: "invalid filesystem at index 3",
}},
{"success noroot nodev envdbus strangedbus dbusnotabs hide", func(isShim, clearUnexported bool) outcomeOp {
if !isShim {
return new(spFilesystemOp)
}
return &spFilesystemOp{HidePaths: []*check.Absolute{m("/proc/nonexistent/eval/etc/dbus")}}
}, newConfigSmall, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, "invalid:meow=0;unix:path=/system_bus_socket;unix:path=system_bus_socket", nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is in an unusual location", []any{"/system_bus_socket"}}, nil, nil),
call("verbosef", stub.ExpectArgs{"dbus socket %q is not absolute", []any{"system_bus_socket"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/"}, nePrefix+"/etc/dbus", nil), // to match hidePaths
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, nePrefix+"/etc", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/.ro-store"}, nePrefix+"/var/lib/hakurei/base/org.nixos/.ro-store", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium"}, nePrefix+"/var/lib/hakurei/base/org.nixos/org.chromium.Chromium", nil),
call("verbosef", stub.ExpectArgs{"hiding path %q from %q", []any{"/proc/nonexistent/eval/etc/dbus", "/etc/"}}, nil, nil),
}, newI(), nil, nil, func(state *outcomeStateParams) {
state.filesystem = configSmall.Container.Filesystem
state.params.Ops = new(container.Ops)
state.as = hst.ApplyState{AutoEtcPrefix: wantAutoEtcPrefix, Ops: opsAdapter{state.params.Ops}}
}, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, &container.Params{
Ops: new(container.Ops).
Etc(fhs.AbsEtc, wantAutoEtcPrefix).
OverlayReadonly(
check.MustAbs("/nix/store"),
fhs.AbsVarLib.Append("hakurei/base/org.nixos/.ro-store"),
fhs.AbsVarLib.Append("hakurei/base/org.nixos/org.chromium.Chromium")).
Readonly(hst.AbsPrivateTmp, 0755).
Tmpfs(m("/proc/nonexistent/eval/etc/dbus"), 1<<13, 0755).
Remount(fhs.AbsDev, syscall.MS_RDONLY),
}, nil, nil},
{"success", func(bool, bool) outcomeOp {
return new(spFilesystemOp)
}, hst.Template, nil, []stub.Call{
call("lookupEnv", stub.ExpectArgs{dbus.SystemBusAddress}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/xdg_runtime_dir"}, nePrefix+"/xdg_runtime_dir", nil),
call("evalSymlinks", stub.ExpectArgs{container.Nonexistent + "/tmp/hakurei.0"}, nePrefix+"/tmp/hakurei.0", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/nscd"}, "", &os.PathError{Op: "lstat", Path: "/var/run/nscd", Err: os.ErrNotExist}),
call("verbosef", stub.ExpectArgs{"path %q does not yet exist", []any{"/var/run/nscd"}}, nil, nil),
call("evalSymlinks", stub.ExpectArgs{"/var/run/dbus"}, nePrefix+"/run/dbus", nil),
call("readdir", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian"}, stubDebianRoot, nil),
call("evalSymlinks", stub.ExpectArgs{"/etc/"}, nePrefix+"/etc", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper"}, nePrefix+"/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/upper", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work"}, nePrefix+"/var/lib/hakurei/nix/u0/org.chromium.Chromium/rw-store/work", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.nixos/ro-store"}, nePrefix+"/var/lib/hakurei/base/org.nixos/ro-store", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/u0/org.chromium.Chromium"}, nePrefix+"/var/lib/hakurei/u0/org.chromium.Chromium", nil),
call("evalSymlinks", stub.ExpectArgs{"/dev/dri"}, nePrefix+"/dev/dri", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/bin"}, nePrefix+"/var/lib/hakurei/base/org.debian/bin", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/home"}, nePrefix+"/var/lib/hakurei/base/org.debian/home", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/lib64"}, nePrefix+"/var/lib/hakurei/base/org.debian/lib64", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/lost+found"}, nePrefix+"/var/lib/hakurei/base/org.debian/lost+found", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/nix"}, nePrefix+"/var/lib/hakurei/base/org.debian/nix", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/root"}, nePrefix+"/var/lib/hakurei/base/org.debian/root", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/run"}, nePrefix+"/var/lib/hakurei/base/org.debian/run", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/srv"}, nePrefix+"/var/lib/hakurei/base/org.debian/srv", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/sys"}, nePrefix+"/var/lib/hakurei/base/org.debian/sys", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/usr"}, nePrefix+"/var/lib/hakurei/base/org.debian/usr", nil),
call("evalSymlinks", stub.ExpectArgs{"/var/lib/hakurei/base/org.debian/var"}, nePrefix+"/var/lib/hakurei/base/org.debian/var", nil),
}, newI(), nil, nil, func(state *outcomeStateParams) {
state.filesystem = config.Container.Filesystem[1:]
state.params.Ops = new(container.Ops)
state.as = hst.ApplyState{AutoEtcPrefix: wantAutoEtcPrefix, Ops: opsAdapter{state.params.Ops}}
}, []stub.Call{
// this op configures the container state and does not make calls during toContainer
}, &container.Params{
Ops: new(container.Ops).
Etc(fhs.AbsEtc, wantAutoEtcPrefix).
Tmpfs(fhs.AbsTmp, 0, 0755).
Overlay(
check.MustAbs("/nix/store"),
fhs.AbsVarLib.Append("hakurei/nix/u0/org.chromium.Chromium/rw-store/upper"),
fhs.AbsVarLib.Append("hakurei/nix/u0/org.chromium.Chromium/rw-store/work"),
fhs.AbsVarLib.Append("hakurei/base/org.nixos/ro-store")).
Link(fhs.AbsRun.Append("current-system"), "/run/current-system", true).
Link(fhs.AbsRun.Append("opengl-driver"), "/run/opengl-driver", true).
Bind(
fhs.AbsVarLib.Append("hakurei/u0/org.chromium.Chromium"),
check.MustAbs("/data/data/org.chromium.Chromium"),
bits.BindWritable|bits.BindEnsure).
Bind(fhs.AbsDev.Append("dri"), fhs.AbsDev.Append("dri"), bits.BindDevice|bits.BindWritable|bits.BindOptional),
}, nil, nil},
})
}
// invalidFSHost implements the Host method of [hst.FilesystemConfig] with an invalid response.
type invalidFSHost bool
func (f invalidFSHost) Valid() bool { return bool(f) }
func (invalidFSHost) Path() *check.Absolute { panic("unreachable") }
func (invalidFSHost) Host() []*check.Absolute { return []*check.Absolute{nil} }
func (invalidFSHost) Apply(*hst.ApplyState) { panic("unreachable") }
func (invalidFSHost) String() string { panic("unreachable") }

View File

@@ -18,23 +18,27 @@ type spDBusOp struct {
 	ProxySystem bool
 }
-func (s *spDBusOp) toSystem(state *outcomeStateSys, config *hst.Config) error {
-	if config.SessionBus == nil {
-		config.SessionBus = dbus.NewConfig(config.ID, true, true)
+func (s *spDBusOp) toSystem(state *outcomeStateSys) error {
+	if state.et&hst.EDBus == 0 {
+		return errNotEnabled
+	}
+	if state.sessionBus == nil {
+		state.sessionBus = dbus.NewConfig(state.appId, true, true)
 	}
 	// downstream socket paths
 	sessionPath, systemPath := state.instance().Append("bus"), state.instance().Append("system_bus_socket")
 	if err := state.sys.ProxyDBus(
-		config.SessionBus, config.SystemBus,
+		state.sessionBus, state.systemBus,
 		sessionPath, systemPath,
 	); err != nil {
 		return err
 	}
 	state.sys.UpdatePerm(sessionPath, acl.Read, acl.Write)
-	if config.SystemBus != nil {
+	if state.systemBus != nil {
 		s.ProxySystem = true
 		state.sys.UpdatePerm(systemPath, acl.Read, acl.Write)
 	}
@@ -46,7 +50,7 @@ func (s *spDBusOp) toContainer(state *outcomeStateParams) error {
 	state.env["DBUS_SESSION_BUS_ADDRESS"] = "unix:path=" + sessionInner.String()
 	state.params.Bind(state.instancePath().Append("bus"), sessionInner, 0)
 	if s.ProxySystem {
-		systemInner := fhs.AbsRun.Append("dbus/system_bus_socket")
+		systemInner := fhs.AbsVar.Append("run/dbus/system_bus_socket")
 		state.env["DBUS_SYSTEM_BUS_ADDRESS"] = "unix:path=" + systemInner.String()
 		state.params.Bind(state.instancePath().Append("system_bus_socket"), systemInner, 0)
 	}

View File

@@ -13,15 +13,15 @@ import (
 	"hakurei.app/system/acl"
 )
-func init() { gob.Register(spFinal{}) }
+func init() { gob.Register(spFinalOp{}) }
-// spFinal is a transitional op destined for removal after #3, #8, #9 has been resolved.
+// spFinalOp is a transitional op destined for removal after #3, #8, #9 has been resolved.
 // It exists to avoid reordering the expected entries in test cases.
-type spFinal struct{}
+type spFinalOp struct{}
-func (s spFinal) toSystem(state *outcomeStateSys, config *hst.Config) error {
+func (s spFinalOp) toSystem(state *outcomeStateSys) error {
 	// append ExtraPerms last
-	for _, p := range config.ExtraPerms {
+	for _, p := range state.extraPerms {
 		if p == nil || p.Path == nil {
 			continue
 		}
@@ -45,7 +45,7 @@ func (s spFinal) toSystem(state *outcomeStateSys, config *hst.Config) error {
 	return nil
 }
-func (s spFinal) toContainer(state *outcomeStateParams) error {
+func (s spFinalOp) toContainer(state *outcomeStateParams) error {
 	// TODO(ophestra): move this to spFilesystemOp after #8 and #9
 	// mount root read-only as the final setup Op

View File

@@ -23,7 +23,11 @@ type spPulseOp struct {
 	Cookie *[pulseCookieSizeMax]byte
 }
-func (s *spPulseOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
+func (s *spPulseOp) toSystem(state *outcomeStateSys) error {
+	if state.et&hst.EPulse == 0 {
+		return errNotEnabled
+	}
 	pulseRuntimeDir, pulseSocket := s.commonPaths(state.outcomeState)
 	if _, err := state.k.stat(pulseRuntimeDir.String()); err != nil {
@@ -156,7 +160,7 @@ func (s *spPulseOp) toContainer(state *outcomeStateParams) error {
 	state.env["PULSE_SERVER"] = "unix:" + innerPulseSocket.String()
 	if s.Cookie != nil {
-		innerDst := hst.AbsTmp.Append("/pulse-cookie")
+		innerDst := hst.AbsPrivateTmp.Append("/pulse-cookie")
 		state.env["PULSE_COOKIE"] = innerDst.String()
 		state.params.Place(innerDst, s.Cookie[:])
 	}

View File

@@ -6,7 +6,6 @@ import (
 	"hakurei.app/container/bits"
 	"hakurei.app/container/check"
 	"hakurei.app/container/fhs"
-	"hakurei.app/hst"
 	"hakurei.app/system"
 	"hakurei.app/system/acl"
 )
@@ -16,7 +15,7 @@ func init() { gob.Register(spRuntimeOp{}) }
 // spRuntimeOp sets up XDG_RUNTIME_DIR inside the container.
 type spRuntimeOp struct{}
-func (s spRuntimeOp) toSystem(state *outcomeStateSys, _ *hst.Config) error {
+func (s spRuntimeOp) toSystem(state *outcomeStateSys) error {
 	runtimeDir, runtimeDirInst := s.commonPaths(state.outcomeState)
 	state.sys.Ensure(runtimeDir, 0700)
 	state.sys.UpdatePermType(system.User, runtimeDir, acl.Execute)

Some files were not shown because too many files have changed in this diff