well, the problem with arbitrary selection is that you need to do lots
of matching, which also gets confusing once you need to encode nesting.
so what about just `node cli.js index_test.js`?
this isn't concerned with reporters or execution; it happens at the
cli level and solely affects which modules get imported instead of
just all_tests.js.
alternatively, we could select suites instead of files. this is probably
better huh, because you don't need to type out all those file paths, and
it doesn't punish large files (a test file corresponds to a source code
file, so you can't split one just to get finer selection)
so we'd import all_tests.js as usual, then filter out suites whose names
don't match <input>, before calling `run` on it. deleting and filtering
suites should probably be methods on the registrar
i suspect the impl will be tiny excluding argument-parser nonsense, so
imo squash this into the commit that added registrars
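a minimal sketch of what that filter method might look like. everything here is hypothetical (the real registrar API, suite shapes, and names will differ); it just shows "import everything, then drop non-matching suites before run":

```javascript
// hypothetical registrar sketch: real method names and suite shape may differ.
// cli.js would import all_tests.js (registering every suite), then call
// filter(<input>) before run().
class Registrar {
  constructor() {
    this.suites = new Map(); // suite name -> suite object
  }
  add(name, suite) {
    this.suites.set(name, suite);
  }
  // keep only suites whose name contains the given substring;
  // snapshot the keys first so we can delete while iterating safely
  filter(pattern) {
    for (const name of [...this.suites.keys()]) {
      if (!name.includes(pattern)) this.suites.delete(name);
    }
  }
}

const registrar = new Registrar();
registrar.add("parser", {});
registrar.add("parser/edge-cases", {});
registrar.add("renderer", {});
registrar.filter("parser");
// registrar.suites now holds only the two parser suites
```

substring matching is the simplest possible choice here; globs or regexes would be the "lots of matching" problem this note is trying to avoid.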
add a comment describing the use-case as “just run the tests i'm editing
to save time”, rather than as skipping, then briefly mention why general
purpose skipping is still a tentative future feature
both on a per-test level and for the whole run. i think the reporter or
registrar abstractions should deal with all timing, with elapsed time
fed through the functions: update() gets the time for the specific test,
and finalize passes you the total time. this way you don't duplicate the
same logic in every reporter, and you also nudge reporter writers (i.e.
you in the future) to expose test durations. actually, tbh, measuring
per-test time isn't possible anywhere but in the executor, especially
once potential future parallel execution is taken into account
on the topic of parallelism: per-test time is wall clock for that test,
regardless of perceived time, because no other number is useful. the
whole run is wall clock too, not cpu time
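a sketch of that flow, assuming `update(result, elapsedMs)` and `finalize(totalMs)` as the reporter hooks; all names here are illustrative, not a settled API. `performance.now()` is monotonic in node, so the deltas are safe elapsed-time measurements:

```javascript
// hypothetical executor: measures wall-clock time itself and feeds it to
// the reporter, which never touches a clock
function run(tests, reporter) {
  const t0 = performance.now(); // monotonic: safe for elapsed-time deltas
  for (const test of tests) {
    const start = performance.now();
    let result = "pass";
    try {
      test.fn();
    } catch {
      result = "fail";
    }
    // per-test wall clock, measured in the executor: the only place this
    // can live once tests may run in parallel
    reporter.update({ name: test.name, result }, performance.now() - start);
  }
  reporter.finalize(performance.now() - t0);
}

// a reporter only receives times; it does no measuring of its own
const calls = [];
run(
  [
    { name: "a", fn: () => {} },
    { name: "b", fn: () => { throw new Error("boom"); } },
  ],
  {
    update: (r, ms) => calls.push([r.result, ms]),
    finalize: (ms) => calls.push(["total", ms]),
  },
);
```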
remember:
- use monotonic clocks!! we need elapsed time, not absolute time
- format them to more readable strings like “15h 12m” instead of
“54738 seconds”. once things get large we can be less precise
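one possible shape for that formatter, dropping precision as the magnitude grows (the exact thresholds are a guess, not a spec):

```javascript
// format an elapsed duration (in milliseconds) into a coarse human-readable
// string; precision decreases as the duration grows:
// "500ms" -> "4.2s" -> "3m 12s" -> "15h 12m"
function formatDuration(ms) {
  if (ms < 1000) return `${Math.round(ms)}ms`;
  const s = ms / 1000;
  if (s < 60) return `${s.toFixed(1)}s`;
  const m = Math.floor(s / 60);
  if (m < 60) return `${m}m ${Math.round(s % 60)}s`;
  const h = Math.floor(m / 60);
  return `${h}h ${m % 60}m`;
}
```

the input is milliseconds because that's what `performance.now()` deltas give you; 54738 seconds comes out as "15h 12m", matching the example above.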
for the go reporter: ask ozy if the go one already measures it. if so
then don't even bother serializing it
for the stream reporter: the live feed should include per-test time in
brackets or something. the final tree should only include timings for
outliers on the long side (just shove a box plot-esque algo on it); if
a flag is given, print timings for all nodes, and if another flag is
given, print the n longest tests. the total time should go in the
summary line at the end, in brackets à la pytest
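the "box plot-esque algo" could be a plain IQR cutoff, flagging only the long side as described. a sketch under that assumption (thresholds and quantile method are standard box-plot convention, not anything decided here):

```javascript
// flag durations (ms) that are long-side outliers: above Q3 + 1.5 * IQR.
// short-side outliers are deliberately ignored, as per the design note.
function longOutliers(durations) {
  const sorted = [...durations].sort((a, b) => a - b);
  // crude quantile: index by proportion, clamped to the last element
  const q = (p) => sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  const iqr = q(0.75) - q(0.25);
  const cutoff = q(0.75) + 1.5 * iqr;
  return durations.filter((d) => d > cutoff);
}
```

on a typical suite where most tests take ~10ms, a single 500ms test gets flagged while the rest stay unannotated in the tree.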
for the dom reporter, we do the same as the stream reporter's outlier
detection, plus a checkbox or button to dynamically show/hide all
timings, and another button to toggle a widget of sorts that shows up
right above the result tree and lists the n longest tests. all these
buttons should be on the same line as the summary
(successes/failures/skips). the total time should be included in the
“execution finished” text from the previous commit, i.e. “execution
finished in 15s”
since the test tree is statically known, we also statically know how
many tests are present. we should hence be using this to provide
a counter, say [1/48], to give a rough estimate as to when tests might
finish. not a time estimate of course, since we can't determine that
nota bene: the executor can't pass a running test count; the reporter
has to track that itself, since otherwise we can't easily parallelize
execution in the future. definitely mention this in a comment somewhere
to elaborate on the design
for the go reporter, ask ozy if go has any way to tell it this info.
i doubt it since they don't have a statically known test count. if it
does, then just send the count alongside the tree
for the stream reporter, ignore it entirely; we don't even display
successes by default, so the number has nothing to attach to
for the dom reporter, put it somewhere in the header, i think alongside
the success/failure/skip count. something like “in progress (4/28)”.
then once finalize() is called change the whole thing to “execution
finished”
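a tiny sketch of the counter living in the reporter rather than the executor. class and method names are made up for illustration; the point is that the executor just calls update() once per test and stays count-agnostic, so parallel dispatch later needs no ordering guarantees:

```javascript
// hypothetical dom-reporter fragment: the total is statically known from
// the test tree, and the running count is derived inside the reporter
class DomReporterCounter {
  constructor(totalTests) {
    this.total = totalTests; // known up front from the static test tree
    this.done = 0;
  }
  update() {
    this.done += 1; // incremented here, NOT passed in by the executor
    return `in progress (${this.done}/${this.total})`;
  }
  finalize() {
    return "execution finished";
  }
}
```

a real dom reporter would write these strings into the header element instead of returning them; returning keeps the sketch testable.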