Implemented Watch mode filter by filename and by test name #1530 #3372
base: main
Conversation
Hey @mmulet, this sounds really great! I'm a bit short on time but can hopefully have a look in the next few days.
High-level question: does Jest use regular expressions to select files and tests? I ask because our CLI options use glob patterns for files, and wildcard matching for individual tests. I'm wondering whether it's more valuable to follow our CLI behavior or to match Jest's interface. And are the regular expressions matched against filenames, rather than paths? Also, we refer to test "titles", not "names".
If the previous handler failed, perhaps due to an assertion, it wouldn't trigger the next run and the tests would hang. Fix by including a failure of the previous handler in the await condition for the next run.
Oh thanks for the reminder, I think I've suffered through that in the past. I'll push up a fix.
Fantastic start @mmulet. I've pushed up some small tweaks and included some more impactful suggestions below. What do you think?
I believe I responded to all the suggestions. I'm out of time for now, but I should be able to work on this again Sunday night or Monday and implement these changes. In the meantime, feel free to make any changes. Based on the suggestions you've made, we really seem to be on the same page here. Plus, I'm not the type to be possessive about code, so rewrite the whole thing if you want to.
Summary of changes
This should be in the same order as the suggestions given above.
lib/api.js (outdated)
@@ -162,6 +162,8 @@ export default class Api extends Emittery {
		setupOrGlobError = error;
	}

+	selectedFiles = selectedFiles.filter(file => runtimeOptions.interactiveFilter?.canSelectTestsInThisFile(file) ?? true);
This seemed like the best place to skip loading an entire file when the interactive filter excludes it, but would somewhere else be better?
lib/runner.js (outdated)
@@ -411,7 +418,7 @@ export default class Runner extends Emittery {
		continue;
	}

-	if (this.checkSelectedByLineNumbers && !task.metadata.selected) {
+	if (!task.metadata.selected) {
When I moved to metadata.selected rather than metadata.skipped, I had to delete the checks for this.checkSelectedByLineNumbers wherever it also checks metadata.selected.
All tests still pass, and it works fine, so I'm assuming these checks are not needed.
@mmulet still looking at this; there are some edge cases that go pretty deep, and I wanted to see how it'd work with globs and matches, too. Getting close to having that resolved. Edge cases:
I'll push up some changes when I can, for your feedback. Thanks for pushing this forward!
We'll adjust the test title matching implementation instead.
* Last line is initially empty
* Override write() to record that the last line is not empty
* Remove unused property declaration
Interactive prompts in watch mode look better without indentation.
This is a historical feature from when AVA tried to persist "runExclusive" during regular runs. This behavior conflicts with interactive mode, and in any case interactive mode makes it easier to focus on a specific test file or title.
This is a newer flag, previously used for line number selection. It's a better choice than overloading the `exclusive` flag used with `.only()`.
It doesn't rely on any utility methods so it's clearer to list it first rather than having it pop up further down the file.
IIFEs are a little awkward, plus we can trim values in the helper.
… and usability
* Change "run all" to reset filters.
* Clear prompts before running.
* Allow watcher logic to override default file selection
* Remove from interactive filter class
* Count previous failures after the watcher has modified file selection. During this modification it may discard previous failures, which means it can't provide an accurate count when starting the run.
Remove interactive filter class, the API can concatenate the interactive match pattern with the --match arguments.
It's more likely you want to quickly see if anything else has broken, than that you mean to reset the filters.
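The match-pattern concatenation mentioned above can be sketched roughly as follows. This is an illustrative sketch only; the function and parameter names are assumptions, not AVA internals:

```javascript
// Illustrative sketch: combine the interactive match pattern with any
// existing --match arguments before handing them to the API.
// Names here are assumptions, not the actual implementation.
const combineMatch = (cliMatchPatterns, interactivePattern) =>
	interactivePattern === undefined
		? cliMatchPatterns
		: [...cliMatchPatterns, interactivePattern];
```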
@mmulet I'm about to push up some changes, please let me know what you think and thank you for bearing with me. They fall into different categories:
And then the big change: use glob patterns for file selection, and make this work by having the watcher handle all selection. As well as a smaller change, using match patterns (akin to --match) for test titles.
I've yet to review test coverage; some edge cases could do with additional tests.
Hi!
This pull request implements
Watch mode filter by filename and by test name
from issue #1530.
Watch this short video for a demonstration of the feature:
Watch_Mode_Demo.mov
Overview
Summary of new feature
(taken from the docs)
Filter tests while watching
You may also filter tests while watching, using the CLI. For example, after running
$ npx ava --watch
you will see a prompt like this:
So, to run only tests with titles like foo1, foo2, and so on, you can type t and press Enter, then type foo\d+ and press Enter. This will run all tests that match that pattern.
Afterwards you can use the r command to run the matched tests again, or the a command to run all tests.
Code Overview
In the contributing guidelines, you say that you prefer shorter pull requests, but this is a bit long. So, to help speed things along, I'm going to provide an overview for each part that I added or changed.
I added a new variable called watchModeSkipTests
https://github.com/mmulet/ava/blob/78a3768fbe3360930351374f854f9cc7a46fda51/lib/watcher.js#L104
which is an instance of WatchModeSkipTests (lib/watch-mode-skip-tests.js). This class handles the regexes for the skipping, and most of the code consists of passing this object around, populating its fields, and so on.
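To make the shape concrete, here is a hypothetical sketch of what such a class might look like. Only canSelectTestsInThisFile appears in the diff discussed below; the field names and the other method are illustrative assumptions, not the actual implementation:

```javascript
// Hypothetical sketch of WatchModeSkipTests. Only canSelectTestsInThisFile
// is taken from the pull request's diff; everything else is illustrative.
class WatchModeSkipTests {
	filenameRegex; // matched against filenames (set via the `p` command)
	testNameRegex; // matched against test titles (set via the `t` command)

	// Used by the API to avoid loading files the filter excludes.
	canSelectTestsInThisFile(file) {
		return this.filenameRegex === undefined || this.filenameRegex.test(file);
	}

	// Used when deciding whether an individual test should be skipped.
	shouldSkipTest(title) {
		return this.testNameRegex !== undefined && !this.testNameRegex.test(title);
	}
}
```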
There is one particular part of this file that I want to draw attention to.
#OnDataGenerator
Continue down the plan function until you come to const _class. This class handles the prompt for commands in watch mode. It does this by listening to stdin's data event in an asynchronous generator.
I switched to an asynchronous generator rather than plain old stdin.on('data') because some new commands (namely p and t, the ones where you enter a regex) also need to listen for 'data' while processing the command. Without the generator you would need an explicit state machine, which can be hard to debug and keep track of.
To see what I mean, look at listenForCommand:
Note that the for await waits on #data, not on this.#onDataGenerator()! What this means is that, nested inside the #onCommand function, we can await more lines of input. Then, when you are done processing your command (that is, this.#onCommand returns), the loop gets the next this.#data and everything works as it should.
And, that's how it works.
It's not too complicated, but maybe a bit unusual. So, I just wanted to document my choices and communicate clearly about why I made them.
The rest of the addition to the plan function is as you would expect, it asks the user for the command and prompts for regex filters if necessary.
Oh the places you'll go
As I mentioned before, the rest of the code is mostly passing around the watchModeSkipTests.
* instructions, which is yielded
* the runtimeOptions variable, which is then passed to api.run
* watchModeSkipTestsData, because only the data gets passed to the worker, not the class itself

Since we are branching, let's choose a path.
To the worker
It is here (https://github.com/mmulet/ava/blob/78a3768fbe3360930351374f854f9cc7a46fda51/lib/worker/base.js#L75) that we need to pass the data to the Runner.
It is inside the runner that we use the watchModeSkipTests to, well, actually skip tests. The logic is simple: on every chain, check whether we should skip the test, and if we should, we skip!
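In miniature, the per-task check works along these lines. This is an illustrative sketch under assumed names; the real runner operates on task metadata inside its own loop:

```javascript
// Mark each task as selected when no filter is set, or when its title
// matches the filter regex. Names are assumptions, not AVA internals.
const markSelected = (tasks, titleRegex) =>
	tasks.map(task => ({
		...task,
		metadata: {
			...task.metadata,
			selected: titleRegex === undefined || titleRegex.test(task.title),
		},
	}));

// On every chain: tasks that are not selected are simply skipped.
const tasksToRun = tasks => tasks.filter(task => task.metadata.selected);
```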
Another Side, Another Story
Meanwhile, we have a second task: we don't just want to skip a test from running, we also want to prevent its result from being shown at all (this is a filter; the whole idea is that we have too many tests to sift through).
And there you have it, a complete tour of the changes!
About the tests
One thing you'll notice while reviewing the tests is that I added try/catch blocks to each and every one. I did this because whenever a test failed, it would just hang and end in a timeout (presumably because an error is thrown and the this.done function never gets run).
I tried this with a simple example, and it had the same result:
Your contributing guidelines say not to include unrelated code changes in the pull request, so I'm considering this error out of scope for now. But now you know why the try/catch blocks are there. (The CLI will still report a t.$someFunc() failure as a failure; it just won't report exceptions as errors, since a throw $someError will be caught.)
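The wrapper pattern described above looks roughly like this. The helper name `guarded` is hypothetical, not part of the pull request, and the fake `t` object below only stands in for the real assertion context:

```javascript
// Wrap a test implementation so that a thrown error is reported as a
// normal failure instead of leaving the test hanging until it times out.
const guarded = implementation => async t => {
	try {
		await implementation(t);
	} catch (error) {
		t.fail(error.message);
	}
};
```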
I'm going to submit this pull request for the bounty on https://oss.issuehunt.io/r/avajs/ava/issues/1530, but any extra tips (you can use my GitHub Sponsors page) would be very welcome!