Conversation
Don't require changes when adding support for completion in other shells. Co-authored-by: Bartosz Sławecki <bartoszpiotrslawecki@gmail.com>
pawamoy
left a comment
Thanks a lot @j-g00da! It's really cool to get completions for Zsh too 😄
Thanks a lot @j-g00da! It's really cool to get completions for Zsh too 😄
One thing I'd like to see for Zsh completions is actual descriptions of the completion words: Zsh's completion system allows attaching a description to each term/word, which is then displayed when hitting TAB in the shell. That means we'd have to add a bit of code to our logic for generating word candidates here, so that descriptions are returned too. I see two options:
- the logic always returns tuples (word, description), and higher up in the stack we filter out descriptions if the shell doesn't support them
- the logic accepts a `shell` argument that lets it know whether it should return descriptions as well as words

We would have to check how this integrates with the actual completion script (completions.zsh) and the compctl command.
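A rough sketch of the first option (the function names and the set of description-capable shells are hypothetical here, just to illustrate the shape):

```python
from __future__ import annotations

# Which shells can display per-word descriptions; an assumption for this
# sketch, not duty's actual configuration.
SHELLS_WITH_DESCRIPTIONS = {"zsh", "fish"}


def get_candidates() -> list[tuple[str, str]]:
    """Hypothetical lower-level logic: always return (word, description) tuples."""
    return [
        ("build", "Build the project."),
        ("test", "Run the test suite."),
    ]


def words_for_shell(shell: str) -> list[str]:
    """Higher up in the stack, drop descriptions the shell cannot display."""
    candidates = get_candidates()
    if shell in SHELLS_WITH_DESCRIPTIONS:
        # Zsh convention: "word:description" pairs that compadd/_describe accept.
        return [f"{word}:{description}" for word, description in candidates]
    return [word for word, _ in candidates]
```

The second option would move the `shell in SHELLS_WITH_DESCRIPTIONS` check into the candidate generator itself, at the cost of the lower-level logic knowing about shells.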
docs/usage.md
Outdated
> Or in Zsh with:
>
> ```zsh
> completions_dir="$HOME/.duty"
> ```
I understand you may have taken inspiration from the Bash example above, but this doesn't seem right to me. The Bash example uses standard locations that Bash immediately understands. This location would require users to fiddle with their Zsh configuration to load the completion from ~/.duty, I believe. Furthermore, not using standard locations is generally frowned upon by users (me included 😄), as that leads to a cluttered HOME directory 🙂
In short, could you try to see if there's a standard location we could write the file in, so that Zsh's completion system natively finds it?
> In short, could you try to see if there's a standard location we could write the file in, so that Zsh's completion system natively finds it?
From what I understand, Zsh doesn't have any "standard" location for completions other than /usr/local/share/zsh/site-functions, which is in fpath by default, but I don't think we want to install it system-wide. I based my approach on Oh My Zsh, which puts completions under ~/.oh-my-zsh/completions and adds that directory to fpath. Still, I think a cluttered HOME is a good point. Let me know what you think.
> One thing I'd like to see for Zsh completions is actual description of the completion words
Good point, I will work on this later today.
Thanks! We could then document that and let the user decide. I found https://github.com/zsh-users/zsh-completions/blob/master/zsh-completions-howto.org#telling-zsh-which-function-to-use-for-completing-a-command to be very readable, maybe we could link to it.
I've modified the docs yesterday and changed completions.zsh to use compdef (rich completions) instead of the old compctl. Still no descriptions, but now it will be possible to add them. Will work on adding descriptions later today or tomorrow. You can take a look at the docs and let me know if it's enough information (I will polish the text later).
Sources: this answer on stackoverflow, and ofc the zsh-completions-howto.org.
Also - completions.zsh is WIP; it's just a minimal example for now, but it works, and I know the current opts.complete parsing breaks the Bash completion implementation.
Also 2 - we can make it compatible with old Zsh versions, but I don't know if that is so important, since users can always use Bash completions in Zsh if the native approach doesn't work (explained in the docs).
Thank you so much! Don't bother about supporting old Zsh versions, latest is fine 🙂
I'm yearning for some automatic tests of those scripts. WDYT @pawamoy
@bswck yes, testing is always good! I'm honestly not sure how to test auto-completion though! Any idea? Wouldn't it be enough to just test the output of our Python code that generates words?
Definitely not, I strongly feel we need black-box regression tests of completions working on the user end.
We can get some inspiration from typer tests, which seem to do exactly that.
I think of doing a bit of extra work on it, since the docs on how to enable completions grew in the Zsh section. Duty is simple, and installing completions for it should also be simple. It's fine to inform the user about options, but for the most part… it's too much, and I really would like something like… For Bash, it would be really easy: just run what's in the docs. For Zsh, I think the best option would be to put it in…

In fact, I see only one disadvantage of such an approach…
Also - I just checked Typer source code, since @bswck linked it, and this is exactly what they do:
But when it comes to the installation dir, it's:

```python
def install_zsh(*, prog_name: str, complete_var: str, shell: str) -> Path:
    # Setup Zsh and load ~/.zfunc
    zshrc_path = Path.home() / ".zshrc"
    zshrc_path.parent.mkdir(parents=True, exist_ok=True)
    zshrc_content = ""
    if zshrc_path.is_file():
        zshrc_content = zshrc_path.read_text()
    completion_line = "fpath+=~/.zfunc; autoload -Uz compinit; compinit"
    if completion_line not in zshrc_content:
        zshrc_content += f"\n{completion_line}\n"
    style_line = "zstyle ':completion:*' menu select"
    # TODO: consider setting the style only for the current program
    # style_line = f"zstyle ':completion:*:*:{prog_name}:*' menu select"
    # Install zstyle completion config only if the user doesn't have a customization
    if "zstyle" not in zshrc_content:
        zshrc_content += f"\n{style_line}\n"
    zshrc_content = f"{zshrc_content.strip()}\n"
    zshrc_path.write_text(zshrc_content)
    # Install completion under ~/.zfunc/
    path_obj = Path.home() / f".zfunc/_{prog_name}"
    path_obj.parent.mkdir(parents=True, exist_ok=True)
    script_content = get_completion_script(
        prog_name=prog_name, complete_var=complete_var, shell=shell
    )
    path_obj.write_text(script_content)
    return path_obj
```

On a side note, they might have a potential bug there...
Completely agree with your comment @j-g00da. I was exactly going to say that the only downside is that users will probably have to use sudo. IMO that's an acceptable tradeoff. If they don't want to use sudo, they can deal with their own configuration by writing the completion script wherever they prefer. OK, so it looks like Typer modifies ~/.zshrc. Let's go with the approach you suggested:
@bswck @j-g00da do you know if Python has any standard or discussion about distributing shell completions inside packages (source dists or wheels)? Looks like venvs have the opportunity to store… I don't know. I mean, pipx managed to handle manpages, so surely they could handle completion scripts 🤔 Basher offers such a mechanism, for example. Your project defines completion files for Bash, Zsh, whatever, and Basher puts them in a dedicated folder. Users can then easily point their shell at these folders.
I opened pypa/pipx#1604 and astral-sh/uv#11354, let's see what people think 😊
Poetry suggests… Still, it's not backed by any standard AFAIK (it's often described as an example of a custom directory for completions), and because of that, the directory is not in fpath by default. I still want to take a look into the uv source code later; on my mac it installed completions in…
I went with this approach in 592f938
Awesome 😍
src/duty/_completion.py
Outdated
```python
# We only have space for one line of description,
# so I remove descriptions of sub-command parameters from help_text
# by removing everything after the first newline.
# I don't think it is the best approach and should be discussed.
return f"{completion}: {help_text or '-'}".split("\n", 1)[0]
```
- function docstrings: good; it's reasonable to keep only the first line, as a docstring's first line should be a complete sentence anyway.
- parameter docstrings: we don't parse them yet anyway (from looking at the code below). When we do (using Griffe?), I don't think there's a more robust way than keeping the first line anyway. We don't want to enter natural-language-processing territory (splitting sentences is hard). Maybe we could recommend using single-line descriptions for parameters, or at least having a complete sentence as the first line.
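For illustration, the first-line truncation boils down to something like this (a standalone rephrasing of the snippet above, not the exact code in `_completion.py`):

```python
from __future__ import annotations


def candidate_label(completion: str, help_text: str | None) -> str:
    # Completion menus only have room for one line per candidate,
    # so keep everything up to the first newline ("-" when there is no help).
    return f"{completion}: {help_text or '-'}".split("\n", 1)[0]
```

A multi-line docstring thus contributes only its first sentence to the completion menu.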
Let's group behaviors by shells instead of topics.
Instead of fine-grained repetitive interfaces

- topic1
  - bash
  - zsh
  - other shells...
- topic2
  - bash
  - zsh
  - other shells...

we want modular strategies, like

- bash
  - topic1
  - topic2
- zsh
  - topic1
  - topic2
- other shells follow the same pattern...

This approach is known under many names: the "I" from SOLID, the strategy design pattern, loose coupling, high cohesion.
Notice that the current solution doesn't really leverage the use of classes for grouping methods by common state; an interface for a "shell" can be easily described with an abstract class/protocol, distinct implementations gathered within a class-level registry modified via __init_subclass__ of the abstract class/protocol. That would cause future implementations of other shells to be cohesive, discoverable, comprehensive and complete, with a potential of sharing useful state across "topic" methods; something along the lines of:
```python
class Shell(metaclass=abc.ABCMeta):
    name: ClassVar[str]
    implementations: ClassVar[dict[str, type[Shell]]] = {}

    @abc.abstractmethod
    def parse_completions(self, candidates: Sequence[CompletionCandidateType]) -> str: ...

    @abc.abstractmethod
    def install_completions(self) -> str: ...

    def __init_subclass__(cls) -> None:
        cls.implementations[cls.name] = cls


class Bash(Shell):
    name = "bash"

    def parse_completions(self, candidates: Sequence[CompletionCandidateType]) -> str:
        ...  # implementation for bash

    def install_completions(self) -> str:
        ...  # implementation for bash


class Zsh(Shell):
    name = "zsh"

    def parse_completions(self, candidates: Sequence[CompletionCandidateType]) -> str:
        ...  # implementation for zsh

    def install_completions(self) -> str:
        ...  # implementation for zsh


# other shells follow the same pattern
```

Implementing a new shell or changing an existing shell implementation will then only require changes in one area of the code, instead of a number of them, all over the place, separated by other, unrelated fragments.
Notice how this approach makes the new implementations also portable and easier to plug into the pre-existing runtime—simply implement a strategy for your shell and ship it. The library can find it in the Shell.implementations registry and you can override the existing ones as well, and then maybe suggest a patch to upstream.
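For illustration, here is a self-contained toy version of that registry idea (the method names and messages are made up; duty's real interface would differ):

```python
from __future__ import annotations

import abc
from typing import ClassVar


class Shell(abc.ABC):
    """Base strategy: each subclass registers itself under its shell name."""

    name: ClassVar[str]
    implementations: ClassVar[dict[str, type[Shell]]] = {}

    def __init_subclass__(cls) -> None:
        super().__init_subclass__()
        cls.implementations[cls.name] = cls

    @abc.abstractmethod
    def install_message(self) -> str:
        """Describe where this shell's completions get installed."""


class Bash(Shell):
    name = "bash"

    def install_message(self) -> str:
        return "writing Bash completions"


class Zsh(Shell):
    name = "zsh"

    def install_message(self) -> str:
        return "writing Zsh completions"


def get_shell(name: str) -> Shell:
    """Dispatch on the user's shell via a plain registry lookup."""
    try:
        return Shell.implementations[name]()
    except KeyError:
        raise ValueError(f"unsupported shell: {name}") from None
```

A third-party package can subclass `Shell` to register (or override) a strategy without touching the library's code.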
I'm up for pair programming on this one, hit me up on Discord if you're interested :)
Co-authored-by: Bartosz Sławecki <bartoszpiotrslawecki@gmail.com>
Thanks for the feedback @bswck. Generally, there are only tests left to do now @pawamoy. I'll do some research and experiment with it today. There are still some things worth pointing out though:
# Conflicts:
#	src/duty/completions.bash
Thank you so much for all your hard work on this @j-g00da 🙏
There are some, but since we are using symlinks (in…). On the other hand, should we really care if completion scripts were installed manually? Sure, we can make a test specifically for backward compatibility with 1.5.0, as currently this is the only way to install completions, but with the next update we can promote usage of…
Hmm.
🤔 Package updates are likely more frequent than reinstalls on different Python versions, so let's continue with symlinks 👍 In the end though, it looks to me like we should always recommend users to update the completion script upon updating duty. Also, if the file already exists, don't error out; force rewrite it. I'm waiting for the PR to get out of draft for a full review 🙂
Yes.
Just pushed some work-in-progress tests (only Bash for now) to show what I'm trying to do here. Since some of these modify the system, I decided to mark them with a custom pytest mark and run them in isolated containers. Just a proof of concept for now, not optimized; I'm still not entirely sure if it's the way to go.
```python
@pytest.mark.isolate
def test_completion_function(duties_file: str, partial: str, expected: str) -> None:
    """Test bash `_complete_duty` function."""
    # TODO: Temporary hack, as for now completions don't respect the `-d` flag - to be fixed in another PR.
```
Also a side note - completions always use duties.py in the main directory to generate candidates, because the -d param is not passed to the completion script. I can try to fix it when I'm done with tests, but maybe let's make it another PR.
.github/workflows/ci.yml
Outdated
```yaml
collect-isolated-tests:
  runs-on: ubuntu-latest

  steps:
    - name: Checkout
      uses: actions/checkout@v4
      with:
        fetch-depth: 0
        fetch-tags: true

    - name: Setup Python
      uses: actions/setup-python@v5
      with:
        python-version: "3.12"

    - name: Setup uv
      uses: astral-sh/setup-uv@v3
      with:
        enable-cache: true
        cache-dependency-glob: pyproject.toml

    - name: Install dependencies
      run: make setup

    - name: Collect tests
      run: make collect-isolated-tests

isolated-tests:
  needs: collect-isolated-tests
  strategy:
    matrix:
      os:
        - ubuntu-latest
        - macos-latest
        - windows-latest
      python-version:
        - "3.9"
        - "3.10"
        - "3.11"
        - "3.12"
        - "3.13"
        - "3.14"
      resolution:
        - highest
        - lowest-direct
      pytest-nodeid: ${{ fromJSON(needs.collect-isolated-tests.outputs.isolated_tests) }}
      exclude:
        - os: macos-latest
          resolution: lowest-direct
        - os: windows-latest
          resolution: lowest-direct
  runs-on: ${{ matrix.os }}
  continue-on-error: ${{ matrix.python-version == '3.14' }}

  steps:
    - name: Checkout
      uses: actions/checkout@v4
      with:
        fetch-depth: 0
        fetch-tags: true

    - name: Setup Python
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
        allow-prereleases: true

    - name: Setup uv
      uses: astral-sh/setup-uv@v3
      with:
        enable-cache: true
        cache-dependency-glob: pyproject.toml
        cache-suffix: py${{ matrix.python-version }}

    - name: Install dependencies
      env:
        UV_RESOLUTION: ${{ matrix.resolution }}
      run: make setup

    - name: Run the test suite
      env:
        _DUTY_ISOLATED_TEST_CONTAINER: true
      run: duty test ${{ matrix.pytest-nodeid }} parallel=False
```
IIUC the current solution would create lots of jobs. Also, lots of boilerplate 😕
My suggestion: remove all the boilerplate. Skip relevant tests if env var is unset. Set the env var in CI.
Hmmm, yeah, there's parallelism to handle too. I'm thinking about it.
We could use pytest-xdist's --dist=loadfile option, to make sure all tests within a given module are executed by the same worker. Maybe there's a way to set this option automatically when our custom env var is set. Each test would clean up after itself. I believe this would prevent any race condition.
> IIUC the current solution would create lots of jobs.
Yes, that's why I'm not really sure about it. On the other hand, this is the only way I see to make these tests deterministic. If we run multiple tests on the same machine, we can create some cleanup mechanism, but it won't be foolproof and will depend on the implementation - and if a change in implementation needs a change in the corresponding test, then it's not a good test.
Also there is the problem of parallelism, as you said. Even if we run them in one job (per Python version and OS), we can't parallelize it, so we still need a second matrix for them.
Boilerplate can be reduced for sure, I just wanted to share what I'm trying to do, because this needs to be discussed.
If we were to ditch this idea and run them like the other tests, the easiest solution would be to just call `{Shell}().install_path.unlink()`. It would work for Bash and Zsh; not sure about, for example, PowerShell.
> and if a change in implementation needs a change in corresponding test, then it's not a good test.
That's a tradeoff I'm willing to accept.
> easiest solution would be to just call `{Shell}().install_path.unlink()`
Sounds very reasonable to me 🙂
I mean, yeah, integration tests are nice to have (detecting early when completion install commands no longer work as expected), but at the same time I don't expect shells to change frequently (and that's a euphemism). So I'm on the side of "let's not waste energy writing and maintaining integration tests that aren't relevant to the project specifically". If it works now, chances are very high that it'll keep working for a long time. So, yes please: no boilerplate, no custom handling of these tests, just clean up / tear down and use --dist=loadfile when the env var is set 🙂 (can be done in the test duty)
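The env-var switch could live in one small helper called from the test duty; a sketch (the variable name `TEST_COMPLETIONS` is made up, not the project's actual variable):

```python
def extra_pytest_args(environ: dict[str, str]) -> list[str]:
    """Return the extra pytest flags for completion-installation tests."""
    if environ.get("TEST_COMPLETIONS"):
        # pytest-xdist's loadfile mode keeps all tests of a module on the
        # same worker, so install/uninstall tests can't race each other.
        return ["--dist=loadfile"]
    return []
```

The test duty would then append `extra_pytest_args(os.environ)` to its pytest invocation.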
Sure, I will get back to this when I have some free time. I think we can also force pytest to run the installation test first (before edge cases like overriding a previous installation).
I would not recommend forcing the order of tests as this can hide dependencies between them. Random order forces us to make sure each test is atomic and complete and correctly cleans up after itself.
And of course, no rush, thank you so much for all your work already 😄
No description provided.