
Conversation

@vincentDcmps

Hello,
I have applied a change to customize `allowed_ips` depending on the host.

@githubixx
Owner

Hello! I'm not so sure what this change might be needed for, TBH 😉 You can already set `wireguard_allowed_ips` per host, and if you want host routes you can assign a value such as `10.0.0.2/32,192.168.1.41/32`.
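For reference, the per-host approach described above could look like this — a minimal sketch assuming an inventory host named `oscar` (the file path is illustrative; the addresses are the ones from the comment):

```yaml
# host_vars/oscar.yml (hypothetical path)
# Limit the routes installed for this host to two /32 host routes.
wireguard_allowed_ips: "10.0.0.2/32,192.168.1.41/32"
```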

@vincentDcmps
Author

In my case I have three devices with one central device.

With my modification, for example on gerard and oscar:

```yaml
wireguard_byhost_allowed_ips:
  merlin: 10.0.0.6,192.168.1.41
```

```mermaid
flowchart LR
  A[oscar] <--> B[merlin]
  C[gerard] <--> B
```

So I don't want oscar and gerard to communicate with each other directly over WireGuard, because they are on the same LAN.
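The intent of the change can be sketched like this — a hypothetical Jinja2 snippet, not necessarily the PR's actual implementation, showing how a per-peer entry from `wireguard_byhost_allowed_ips` could override the global `wireguard_allowed_ips` in the generated peer section (the `host` loop variable is assumed):

```jinja
{# Hypothetical template logic: prefer the per-peer entry for this host
   from wireguard_byhost_allowed_ips, otherwise fall back to the global
   wireguard_allowed_ips value. #}
AllowedIPs = {{ wireguard_byhost_allowed_ips[host] | default(wireguard_allowed_ips) }}
```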

If I set `wireguard_allowed_ips` as you suggest, I would instead get something like this:

```mermaid
flowchart LR
  A[oscar] <--> B[merlin]
  C[gerard] <--> B
  A <--> C
```

@githubixx
Owner

I somehow still don't get this PR 😉 Personally it seems wrong to me to have a "global" variable defining a dictionary keyed by hostname while you have Ansible's host inventory on the other side. So if you have something specific that only applies to one host, why not use host variables? 😕

I guess this Molecule test comes more or less close to your use case: https://github.com/githubixx/ansible-role-wireguard/tree/master/molecule/kvm-single-server Can you maybe use that one as a template and adjust it accordingly so that it matches your use case? You don't need to execute it as you most probably don't have Vagrant and KVM. But it'd give me an idea.

gregorydlogan added a commit to gregorydlogan/ansible-role-wireguard that referenced this pull request Mar 5, 2024
@Unit193

Unit193 commented Apr 2, 2024

@vincentDcmps I'm a little late to the party, but thanks for filing this PR. It's exactly what I needed! :)

