Description
What happened:
When using AAAA dual records (e.g. `::ffff:192.168.20.3`), the Pi-Hole v6 client goes into a crash loop.

This may also affect the v5 client; I never tried these dual records with it, but they work fine with other external DNS providers (e.g. the UniFi webhook).

If I use `--exclude-record-types=AAAA` and manually create the records in Pi-Hole, everything works fine as well, so I believe Pi-Hole itself is fine with these types of records.
What you expected to happen:
AAAA dual records should work; for instance, they work fine with the UniFi webhook.
How to reproduce it (as minimally and precisely as possible):
Using the staging image `v20250403-v0.16.1-70-gc5af75e3`, I get the following error when I have the source set to `service`. (I get the same fatal error for both `upsert-only` and `sync`; I'm using `sync` here when writing it up.) There are no records at all in the Pi-Hole UI when I start up external-dns:
```
time="2025-04-03T21:41:04-05:00" level=debug msg="Endpoints generated from service: kube-system/cilium-gateway-internal: [internal.domain.name 0 IN A 192.168.20.3 [] internal.domain.name 0 IN AAAA ::ffff:192.168.20.3 []]"
...
time="2025-04-03T21:41:04-05:00" level=info msg="PUT internal.domain.name IN AAAA -> ::ffff:192.168.20.3"
time="2025-04-03T21:41:04-05:00" level=info msg="PUT internal.domain.name IN A -> 192.168.20.3"
...
time="2025-04-03T21:41:09-05:00" level=debug msg="Endpoints generated from service: kube-system/cilium-gateway-internal: [internal.domain.name 0 IN A 192.168.20.3 [] internal.domain.name 0 IN AAAA ::ffff:192.168.20.3 []]"
...
time="2025-04-03T21:41:09-05:00" level=info msg="PUT internal.domain.name IN AAAA -> ::ffff:192.168.20.3"
time="2025-04-03T21:41:09-05:00" level=debug msg="Error on request %!s(<nil>)"
time="2025-04-03T21:41:09-05:00" level=fatal msg="Failed to do run once: received 400 status code from request: [bad_request] Item already present (Uniqueness of items is enforced) - 0.000316s"
```
(These are the only logs, with debug enabled, that include `192.168.20.3`.)

This causes the pod to enter `CrashLoopBackOff`.
When the pod retries, it shows instead:
```
time="2025-04-03T21:48:40-05:00" level=debug msg="Endpoints generated from service: kube-system/cilium-gateway-external: [internal.domain.name 0 IN A 192.168.20.3 [] internal.domain.name 0 IN AAAA ::ffff:192.168.20.3 []]"
...
time="2025-04-03T21:48:40-05:00" level=info msg="PUT internal.domain.name IN AAAA -> ::ffff:192.168.20.3"
time="2025-04-03T21:48:40-05:00" level=debug msg="Error on request %!s(<nil>)"
time="2025-04-03T21:48:40-05:00" level=fatal msg="Failed to do run once: received 400 status code from request: [bad_request] Item already present (Uniqueness of items is enforced) - 0.000280s"
```
(So only the one `PUT` shows up, instead of two.)
If I switch the source from `service` to `gateway-httproute`, the `A` record gets deleted and my `CNAME` records get created as expected, but the `AAAA` records do not get deleted; I still see them in the Pi-Hole UI. (And I see `All records are already up to date` in the logs; the pod does not crash.)
If I then switch back from `gateway-httproute` to `service`, all the `CNAME` records get deleted, but I get the fatal error before the `A` record is even created, since it fails to re-create the `AAAA` record first.
Adding `--exclude-record-types=AAAA` fixes the crash, but obviously the `AAAA` records then do not get created.
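For anyone else hitting this, the workaround looks like this in a standard external-dns Deployment (container args are illustrative for my setup; only `--exclude-record-types=AAAA` is the actual change):

```yaml
# Illustrative external-dns container args; the last flag is the workaround.
args:
  - --source=service
  - --provider=pihole
  - --exclude-record-types=AAAA  # skip AAAA records until this is fixed
```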
Anything else we need to know?:
Environment:

- External-DNS version (use `external-dns --version`): `v20250403-v0.16.1-70-gc5af75e3`
- DNS provider: Pi-Hole v6
- Others: `pihole-FTL --version`: v6.1