
unable to mount ipv6 with glusterfs client 11.1 but glusterfs client 10.3 works #4643

@sob727

Description


Short version: with mount.glusterfs 11.1 I can't mount an IPv6 volume (whether the volume is served by 10.3 or 11.1; neither works).

I tried with two different clusters: one running on Debian Bookworm nodes (glusterfs 10.3), the other on Debian Trixie nodes (glusterfs 11.1).
Both are replica setups (the Bookworm cluster has 4 nodes, the Trixie cluster 3), IPv6 only.

I've been using an all-Bookworm setup for ages with IPv6 only (both transport and client), working fine. However, as I upgraded to Trixie I encountered this regression.

Volumes from both clusters can be mounted by clients running mount.glusterfs version 10.3 on Bookworm-era clients:

$ mount.glusterfs -V
glusterfs 10.3
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
$ mount.glusterfs 2600:dead:beef:dead:beef:ff:fe80:f90b:/volume1 /data
$

Volumes from neither cluster can be mounted by clients running mount.glusterfs version 11.1 on Trixie-era clients:

$ mount.glusterfs -V
glusterfs 11.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

$ mount.glusterfs 2600:dead:beef:dead:beef:ff:fe80:f90b:/volume1 /data
Mounting glusterfs on /data failed.
$ tail -f -n 20 /var/log/glusterfs/data.log 
[2025-12-15 00:13:54.322216 +0000] I [MSGID: 100030] [glusterfsd.c:2872:main] 0-/usr/sbin/glusterfs: Started running version [{arg=/usr/sbin/glusterfs}, {version=11.1}, {cmdlinestr=/usr/sbin/glusterfs --process-name fuse --volfile-server=2600:dead:beef:dead:beef:ff:fe80:f90b --volfile-id=/volume1 /data}] 
[2025-12-15 00:13:54.322782 +0000] I [glusterfsd.c:2562:daemonize] 0-glusterfs: Pid of current running process is 337341
[2025-12-15 00:13:54.324625 +0000] E [MSGID: 101073] [name.c:254:gf_resolve_ip6] 0-resolver: error in getaddrinfo [{family=2}, {ret=Name or service not known}] 
[2025-12-15 00:13:54.324632 +0000] E [name.c:383:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host 2600:4040:926d:9900:5054:ff:fe80
[2025-12-15 00:13:54.324710 +0000] I [glusterfsd-mgmt.c:2783:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: 2600:4040:926d:9900:5054:ff:fe80
[2025-12-15 00:13:54.324722 +0000] I [glusterfsd-mgmt.c:2822:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2025-12-15 00:13:54.324742 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] 
[2025-12-15 00:13:54.324763 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] 
[2025-12-15 00:13:54.324871 +0000] W [glusterfsd.c:1427:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe7f5) [0x7fe7470677f5] -->/usr/sbin/glusterfs(+0x12baa) [0x559a8993ebaa] -->/usr/sbin/glusterfs(cleanup_and_exit+0x64) [0x559a89936fd4] ) 0-: received signum (1), shutting down 
[2025-12-15 00:13:54.324893 +0000] I [fuse-bridge.c:7051:fini] 0-fuse: Unmounting '/data'.
[2025-12-15 00:13:54.325046 +0000] I [fuse-bridge.c:7055:fini] 0-fuse: Closing fuse connection to '/data'.
[2025-12-15 00:13:54.325153 +0000] W [glusterfsd.c:1427:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libc.so.6(+0x92b7b) [0x7fe746ee6b7b] -->/usr/sbin/glusterfs(+0x1285d) [0x559a8993e85d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x64) [0x559a89936fd4] ) 0-: received signum (15), shutting down 
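Two things stand out in the log: the resolver is called with family=2 (AF_INET, i.e. IPv4), and the host it tries to resolve has lost its final hextet. A minimal sketch of how such truncation can happen (this is an assumption about the cause, not confirmed against the 11.1 sources): stripping a trailing `:port`/`:volume` suffix from the server spec with a shortest-suffix match mangles a bare IPv6 literal.

```shell
# Hypothetical illustration of the truncation seen in the log: if a
# mount helper strips a ":suffix" from the server spec with a
# shortest-suffix match, a bare IPv6 literal loses its last group.
server='2600:dead:beef:dead:beef:ff:fe80:f90b'
host="${server%:*}"   # removes ':f90b', leaving an invalid 7-group address
echo "$host"          # 2600:dead:beef:dead:beef:ff:fe80
```

The truncated host printed here has the same shape as the one the log fails to resolve.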

Happy to provide more details if needed. I should mention that all the IPv6 nodes are set up by FQDN. The setup is otherwise very simple; the only change from stock is that it's inet6.
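One quick check worth running on an affected client: family=2 in the log means AF_INET, and an IPv4-only lookup of an IPv6 literal can never succeed, which matches the "Name or service not known" error. getent reproduces both lookups (the literal below is the redacted address from this report):

```shell
# An AF_INET6 lookup of an IPv6 literal parses numerically and succeeds;
# an AF_INET lookup of the same literal fails, as in the 11.1 log.
getent ahostsv6 2600:dead:beef:dead:beef:ff:fe80:f90b && echo "v6 lookup ok"
getent ahostsv4 2600:dead:beef:dead:beef:ff:fe80:f90b || echo "v4 lookup fails"
```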
