Description of problem:
There is a memory leak reported by ASan after booting the file system and then shutting it down, regardless of whether any user operations were performed.
The exact command to reproduce the issue:
Boot the file system, wait a few seconds, and then shut down.
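For reference, a minimal reproduction sketch. The mount point `/mnt/gv0` is an example; the volume name `gv0` matches the volume info below, and the glusterfs client is assumed to be built with AddressSanitizer enabled:

```sh
# Assumed paths: volume gv0 (see volume info below), example mount point /mnt/gv0.
mkdir -p /mnt/gv0
mount -t glusterfs 127.0.1.1:/gv0 /mnt/gv0   # boot the file system
sleep 5                                      # wait a few seconds, no I/O needed
umount /mnt/gv0                              # shut down; ASan prints the leak report when the client exits
```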
The full output of the command that failed:
==405==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 131188 byte(s) in 1 object(s) allocated from:
#0 0x4a046d in malloc (/usr/local/sbin/glusterfs+0x4a046d)
#1 0x7f2fdae47e1e in __gf_malloc /root/glusterfs/libglusterfs/src/mem-pool.c:231:11
#2 0x7f2fdae5362e in iobuf_get_from_small /root/glusterfs/libglusterfs/src/iobuf.c:451:13
#3 0x7f2fdae5362e in iobuf_get2 /root/glusterfs/libglusterfs/src/iobuf.c:482:17
#4 0x7f2fdae543b6 in iobuf_get /root/glusterfs/libglusterfs/src/iobuf.c:556:13
#5 0x7f2fd823d144 in fuse_thread_proc /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6150:17
#6 0x7f2fdaaceea6 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x7ea6)
SUMMARY: AddressSanitizer: 131188 byte(s) leaked in 1 allocation(s).
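The trace suggests that the iobuf obtained via iobuf_get() in fuse_thread_proc() for the next FUSE request is still held when the thread exits at shutdown. The following standalone sketch uses plain malloc/free instead of the GlusterFS iobuf API, purely to illustrate the allocation pattern ASan is flagging (it is not the actual fuse-bridge code); built with `gcc -g -fsanitize=address -pthread`, it produces an analogous one-object direct leak report:

```c
/* Sketch of the suspected pattern: a reader thread allocates a buffer for the
 * next request before blocking, and on shutdown it exits while still holding
 * the last buffer, so LeakSanitizer reports one outstanding allocation. */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static volatile int shutting_down = 0;   /* set by the "shutdown" path */

static void *reader_thread(void *arg)
{
    (void)arg;
    while (1) {
        /* Allocate a buffer for the next request, analogous to iobuf_get()
         * being called before the blocking read on /dev/fuse. */
        char *buf = malloc(131188);
        if (!buf)
            break;

        /* Stand-in for the blocking read; during shutdown nothing arrives. */
        usleep(100 * 1000);

        if (shutting_down) {
            /* Bug being illustrated: the thread returns without releasing
             * the buffer it just allocated, which is what ASan flags. */
            return NULL;
        }

        /* Normal path: the buffer is consumed and released. */
        free(buf);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, reader_thread, NULL);
    sleep(1);            /* "wait a few seconds" */
    shutting_down = 1;   /* "then shut down"     */
    pthread_join(tid, NULL);
    return 0;            /* ASan reports one leaked 131188-byte buffer */
}
```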
Expected results:
ASan should not report any memory leaks.
Mandatory info:
- The output of the gluster volume info command:
Volume Name: gv0
Type: Distribute
Volume ID: 5eadb70e-c4c6-4af1-a132-79e0257be33f
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.1.1:/data/brick0/gv0
Brick2: 127.0.1.1:/data/brick1/gv0
Brick3: 127.0.1.1:/data/brick2/gv0
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
- The output of the gluster volume status command:
Status of volume: gv0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 127.0.1.1:/data/brick0/gv0 58236 0 Y 513
Brick 127.0.1.1:/data/brick1/gv0 49982 0 Y 528
Brick 127.0.1.1:/data/brick2/gv0 50168 0 Y 543
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
- The output of the gluster volume heal command:
Launching heal operation to perform index self heal on volume gv0 has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume gv0
- Provide logs present at the following locations of client and server nodes:
/var/log/glusterfs/
- Is there any crash? Provide the backtrace and coredump:
No.
Additional info:
- The operating system / glusterfs version:
Linux kernel version: 6.2.0
OS version: Debian 11.8
GlusterFS version: 11.1
Note: please hide any confidential data that you don't want to share publicly, such as IP addresses, file names, hostnames, or other configuration details.