Memory Leaks Detected #4391

Open
l392zhan opened this issue Jul 9, 2024 · 0 comments
l392zhan commented Jul 9, 2024

Description of problem:
There is a memory leak reported by ASan after booting the file system and then shutting it down, regardless of whether any user operations were performed.

The exact command to reproduce the issue:
Boot the file system, wait a few seconds, and then shut it down.

The full output of the command that failed:

==405==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 131188 byte(s) in 1 object(s) allocated from:
#0 0x4a046d in malloc (/usr/local/sbin/glusterfs+0x4a046d)
#1 0x7f2fdae47e1e in __gf_malloc /root/glusterfs/libglusterfs/src/mem-pool.c:231:11
#2 0x7f2fdae5362e in iobuf_get_from_small /root/glusterfs/libglusterfs/src/iobuf.c:451:13
#3 0x7f2fdae5362e in iobuf_get2 /root/glusterfs/libglusterfs/src/iobuf.c:482:17
#4 0x7f2fdae543b6 in iobuf_get /root/glusterfs/libglusterfs/src/iobuf.c:556:13
#5 0x7f2fd823d144 in fuse_thread_proc /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6150:17
#6 0x7f2fdaaceea6 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x7ea6)

SUMMARY: AddressSanitizer: 131188 byte(s) leaked in 1 allocation(s).
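
The trace points at the per-request iobuf taken near the top of the fuse reader loop (fuse_thread_proc -> iobuf_get). Below is a minimal, self-contained C sketch of one way such a leak can arise with an allocate-per-iteration pattern: the thread takes a buffer each cycle and, on shutdown, leaves the loop after the buffer was taken but before it was released. All names (iobuf_get_sketch, iobuf_unref_sketch, reader_loop_sketch) and the buffer size are hypothetical stand-ins for illustration only, not GlusterFS code.

#include <stdlib.h>

/* Hypothetical stand-ins for the iobuf pool API seen in the trace
 * (iobuf_get / iobuf_unref); names and sizes are illustrative only. */
struct iobuf { void *ptr; size_t size; };

static struct iobuf *iobuf_get_sketch(size_t size)
{
    struct iobuf *iob = calloc(1, sizeof(*iob));
    if (!iob)
        return NULL;
    iob->ptr = malloc(size);          /* the allocation ASan would flag */
    iob->size = size;
    return iob;
}

static void iobuf_unref_sketch(struct iobuf *iob)
{
    if (!iob)
        return;
    free(iob->ptr);
    free(iob);
}

/* Sketch of the reader-loop pattern: a buffer is taken at the top of every
 * iteration; if the loop is left (e.g. on unmount/shutdown) after the buffer
 * was taken but before it was consumed or released, that last buffer leaks. */
static void reader_loop_sketch(volatile int *shutting_down)
{
    struct iobuf *iob = NULL;

    while (1) {
        iob = iobuf_get_sketch(131072);   /* per-request buffer */
        if (!iob)
            break;

        if (*shutting_down)
            break;                        /* leaves without releasing iob */

        /* ... read a request into iob->ptr and hand it off ... */
        iobuf_unref_sketch(iob);
        iob = NULL;
    }

    /* a teardown path that also releases the in-flight buffer avoids the leak */
    iobuf_unref_sketch(iob);
}

int main(void)
{
    volatile int shutting_down = 1;
    reader_loop_sketch(&shutting_down);
    return 0;
}

In the sketch, releasing the in-flight buffer on the exit path is what keeps ASan quiet; whether the real fix belongs in the fuse shutdown path or in iobuf pool teardown is left to the maintainers.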

Expected results:
No memory leaks should be reported after a clean mount and shutdown.

Mandatory info:
- The output of the gluster volume info command:
Volume Name: gv0
Type: Distribute
Volume ID: 5eadb70e-c4c6-4af1-a132-79e0257be33f
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.1.1:/data/brick0/gv0
Brick2: 127.0.1.1:/data/brick1/gv0
Brick3: 127.0.1.1:/data/brick2/gv0
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet

- The output of the gluster volume status command:
Status of volume: gv0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 127.0.1.1:/data/brick0/gv0 58236 0 Y 513
Brick 127.0.1.1:/data/brick1/gv0 49982 0 Y 528
Brick 127.0.1.1:/data/brick2/gv0 50168 0 Y 543

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

- The output of the gluster volume heal command:
Launching heal operation to perform index self heal on volume gv0 has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume gv0

- Provide logs present on following locations of client and server nodes:
/var/log/glusterfs/

- Is there any crash? Provide the backtrace and coredump:
No.

Additional info:

- The operating system / glusterfs version:
Linux kernel version: 6.2.0
OS version: Debian 11.8
GlusterFS version: 11.1

