[Bug]: [benchmark][standalone] Milvus memory usage is larger than before, causing Milvus OOM #38213
Comments
same case, but with shard_num=1, memory usage is normal.
test case name: test_hybrid_search_locust_shard1_float_dql_ivf_sq8_standalone

enabled queryNode.mmap.scalarField=true
test case name: test_hybrid_search_locust_shard16_float_dql_ivf_sq8_standalone
server:

March master image: master-20240329-88d426f39-amd64
test case name: test_hybrid_search_locust_shard16_float_dql_ivf_sq8_standalone
server:

I doubt this is a delete issue.

There is no delete in this scenario, only concurrent hybrid_search. @czs007 is helping with this.

related to #38492
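
One comment above enables `queryNode.mmap.scalarField=true` to reduce query-node memory usage. As a hedged sketch, the dotted key from that comment would map onto the nested structure of a Milvus `milvus.yaml` override roughly as below; the exact key names and their availability should be verified against the configuration reference for the Milvus version under test:

```yaml
# Hypothetical milvus.yaml fragment, reconstructed only from the flag
# queryNode.mmap.scalarField=true quoted in the comment above.
queryNode:
  mmap:
    # Memory-map scalar field data from disk instead of loading it
    # fully into RAM, trading some query latency for lower memory usage.
    scalarField: true
```

The same override can typically also be supplied through the Helm chart's `extraConfigFiles` or an environment-style dotted key, depending on how the standalone deployment is configured.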
Is there an existing issue for this?
Environment
Current Behavior
argo task: multi-vector-corn-kjs2s
test case name: test_hybrid_search_locust_shard16_float_dql_ivf_sq8_standalone
server:
client steps:
normal running
image: 2.4-20240415-1e4f5ee9-amd64
index IVF_SQ8 memory usage is less than 35 GB
Expected Behavior
No response
Steps To Reproduce
Milvus Log
No response
Anything else?
test result:
concurrent_number=1
concurrent_number=20