
[Bug]: Major compaction is stuck for collection with partition key enabled when there are multiple major tasks #38851

Open
binbinlv opened this issue Dec 30, 2024 · 8 comments


binbinlv commented Dec 30, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: xiaocai2333-debug_datacoord_access_timeout-c8ded3d-20241227
- Deployment mode(standalone or cluster): cluster
- MQ type(rocksmq, pulsar or kafka):    pulsar
- SDK version(e.g. pymilvus v2.0.0rc2): 2.4 latest
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

Major compaction is stuck for a collection with partition key enabled: no response for days.


Expected Behavior

Compaction completes successfully.

Steps To Reproduce

1. Create two collections, one with partition key disabled and one with partition key enabled (10M rows inserted, dim=128)
2. Run major compaction on the collection with partition key disabled
3. Run major compaction on the collection with partition key enabled (a repro sketch follows below)
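A minimal pymilvus sketch of these steps (my own sketch, not the original test script). It assumes pymilvus 2.4.x exposes `is_clustering_key` / `is_partition_key` on `FieldSchema` and the `is_clustering` flag on `compact()` / `wait_for_compaction_completed()`; collection and field names are illustrative, and the row count is scaled down from the 10M used in the report.

```python
# Repro sketch (pymilvus 2.4.x assumed; the schema flags and the is_clustering
# option are assumptions -- verify against your pymilvus version).
import numpy as np
from pymilvus import (connections, Collection, CollectionSchema,
                      FieldSchema, DataType)

connections.connect(host="localhost", port="19530")
DIM = 128

def build_collection(name: str, enable_partition_key: bool) -> Collection:
    fields = [
        FieldSchema("pk", DataType.VARCHAR, max_length=64, is_primary=True),
        # Scalar clustering key, required for major (clustering) compaction.
        FieldSchema("cluster_key", DataType.INT64, is_clustering_key=True),
        # The partition key is the only difference between the two collections.
        FieldSchema("tag", DataType.VARCHAR, max_length=64,
                    is_partition_key=enable_partition_key),
        FieldSchema("vec", DataType.FLOAT_VECTOR, dim=DIM),
    ]
    return Collection(name, CollectionSchema(fields))

coll_plain = build_collection("major_compaction_no_partition_key", False)
coll_pkey = build_collection("major_compaction_partition_key", True)

# The original report inserted ~10M rows; a smaller batch keeps the sketch short.
n = 100_000
for coll in (coll_plain, coll_pkey):
    coll.insert([
        [f"id_{i}" for i in range(n)],
        np.random.randint(0, 1000, n).tolist(),
        [f"tag_{i % 64}" for i in range(n)],
        np.random.random((n, DIM)).tolist(),
    ])
    coll.flush()

# Step 2: major compaction on the partition-key-disabled collection completes.
coll_plain.compact(is_clustering=True)
coll_plain.wait_for_compaction_completed(is_clustering=True)

# Step 3: major compaction on the partition-key-enabled collection hangs
# before the fix (scheduler deadlock described in the analysis below).
coll_pkey.compact(is_clustering=True)
coll_pkey.wait_for_compaction_completed(is_clustering=True)
```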

Milvus Log

grafana:
https://grafana-4am.zilliz.cc/d/uLf5cJ3Ga/milvus2-0?orgId=1&from=now-1h&to=now&var-datasource=prometheus&var-cluster=&var-namespace=qa-milvus&var-instance=major-24-ndoap&var-collection=All&var-app_name=milvus

log:
https://grafana-4am.zilliz.cc/explore?orgId=1&panes=%7B%22pew%22:%7B%22datasource%22:%22vhI6Vw67k%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bcluster%3D%5C%224am%5C%22,namespace%3D%5C%22qa-milvus%5C%22,pod%3D~%5C%22major-24-ndoap.%2A%5C%22%7D%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22vhI6Vw67k%22%7D%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&schemaVersion=1

Anything else?

stuck collection name: major_compaction_collection_enable_scalar_clustering_key_1kw_pk_string

@binbinlv binbinlv added kind/bug Issues or changes related a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 30, 2024
@binbinlv

/assign @xiaocai2333

@binbinlv binbinlv added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 30, 2024
@binbinlv binbinlv added this to the 2.4.20 milestone Dec 30, 2024
@binbinlv binbinlv added the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label Dec 30, 2024
@binbinlv

This is a functional regression introduced by the new code, so I set the urgent label.

@binbinlv

The query with count(*) is also stuck on this collection (name: "major_compaction_collection_enable_scalar_clustering_key_1kw_pk_string").

The query with count(*) on another collection (name: "major_compaction_collection_enable_scalar_clustering_key_1kw") succeeds, but it has become very slow compared with before.
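For reference, the hanging query has the count(*) form below (a minimal sketch assuming a local Milvus connection and an already-built index; the collection name is taken from the report):

```python
from pymilvus import connections, Collection

connections.connect(host="localhost", port="19530")
coll = Collection("major_compaction_collection_enable_scalar_clustering_key_1kw_pk_string")
coll.load()

# Hangs on the affected collection; very slow on the sibling "..._1kw" collection.
print(coll.query(expr="", output_fields=["count(*)"]))
```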


xiaocai2333 commented Dec 30, 2024

Based on the logs, it was observed that the compaction task stopped scheduling at a certain point. After further analysis, a deadlock was suspected. Using pprof, it was discovered that the scheduler had been attempting to acquire a task lock for an extended period.

The root cause was identified as a write lock that had been held for too long without being released.

Upon reviewing the code, it was found that the issue occurred because the function failed to release the lock upon returning.
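The scheduler code in question is Go, so the sketch below is only an illustration of the locking pattern described above, written in Python to match the other examples here and not taken from the Milvus codebase: a method takes the write lock, hits an early-return path, and never releases it, so the scheduler's next attempt to acquire the lock blocks forever. Releasing on every path (try/finally, a context manager, or `defer mu.Unlock()` in Go) is the general shape of the fix.

```python
import threading

class TaskScheduler:
    """Illustration of the bug pattern only -- not the Milvus code.
    The real scheduler uses a Go sync.RWMutex; the mistake is the same:
    an early return that skips the unlock."""

    def __init__(self):
        self._task_lock = threading.Lock()   # stands in for the task write lock
        self._tasks = {}

    def submit_buggy(self, task_id, task):
        self._task_lock.acquire()            # write lock taken...
        if task_id in self._tasks:
            # Early return WITHOUT releasing the lock: every later caller,
            # including the scheduling loop, now blocks on acquire() forever.
            return False
        self._tasks[task_id] = task
        self._task_lock.release()
        return True

    def submit_fixed(self, task_id, task):
        # Release on every path; the context manager plays the role of
        # `defer lock.Unlock()` in Go.
        with self._task_lock:
            if task_id in self._tasks:
                return False
            self._tasks[task_id] = task
            return True
```

With `submit_buggy`, a single early return leaves the lock held, which matches the pprof observation above: the scheduler sits in lock acquisition and no further compaction tasks are scheduled.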

@binbinlv

/unassign @yanliang567

@binbinlv binbinlv changed the title [Bug]: Major compaction is stuck for collection with partition key enabled when there are multple major tasks [Bug]: Major compaction is stuck for collection with partition key enabled when there are multiple major tasks Dec 30, 2024
@binbinlv

The query with count(*) is also stuck on this collection (name: "major_compaction_collection_enable_scalar_clustering_key_1kw_pk_string").

The query with count(*) on another collection (name: "major_compaction_collection_enable_scalar_clustering_key_1kw") succeeds, but it has become very slow compared with before.

When the query is re-triggered on collection "major_compaction_collection_enable_scalar_clustering_key_1kw_pk_string", it returns an error:

pymilvus.exceptions.MilvusException: <MilvusException: (code=65535, message=fail to Query on QueryNode 8: Timestamp lag too large)>

xiaofan-luan pushed a commit that referenced this issue Dec 30, 2024
xiaofan-luan pushed a commit that referenced this issue Dec 30, 2024
sre-ci-robot pushed a commit that referenced this issue Dec 31, 2024

binbinlv commented Jan 2, 2025

Verified as fixed in the PR image:

milvus: PR-38848-20241231-6c5664aa
pymilvus: 2.4-latest


binbinlv commented Jan 2, 2025

Cancelling the urgent label for now; will close this issue once the fix PR is merged and the issue is verified using the 2.4-latest image.

@binbinlv binbinlv removed the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label Jan 2, 2025