[CHORE] Swordfish perf + cleanup #3132
Closed
Commits (12 total; changes shown from 6)
c36af6a
parallel buffered sinks, pivot fix, morsel size 1 for tests
f1f3c6a
fix test
67b429d
loole
9ebdccf
loole channel, dispatcher, probe bridge
fc26098
undo test changes
8767751
why are my tests not running
3dd1ff3
pipeline node start &self
3150d8b
Merge branch main into colin/swordfish-boost-2
d7bfa8f
no async trait
3ace474
fix unpivot tests
b58e064
Merge branch main into colin/swordfish-boost-2
7deb319
fix tests
@@ -0,0 +1,97 @@
use std::{cmp::Ordering::*, collections::VecDeque, sync::Arc};

use common_error::DaftResult;
use daft_micropartition::MicroPartition;

// A buffer that accumulates morsels until a row-count threshold is reached
pub struct RowBasedBuffer {
    pub buffer: VecDeque<Arc<MicroPartition>>,
    pub curr_len: usize,
    pub threshold: usize,
}

impl RowBasedBuffer {
    pub fn new(threshold: usize) -> Self {
        assert!(threshold > 0);
        Self {
            buffer: VecDeque::new(),
            curr_len: 0,
            threshold,
        }
    }

    // Push a morsel onto the buffer
    pub fn push(&mut self, part: Arc<MicroPartition>) {
        self.curr_len += part.len();
        self.buffer.push_back(part);
    }

    // Pop morsels once the threshold is reached:
    // - If the buffer holds fewer rows than the threshold, return None
    // - If the buffer holds exactly the threshold number of rows, return them as a single morsel
    // - If the buffer holds more rows than the threshold, return a vec of morsels, each sized
    //   exactly to the threshold. Any leftover rows are pushed back onto the buffer.
    pub fn pop_enough(&mut self) -> DaftResult<Option<Vec<Arc<MicroPartition>>>> {
        match self.curr_len.cmp(&self.threshold) {
            Less => Ok(None),
            Equal => {
                if self.buffer.len() == 1 {
                    let part = self.buffer.pop_front().unwrap();
                    self.curr_len = 0;
                    Ok(Some(vec![part]))
                } else {
                    let chunk = MicroPartition::concat(
                        &std::mem::take(&mut self.buffer)
                            .iter()
                            .map(|x| x.as_ref())
                            .collect::<Vec<_>>(),
                    )?;
                    self.curr_len = 0;
                    Ok(Some(vec![Arc::new(chunk)]))
                }
            }
            Greater => {
                let num_ready_chunks = self.curr_len / self.threshold;
                let concated = MicroPartition::concat(
                    &std::mem::take(&mut self.buffer)
                        .iter()
                        .map(|x| x.as_ref())
                        .collect::<Vec<_>>(),
                )?;
                let mut start = 0;
                let mut parts_to_return = Vec::with_capacity(num_ready_chunks);
                for _ in 0..num_ready_chunks {
                    let end = start + self.threshold;
                    let part = Arc::new(concated.slice(start, end)?);
                    parts_to_return.push(part);
                    start = end;
                }
                if start < concated.len() {
                    let part = Arc::new(concated.slice(start, concated.len())?);
                    self.curr_len = part.len();
                    self.buffer.push_back(part);
                } else {
                    self.curr_len = 0;
                }
                Ok(Some(parts_to_return))
            }
        }
    }

    // Pop all remaining morsels in the buffer, regardless of the threshold
    pub fn pop_all(&mut self) -> DaftResult<Option<Arc<MicroPartition>>> {
        assert!(self.curr_len < self.threshold);
        if self.buffer.is_empty() {
            Ok(None)
        } else {
            let concated = MicroPartition::concat(
                &std::mem::take(&mut self.buffer)
                    .iter()
                    .map(|x| x.as_ref())
                    .collect::<Vec<_>>(),
            )?;
            self.curr_len = 0;
            Ok(Some(Arc::new(concated)))
        }
    }
}
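To show how a buffer like this is typically driven by an operator loop, here is a minimal, self-contained sketch of the same accumulate-to-threshold pattern. It uses Vec<u32> as a stand-in for Arc<MicroPartition>, and the ToyBuffer type and its names are purely illustrative, not part of this PR.

// Illustrative stand-in for the RowBasedBuffer pattern above (not part of the PR).
use std::collections::VecDeque;

struct ToyBuffer {
    buffer: VecDeque<Vec<u32>>, // queued "morsels"
    curr_len: usize,            // total rows currently buffered
    threshold: usize,           // target rows per emitted chunk
}

impl ToyBuffer {
    fn new(threshold: usize) -> Self {
        assert!(threshold > 0);
        Self { buffer: VecDeque::new(), curr_len: 0, threshold }
    }

    // Accumulate a morsel of rows.
    fn push(&mut self, part: Vec<u32>) {
        self.curr_len += part.len();
        self.buffer.push_back(part);
    }

    // Once enough rows have accumulated, drain threshold-sized chunks and
    // push any remainder back for the next round (mirrors pop_enough above).
    fn pop_enough(&mut self) -> Option<Vec<Vec<u32>>> {
        if self.curr_len < self.threshold {
            return None;
        }
        let mut rest: Vec<u32> =
            std::mem::take(&mut self.buffer).into_iter().flatten().collect();
        let mut out = Vec::new();
        while rest.len() >= self.threshold {
            let tail = rest.split_off(self.threshold);
            out.push(rest);
            rest = tail;
        }
        self.curr_len = rest.len();
        if !rest.is_empty() {
            self.buffer.push_back(rest);
        }
        Some(out)
    }
}

fn main() {
    let mut buf = ToyBuffer::new(4);
    for morsel in [vec![1, 2], vec![3, 4, 5], vec![6, 7, 8, 9, 10]] {
        buf.push(morsel);
        if let Some(chunks) = buf.pop_enough() {
            println!("ready chunks: {chunks:?}"); // each chunk holds exactly 4 rows
        }
    }
}

A real sink would additionally call pop_all() once the input stream ends, to flush whatever remains below the threshold.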
@kevinzwang made a great callout the other day that round-robin dispatching can be inefficient when ordering is not required. This can be fixed using https://docs.rs/loole/latest/loole/ channels, which are multi-producer multi-consumer: when order does not need to be maintained, idle workers pull the next morsel from a shared channel, which essentially gives us work-stealing.
This script is 4x faster now.
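For context, here is a minimal sketch of that dispatch idea, assuming loole's flume-like synchronous API (loole::bounded with cloneable Sender and Receiver, loole added as a dependency). The worker setup and morsel type are illustrative, not the actual swordfish dispatcher code.

// Sketch: work-stealing-style dispatch over a shared MPMC channel (illustrative only).
use std::thread;

fn main() {
    let (tx, rx) = loole::bounded::<Vec<u32>>(16);

    // Multiple consumers share one receiver: whichever worker is idle picks up
    // the next morsel, so no per-worker round-robin queue is needed.
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let rx = rx.clone();
            thread::spawn(move || {
                while let Ok(morsel) = rx.recv() {
                    // Process the morsel; output order across workers is not preserved.
                    println!("worker {id} got {} rows", morsel.len());
                }
            })
        })
        .collect();

    // A single producer dispatches morsels without choosing a worker.
    for i in 0..16 {
        tx.send(vec![i; 8]).unwrap();
    }
    drop(tx); // closing the channel lets workers exit their recv loops

    for w in workers {
        w.join().unwrap();
    }
}

Because every worker pulls from the same channel, a slow worker never stalls the others, whereas a round-robin dispatcher would let morsels queue up behind it.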