Fix: flickery frames due to leaky event handling #36

Merged (2 commits) on Nov 12, 2024
4 changes: 2 additions & 2 deletions packages/scatterbrain/package.json
@@ -1,6 +1,6 @@
{
"name": "@alleninstitute/vis-scatterbrain",
-"version": "0.0.4",
+"version": "0.0.5",
"contributors": [
{
"name": "Lane Sawyer",
@@ -58,4 +58,4 @@
"lodash": "^4.17.21",
"regl": "^2.1.0"
}
-}
+}
36 changes: 15 additions & 21 deletions packages/scatterbrain/src/abstract/async-frame.ts
@@ -92,7 +92,7 @@ export function beginFrame<
renderItem(itemToRender, dataset, settings, maybe);
}
};
-const reportStatus = (event: AsyncFrameEvent<Dataset, Item>, synchronous: boolean = false) => {
+const reportStatus = (event: AsyncFrameEvent<Dataset, Item>, synchronous: boolean) => {
// we want to report our status, however the flow of events can be confusing -
// our callers anticipate an asynchronous (long running) frame to be started,
// but there are scenarios in which the whole thing is completely synchronous
@@ -102,11 +102,11 @@ export function beginFrame<
if (synchronous) {
lifecycleCallback(event);
} else {
-Promise.resolve().then(() => lifecycleCallback(event));
+Promise.resolve().then(() => lifecycleCallback(event))
}
};

-const doWorkOnQueue = (intervalId: number) => {
+const doWorkOnQueue = (intervalId: number, synchronous: boolean = false) => {
// try our best to cleanup if something goes awry
const startWorkTime = performance.now();
const cleanupOnError = (err: unknown) => {
@@ -117,7 +117,7 @@ export function beginFrame<
abort.abort(err);
clearInterval(intervalId);
// pass the error somewhere better:
-reportStatus({ status: 'error', error: err }, true);
+reportStatus({ status: 'error', error: err }, synchronous);
};
while (mutableCache.getNumPendingTasks() < Math.max(maximumInflightAsyncTasks, 1)) {
// We know there are items in the queue because of the check above, so we assert the type exist
@@ -131,7 +131,7 @@ export function beginFrame<
requestsForItem(itemToRender, dataset, settings, abort.signal),
partial(fancy, itemToRender),
toCacheKey,
-() => reportStatus({ status: 'progress', dataset, renderedItems: [itemToRender] }, true)
+() => reportStatus({ status: 'progress', dataset, renderedItems: [itemToRender] }, synchronous)
);
if (result !== undefined) {
// put this cancel callback in a list where we can invoke if something goes wrong
@@ -152,30 +152,24 @@
if (mutableCache.getNumPendingTasks() < 1) {
// we do want to wait for that last in-flight task to actually finish though:
clearInterval(intervalId);
-reportStatus({ status: 'finished' });
+reportStatus({ status: 'finished' }, synchronous);
}
return;
}
};
-const interval = setInterval(() => doWorkOnQueue(interval), queueProcessingIntervalMS);

// do some work right now...
+reportStatus({ status: 'begin' }, true)
+const interval = setInterval(() => doWorkOnQueue(interval), queueProcessingIntervalMS);
if (queue.length > 0) {
-reportStatus({ status: 'begin' }, true);
-doWorkOnQueue(interval);
-// return a function to allow our caller to cancel the frame - guaranteed that no settings/data will be
-// touched/referenced after cancellation, unless the author of render() did some super weird bad things
-return {
-cancelFrame: (reason?: string) => {
-taskCancelCallbacks.forEach((cancelMe) => cancelMe());
-abort.abort(new DOMException(reason, 'AbortError'));
-clearInterval(interval);
-reportStatus({ status: 'cancelled' });
-},
-};
+doWorkOnQueue(interval, false)
Author comment: here is where we do rendering work as fast as we can - by not waiting for our setInterval() function to do it for us. To help prevent confusing situations, I now pass false to the worker, indicating that all of its lifecycle callbacks should happen later, on a promise.

}
return {
-cancelFrame: () => {},
+cancelFrame: (reason?: string) => {
+taskCancelCallbacks.forEach((cancelMe) => cancelMe());
+abort.abort(new DOMException(reason, 'AbortError'));
+clearInterval(interval);
+reportStatus({ status: 'cancelled' }, true);
+}
};
}
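The microtask-deferral trick in reportStatus() above can be sketched in isolation. Below is a minimal, hypothetical TypeScript sketch (beginFakeFrame and its types are invented stand-ins, not the vis-scatterbrain API): deferring lifecycle callbacks via Promise.resolve().then() guarantees the caller has received the returned frame handle before any event can fire.

```typescript
// Minimal sketch (assumed names, not the real API): defer lifecycle events
// to a microtask so they can never fire before the caller stores the handle.
type LifecycleEvent = { status: 'begin' | 'finished' };

function beginFakeFrame(onEvent: (e: LifecycleEvent) => void) {
    const report = (e: LifecycleEvent, synchronous: boolean) => {
        if (synchronous) {
            onEvent(e); // fires immediately, inside the caller's call stack
        } else {
            // microtask: runs only after the current call stack unwinds
            Promise.resolve().then(() => onEvent(e));
        }
    };
    // even a frame that finishes "instantly" reports asynchronously:
    report({ status: 'begin' }, false);
    report({ status: 'finished' }, false);
    return { cancelFrame: () => {} };
}

const order: string[] = [];
const handle = beginFakeFrame((e) => order.push(e.status));
order.push('returned'); // the handle exists before any event is observed
```

Had report() been called with synchronous = true here, the 'begin' event would run before beginFakeFrame() even returned - exactly the kind of surprise this PR removes from the asynchronous path.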

9 changes: 8 additions & 1 deletion packages/scatterbrain/src/abstract/render-server.ts
@@ -168,12 +168,19 @@ export class RenderServer {
}
};
this.clients.set(client, {
-frame: renderFn(image, this.cache, hijack),
Author comment: this was the bulk of the issue experienced in Skyler's work - renderFn() often returns a frame that is already finished. That means event handlers (like the one that copies the WebGL buffer to the screen) run before we even write this little object to our map of clients for the render server - which, of course, would be surprising for everyone!
+frame: null,
image,
copyBuffer,
resolution,
updateRequested: null,
});
+// this is worded rather awkwardly, because sometimes the frameLifecycle object returned by renderFn() represents
+// a frame that is already finished!
+// this is a good thing for performance, but potentially confusing - so we do our book-keeping before we actually start rendering:
+const aboutToStart = this.clients.get(client); // this is the record we just put into the clients map - TS just wants to be sure it really exists:
+if (aboutToStart) {
+aboutToStart.frame = renderFn(image, this.cache, hijack)
Author comment: the fix is to start the frame AFTER writing down important things like its private image buffer, size, etc.

+}
}
}
}
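The ordering described above - book-keeping first, then start the frame - can be sketched independently. This is a hypothetical sketch; startFrameFor, ClientRecord, and the bare Map are illustrative stand-ins for RenderServer's internals, not its real API.

```typescript
// Hypothetical sketch (assumed names): register the client record *before*
// starting the frame, because renderFn may complete synchronously and its
// event handlers may look the client up immediately.
type FrameHandle = { cancelFrame: () => void };
type ClientRecord = { frame: FrameHandle | null };

const clients = new Map<object, ClientRecord>();

function startFrameFor(client: object, renderFn: () => FrameHandle): void {
    // 1) book-keeping first: the record exists before any frame event can fire
    clients.set(client, { frame: null });
    // 2) only then start rendering; even a synchronously-finished frame
    //    finds the client record already in place
    const record = clients.get(client);
    if (record) {
        record.frame = renderFn();
    }
}
```

With the old ordering (set the record after calling renderFn), a handler firing during renderFn would find no record for its client - the source of the flickery frames this PR fixes.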
1 change: 1 addition & 0 deletions packages/scatterbrain/src/dataset-cache.ts
@@ -225,6 +225,7 @@ export class AsyncDataCache<SemanticKey extends RecordKey, CacheKey extends Reco
workingSet: Record<SemanticKey, () => Promise<D>>,
use: (items: Record<SemanticKey, D>) => void,
toCacheKey: (semanticKey: SemanticKey) => CacheKey,
+// TODO: consider removing taskFinished - it would be more simple to let the caller handle this in their use() function
Author comment: to do later!

taskFinished?: () => void
): cancelFn | undefined {
const keys: SemanticKey[] = Object.keys(workingSet) as SemanticKey[];
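The TODO above suggests dropping the taskFinished callback in favor of letting the caller handle completion inside use(). A hypothetical sketch of that simplification (cacheAndUse is an invented stand-in, not AsyncDataCache's real method):

```typescript
// Hypothetical sketch (invented name): once all fetches in the working set
// settle, the caller's use() runs exactly once - so it can do its own
// completion book-keeping, making a separate taskFinished callback redundant.
function cacheAndUse<D>(
    workingSet: Record<string, () => Promise<D>>,
    use: (items: Record<string, D>) => void,
): Promise<void> {
    const keys = Object.keys(workingSet);
    return Promise.all(keys.map((k) => workingSet[k]())).then((values) => {
        const items: Record<string, D> = {};
        keys.forEach((k, i) => {
            items[k] = values[i];
        });
        use(items); // completion logic lives here, in caller-supplied code
    });
}
```

Returning the promise also gives callers something to await, rather than threading an extra callback through the cache layer.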