
Make formula internals thread safe. #332

Merged: 3 commits merged into master from laimonas/thread-safe-internals on Jan 25, 2024

Conversation

@Laimiux Laimiux (Collaborator) commented Jan 22, 2024

What

Currently, Formula is single-threaded and usually bound to the main UI thread. This requires processing all events and running evaluations on that thread. This can be a performance limitation:

  • Evaluations can get pretty expensive for nested and complicated features
  • There can be delays jumping from network/io threads to the main thread (during initial screen loads when there is a lot of contention)

While only one formula evaluation can run at a time, there is no reason it has to be bound to a single specific thread. This will allow network and other async events to be processed on an IO thread instead of having to jump to the main thread.

Implementation details

To make Formula unbound to any thread, we need to make the internals thread safe. Since we can only handle one evaluation at a time, we implement this via a non-blocking FIFO queue for updates. Simplified, it works like this: when an event arrives, we check whether another thread is already processing the queue; if so, we hand this update off to that thread, and if not, the current thread starts processing the queue.
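For illustration only, a minimal sketch of how such a non-blocking hand-off queue could work. The names UpdateQueueSketch, postUpdate, and drainQueue are hypothetical and do not reflect the actual SynchronizedUpdateQueue introduced in this PR:

import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical sketch; the real SynchronizedUpdateQueue in this PR may differ.
class UpdateQueueSketch {
    private val queue = ConcurrentLinkedQueue<() -> Unit>()
    private val isProcessing = AtomicBoolean(false)

    fun postUpdate(update: () -> Unit) {
        queue.add(update)
        drainQueue()
    }

    private fun drainQueue() {
        // Try to become the processing thread. If another thread already holds
        // the flag, our update is handed off to it: that thread will see the new
        // entry when it re-checks the queue before finishing.
        while (isProcessing.compareAndSet(false, true)) {
            try {
                while (true) {
                    val next = queue.poll() ?: break
                    next()
                }
            } finally {
                isProcessing.set(false)
            }
            // An update may have arrived after we drained but before we released
            // the flag; if the queue is empty we are done, otherwise try again.
            if (queue.isEmpty()) return
        }
    }
}

The key property is that only one thread drains the queue at a time (preserving FIFO order), and no thread ever blocks waiting for a lock.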

@Laimiux Laimiux force-pushed the laimonas/thread-safe-internals branch from b3abe8a to f1abd79 on January 22, 2024 15:26
Comment on lines 26 to 27
* - If queue is running by a different thread, add to the queue and see if we need to
* take over the processing.
@tripathiamritansh tripathiamritansh Jan 22, 2024

Something to consider is the cost of context switching if we rapidly switch threads for small evaluation tasks. It might make sense to batch things if that turns out to be the case.

Collaborator Author

Could you explain more about what you mean by context switching?


Thread context switching is the process of saving the state of a currently running thread and then restoring the state of another thread, allowing it to resume execution. This essentially allows multiple threads to share a single CPU, giving the illusion that they are running concurrently. The cost is fairly low for IO threads but high for computation threads.

@Laimiux Laimiux (Collaborator Author) Jan 22, 2024

We are trying to avoid thread-switching and enable running the formula on the thread that the event is produced on. Currently, all network calls switch to the main thread and then process the event after the switch. In the future, we will not switch to the main thread, but process the event first and only switch to the main thread when updating the UI.
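For illustration, a minimal sketch of that future flow; the types Event and Output and the render callback are hypothetical placeholders, not Formula APIs:

import android.os.Handler
import android.os.Looper

// Hypothetical types used only for this example.
data class Event(val payload: String)
data class Output(val text: String)

private val mainHandler = Handler(Looper.getMainLooper())

// Process the event on whatever thread delivered it (e.g. a network thread),
// and hop to the main thread only for the UI update.
fun onAsyncEvent(event: Event, render: (Output) -> Unit) {
    val output = Output(text = event.payload.uppercase()) // evaluation stays on the calling thread
    mainHandler.post { render(output) }                    // only rendering jumps to the main thread
}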

Collaborator Author

Also, we are using a non-blocking (atomic) approach to take over processing, so the penalty should be relatively small.


ok that makes sense

@Laimiux Laimiux force-pushed the laimonas/thread-safe-internals branch from f1abd79 to 1e93038 on January 22, 2024 19:33
@carrotkite commented Jan 22, 2024

JaCoCo Code Coverage 78.91% ✅

Class Covered Meta
com/instacart/formula/coroutines/FlowRuntime 40% 0%
com/instacart/formula/internal/ActionKey 100% 0%
com/instacart/formula/internal/SynchronizedUpdateQueue 92% 0%
com/instacart/formula/internal/FormulaManagerImpl 87% 0%
com/instacart/formula/internal/ChildrenManager 75% 0%
com/instacart/formula/internal/Frame 100% 0%
com/instacart/formula/internal/ListenerImpl 75% 0%
com/instacart/formula/rxjava3/RxJavaRuntime 80% 0%
com/instacart/formula/FormulaRuntime 76% 0%

Generated by 🚫 Danger

@Laimiux Laimiux force-pushed the laimonas/thread-safe-internals branch 3 times, most recently from 51b8366 to 87ba18b on January 23, 2024 16:50
@Laimiux Laimiux force-pushed the laimonas/thread-safe-internals branch from 87ba18b to 2b2a24d on January 23, 2024 17:57
@Laimiux Laimiux changed the title from "Make formula internals thread safe (WIP)." to "Make formula internals thread safe." on Jan 23, 2024
@jasonostrander jasonostrander left a comment

Looks good to me!

import com.instacart.formula.IFormula

/**
* A poor man's thread checker.
Collaborator

😆

}
}

fun input(newKey: String) = apply {
Collaborator

This comment is not related to this line. So we're handing off Formula update events to the currently processing thread, which might be a network/IO thread. I'm not convinced this is the most effective use of our resources. A background network thread is designed to handle network-call boilerplate; it's not conceptually designed to process app UI state-management updates. I think the furthest network threads usually go is parsing the response into more digestible data.

I'm curious if we should explore the Formula runtime having a single background worker thread that runs transitions and side effects. If a side effect needs to run on the main thread, there would be an explicit API to do so, such as:

transition(state = state.copy(...)) {
    uiThread {
        ...    
    }
    backgroundThread {
        ...    
    }
}

@ntoskrnl ntoskrnl Jan 24, 2024

Good point about the IO thread. So, we basically have 2 options:

  • don't perform any enforcement and let the user switch threads manually in the RxAction.fromObservable {} block or in the repository via observeOn(computation()) (see the RxJava sketch after this comment), or
  • let Formula enforce the thread. E.g. we could introduce a background thread (or a pool of threads), and if an event is received neither on the UI thread nor on one of our background threads, we execute it on a background thread from the pool.

I would prefer the second option.

The API to request which thread a side effect runs on is useful. I only see one downside: it could add mental overhead for the formula user and the code reviewer. Engineers will need to spend additional time deciding/verifying what should be on uiThread and what should be on backgroundThread. It can be useful though.

Anyway, I think it's definitely worth exploring. We should probably stick to option 1 and collect more usage data after this new multi-thread formula gets adopted.
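For reference, option 1 could look roughly like this with RxJava. The networkEvents source is a made-up placeholder used purely for illustration; only observeOn and Schedulers are real RxJava APIs:

import io.reactivex.rxjava3.core.Observable
import io.reactivex.rxjava3.schedulers.Schedulers

// Hypothetical event source used purely for illustration.
fun networkEvents(): Observable<String> =
    Observable.just("payload").subscribeOn(Schedulers.io())

// Option 1: Formula enforces nothing; the caller decides where processing happens.
val events: Observable<Int> = networkEvents()
    .observeOn(Schedulers.computation()) // manual hop off the IO thread
    .map { payload -> payload.length }   // mapping now runs on a computation thread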

Collaborator Author

I might be wrong, but I’m not sure the statement that “network/IO thread is not conceptually designed to process app UI state management updates” is correct. The thread classification, such as I/O or computation, only matters in terms of thread pools (and how many threads are allocated). I’m not sure there is a problem with a network thread executing a bit more code (it will exit shortly after anyway). Not having to jump threads to process the update has two benefits:

  • We process the formula update much faster (jumping to the main thread can have a huge 100+ms delay; I’ve seen 300+ms during app start)
  • We potentially defer some processing to a background thread, leaving the main thread more responsive

I'm curious if we should explore Formula runtime having 1 working background thread

The problem with using a single background worker is that each user event would have to jump from the main thread to the background thread and then back to the main thread. The delay this adds can make things feel less responsive. This is why I think the better option is not to confine the formula to a single thread, which would require constant jumping back and forth; the current approach tries to execute things on the thread that already has the event.

If a side effect needs to run on a main thread, there would be an explicit API to do so, such as:

In terms of extra threading APIs for side-effect execution, it might be convenient, but I'm not sure it's the right approach. Currently, Formula doesn't manage any threads and doesn't know about the main thread, and I'm not sure introducing that for this use case is worth it. Regardless of whether we introduce a threading API within side effects, the end user will still have to think carefully about shared mutable state and how to synchronize reads/writes. I suspect the logic to ensure thread safety should live within the classes that the side effects are called on, rather than in Formula.
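As a generic illustration of that last point (a hypothetical class, not part of Formula's API), the object a side effect calls into can guard its own mutable state:

// Hypothetical example: the class owns its synchronization, regardless of
// which thread the formula side effect happens to run on.
class AnalyticsBuffer {
    private val lock = Any()
    private val events = mutableListOf<String>()

    fun track(name: String) = synchronized(lock) { events.add(name) }

    fun flush(): List<String> = synchronized(lock) {
        val copy = events.toList()
        events.clear()
        copy
    }
}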

@@ -14,7 +14,7 @@ import com.instacart.formula.Evaluation
*/
internal class Frame<Input, State, Output>(
val input: Input,
val state: State,
Collaborator Author

Unrelated, making private since it's not used

@@ -2,5 +2,5 @@ package com.instacart.formula.internal

internal data class ActionKey(
val id: Long,
val delegateKey: Any?,
Collaborator Author

Unrelated, making private since it's not used

@Laimiux Laimiux force-pushed the laimonas/thread-safe-internals branch from 7b3ef28 to 9269064 on January 25, 2024 15:35
@Laimiux Laimiux merged commit 406d7dc into master Jan 25, 2024
3 checks passed
@Laimiux Laimiux deleted the laimonas/thread-safe-internals branch January 25, 2024 15:58