Counting semaphore variant of lock API #485
One can provide such a lock using the existing lock routines and AMOs, roughly as follows:

```c
static long lock = 0;
static long active_counter = 0;
const int root_pe = shmem_n_pes() / 2;

// Acquire the counting semaphore
// -- (Wait to) acquire the lock
shmem_set_lock(&lock);
// -- Bump the active counter
long num_active = shmem_atomic_fetch_inc(&active_counter, root_pe);
// -- If (one) too many are active, wait-loop
unsigned int dt_us = 10;
while (num_active >= max_active) {
    usleep(dt_us);
    dt_us = (dt_us > 25000u ? 50000u : (2 * dt_us));
    num_active = shmem_atomic_fetch(&active_counter, root_pe);
}
// -- Release the lock
shmem_clear_lock(&lock);

do {
    something_useful();
} while (!done);

// Release the counting semaphore
shmem_atomic_add(&active_counter, -1, root_pe);
```

There is some potential for optimization here, but hopefully this gives a good sketch of how this can be done. The …
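For reference, a minimal way to package the sketch above into reusable helpers might look like the following. The helper names, the `max_active` parameter, and the choice of which PE homes the counter are assumptions for illustration only, not part of any proposal, and the wait policy is kept exactly as in the sketch:

```c
#include <shmem.h>
#include <unistd.h>

static long sem_lock = 0;    /* admission is serialized through this lock */
static long sem_active = 0;  /* count of PEs currently admitted           */

/* Wait until fewer than max_active PEs are admitted, then enter. */
static void counting_sem_acquire(long max_active, int counter_pe)
{
    shmem_set_lock(&sem_lock);
    long n = shmem_atomic_fetch_inc(&sem_active, counter_pe);
    unsigned int dt_us = 10;
    while (n >= max_active) {
        usleep(dt_us);                                   /* back off      */
        dt_us = (dt_us > 25000u ? 50000u : 2 * dt_us);   /* cap the delay */
        n = shmem_atomic_fetch(&sem_active, counter_pe);
    }
    shmem_clear_lock(&sem_lock);
}

/* Give the slot back, letting a waiting PE proceed. */
static void counting_sem_release(int counter_pe)
{
    shmem_atomic_add(&sem_active, -1, counter_pe);
}
```

A PE would then call `counting_sem_acquire(max_active, root_pe)` before its work and `counting_sem_release(root_pe)` afterward.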
Would the lock type of the proposed API also be long or something else? IIUC, the above implementation would require more space than we have in a single long.
On Feb 13, 2022, at 11:46 AM, James Dinan wrote:
Would the lock type of the proposed API also be long or something else? IIUC, the above implementation would require more space than we have in a single long.
Could we do this by using a long* lock variable as a hash key to a private (bigger) structure?
Tony
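As a rough illustration of the hash-key idea (not a worked-out proposal), the library could keep a private, per-PE table keyed by the lock's symmetric address and hang whatever extra state it needs off that entry. The fixed-size, linear-probing table and the field names below are assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* Implementation-private state, looked up from the user's long* lock. */
typedef struct {
    long *key;         /* the user-visible long lock, used only as a key */
    long  max_active;  /* counting-semaphore bound                       */
    long  waiters;     /* room for queueing state, counters, etc.        */
} lock_state_t;

#define LOCK_TABLE_SIZE 64
static lock_state_t lock_table[LOCK_TABLE_SIZE];

/* Find (or claim) the entry for this lock via open addressing. */
static lock_state_t *lock_state_lookup(long *lock)
{
    size_t h = ((size_t)(uintptr_t)lock >> 3) % LOCK_TABLE_SIZE;
    for (size_t i = 0; i < LOCK_TABLE_SIZE; i++) {
        size_t slot = (h + i) % LOCK_TABLE_SIZE;
        if (lock_table[slot].key == lock || lock_table[slot].key == NULL) {
            lock_table[slot].key = lock;
            return &lock_table[slot];
        }
    }
    return NULL;  /* table full; a real implementation would resize */
}
```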
I think we would need a point where we could symmetrically allocate memory for the lock. Locks are global objects that aren't tied to a team, so this could be tricky. We do have extra space in the MCS lock algorithm that we used in SOS. We assume that … However, if we're going to introduce a new API, I would really prefer to fix this design issue with the legacy lock API.
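To make the "extra space" point concrete, the sketch below shows one way a 64-bit long lock word can be partitioned by an MCS-style queue lock so that some bits remain unused. This is purely illustrative and is not the actual layout SOS uses:

```c
#include <stdint.h>

/* Illustrative only: an MCS-style lock typically needs the id of the PE
 * at the tail of the waiter queue plus a small hand-off flag, which can
 * leave bits of the 64-bit long unused (and potentially available for a
 * semaphore count). */
typedef union {
    long raw;                 /* the user-visible long lock word      */
    struct {
        uint32_t last_pe;     /* tail of the waiter queue (PE number) */
        uint16_t signal;      /* hand-off flag for the next waiter    */
        uint16_t spare;       /* currently unused bits                */
    } fields;
} lock_word_t;

_Static_assert(sizeof(long) == sizeof(uint64_t),
               "sketch assumes a 64-bit long");
```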
Do you mean you'd rather the lock routines use an opaque type?
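For clarity, an opaque lock type would mean something along these lines; the names below are hypothetical and only meant to show the shape of such an interface:

```c
/* Hypothetical: the lock's representation is private to the library,
 * so it can be as large as the algorithm needs. */
typedef struct shmemx_lock shmemx_lock_t;

void shmemx_lock_init(shmemx_lock_t **lock);
void shmemx_lock_acquire(shmemx_lock_t *lock);
void shmemx_lock_release(shmemx_lock_t *lock);
void shmemx_lock_destroy(shmemx_lock_t *lock);
```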
Some working group comments:
The current locking API provides a fair-queued lock across the PEs, allowing only one PE to hold the lock at a time. There is user interest in a variant that allows up to N PEs to hold the "lock" at a time, as with a counting semaphore. Similar fair-queuing, FIFO behavior is strongly desirable.
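A possible shape for such an API, purely as a strawman (the `shmemx_` names, argument types, and semantics below are assumptions, not an agreed proposal):

```c
#include <shmem.h>

/* Set the number of PEs that may hold the semaphore concurrently. */
void shmemx_sem_init(long *sem, long max_holders);

/* Block, in FIFO order across PEs, until this PE is admitted. */
void shmemx_sem_acquire(long *sem);

/* Release this PE's slot, admitting the next queued PE, if any. */
void shmemx_sem_release(long *sem);
```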
One significant use-case is parallel file I/O. With large PE counts, user applications can often see better performance for parallel writers if only some of the PEs write at a time. Such an API would provide a fair mechanism for PEs to enqueue as writers (as with shmem_*_lock), while also supporting multiple, concurrent writers (a new capability).
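As a hedged sketch of that use case, assuming the lock-plus-counter scheme from the discussion above (the file naming, the MAX_WRITERS value, and the choice of PE 0 to home the counter are all illustrative):

```c
#include <shmem.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_WRITERS 8

static long io_lock = 0;
static long io_active = 0;

int main(void)
{
    shmem_init();
    int me = shmem_my_pe();

    /* Admit at most MAX_WRITERS concurrent writers; counter on PE 0. */
    shmem_set_lock(&io_lock);
    long n = shmem_atomic_fetch_inc(&io_active, 0);
    while (n >= MAX_WRITERS) {
        usleep(100);
        n = shmem_atomic_fetch(&io_active, 0);
    }
    shmem_clear_lock(&io_lock);

    /* Each PE writes its own chunk while admitted. */
    char fname[64];
    snprintf(fname, sizeof fname, "out.%d", me);
    FILE *f = fopen(fname, "w");
    if (f) {
        fprintf(f, "data from PE %d\n", me);
        fclose(f);
    }

    /* Release the writer slot. */
    shmem_atomic_add(&io_active, -1, 0);

    shmem_finalize();
    return 0;
}
```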