
Added MemoryLimiter and other classes etc. #652

Merged

merged 1 commit into roc-streaming:develop on Feb 13, 2024

Conversation

nolan-veed
Contributor

@nolan-veed nolan-veed commented Dec 3, 2023

Why

For #610

What

  • Added MemoryLimiter, LimitedArena, LimitedPool classes.
  • Updated and added new tests.

Testing

New and updated tests are included.


github-actions bot commented Dec 3, 2023

🤖 Upon creation, pull request description does not have a link to an issue. If there is a related issue, please add it to the description using any of the supported formats.

@github-actions github-actions bot added the "work in progress" label on Dec 3, 2023
@nolan-veed
Contributor Author

@gavv Please give a quick early review on MemoryLimiter. Does the tracking need to be atomic?

@nolan-veed
Contributor Author

Does the tracking need to be atomic?

I think I've answered my own question. Yes, I suppose, as it's shared across various arenas and pools.

@gavv
Member

gavv commented Dec 3, 2023

Yep, it should be atomic, and ideally lock free. I guess it would be enough to use core::Atomic or core::AtomicOps.

We likely need to get rid of would_allow because its usage will always be a race.

Otherwise, the class looks fine!
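
For illustration, here is a minimal sketch of the lock-free tracking idea, written with std::atomic; in the codebase itself core::Atomic or core::AtomicOps would be used, and all names here are placeholders:

    #include <atomic>
    #include <cstddef>

    // Illustrative sketch only: a byte counter shared by arenas/pools,
    // updated with a CAS loop so that concurrent acquire() calls cannot
    // push the total past the limit.
    class MemoryLimiterSketch {
    public:
        explicit MemoryLimiterSketch(size_t max_bytes)
            : max_bytes_(max_bytes)
            , acquired_(0) {
        }

        bool acquire(size_t size) {
            size_t cur = acquired_.load();
            for (;;) {
                if (cur + size > max_bytes_) {
                    return false; // would exceed the limit
                }
                // Retry if another thread changed the counter in between.
                if (acquired_.compare_exchange_weak(cur, cur + size)) {
                    return true;
                }
            }
        }

        void release(size_t size) {
            acquired_.fetch_sub(size);
        }

    private:
        const size_t max_bytes_;
        std::atomic<size_t> acquired_;
    };

A separate would_allow() check on top of such a counter is indeed racy: another thread can acquire between the check and the allocation, which is why folding the check into acquire() itself, as sketched above, is the safer shape.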

@gavv
Member

gavv commented Dec 3, 2023

Oh, also %zu is not portable, please use %lu with a cast to ulong.
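
A small illustrative example of the portable form (the function name and message are made up for the example):

    #include <cstddef>
    #include <cstdio>

    static void log_allocated(size_t size) {
        // "%zu" is not available on all supported compilers/libcs,
        // so cast to unsigned long and print with "%lu" instead.
        std::printf("allocated %lu bytes\n", (unsigned long)size);
    }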

@gavv gavv added the "contribution" label on Dec 5, 2023
@nolan-veed
Contributor Author

Oh, also %zu is not portable, please use %lu with a cast to ulong.

OK. I've used it elsewhere in the code as well.

@nolan-veed
Contributor Author

Yep, it should be atomic, and ideally lock free. I guess it would be enough to use core::Atomic or core::AtomicOps.

See my approach now.

@nolan-veed nolan-veed marked this pull request as ready for review December 20, 2023 14:13
@github-actions github-actions bot added the "ready for review" label and removed the "work in progress" label on Dec 20, 2023
@nolan-veed
Contributor Author

OK. I should have kept this in draft.

I've just added LimitedArena and LimitedPool, and could use your thoughts on them and a few other things. Tests are not added yet.

@gavv
Member

gavv commented Dec 21, 2023

Some notes:

  1. Ideally we should also acquire the size of the overhead for every allocated chunk/slot. Arena and pool add headers and guards, and for many allocations this unaccounted overhead will accumulate and may become significant. I guess we need to add methods to IArena and IPool to report the overhead, and to take it into account in the Limited versions.

  2. I think MemoryLimiter::release() should never return false (and the same for deallocate). A failed invariant indicates a critical bug; we can panic, but there is no need for error checking, because there is no meaningful way to handle this error anyway except to panic.

  3. In CAS loops, please add core::cpu_relax() call.

  4. I think we should first deallocate and only then release memory from limiter, to avoid over-allocation.

  5. I think we can allow max_bytes = 0 which would mean "no limit". This way we can create all memory limiters unconditionally (even when there is no actual limit), which I think will simplify code.

  6. Please log errors when MemoryLimiter::acquire() fails, and include the limit and requested size into the log message. Also, similar to SlabPool, it would be nice to add a name to MemoryLimiter, and include it into the log message too.

  7. Please inherit classes from NonCopyable where applicable.

  8. Probably acquiring/releasing zero bytes should be a panic?

  9. I think the MemoryLimiter destructor should panic if there is unreleased memory. Though, if that would complicate some usages, it's not necessary. (A sketch covering notes 3, 5, 6, and 9 follows after this list.)
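
As a rough illustration of notes 3, 5, 6, and 9 together, here is a standalone sketch; it uses std::atomic and stderr instead of roc's core::AtomicOps, core::cpu_relax(), roc_log(), and roc_panic(), and every name in it is a placeholder:

    #include <atomic>
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    class MemoryLimiterSketch {
    public:
        // max_bytes == 0 means "no limit" (note 5), so a limiter can always be created.
        MemoryLimiterSketch(const char* name, size_t max_bytes)
            : name_(name)
            , max_bytes_(max_bytes)
            , acquired_(0) {
        }

        ~MemoryLimiterSketch() {
            // Unreleased memory at destruction is a critical bug (note 9);
            // the real class would call roc_panic() here.
            if (acquired_.load() != 0) {
                std::fprintf(stderr, "memory limiter '%s': %lu bytes not released\n", name_,
                             (unsigned long)acquired_.load());
                std::abort();
            }
        }

        bool acquire(size_t size) {
            size_t cur = acquired_.load();
            for (;;) {
                if (max_bytes_ != 0 && cur + size > max_bytes_) {
                    // Log the name, limit, and requested size on failure (note 6);
                    // the real class would use roc_log().
                    std::fprintf(stderr,
                                 "memory limiter '%s': limit exceeded: limit=%lu requested=%lu\n",
                                 name_, (unsigned long)max_bytes_, (unsigned long)size);
                    return false;
                }
                if (acquired_.compare_exchange_weak(cur, cur + size)) {
                    return true;
                }
                // Note 3: the real CAS loop would call core::cpu_relax() here before retrying.
            }
        }

        void release(size_t size) {
            acquired_.fetch_sub(size);
        }

    private:
        const char* name_;
        const size_t max_bytes_;
        std::atomic<size_t> acquired_;
    };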

@gavv gavv added the "work in progress" label and removed the "ready for review" label on Dec 21, 2023
@nolan-veed
Contributor Author

Quick question about how to approach LimitedPool.

  • SlabPool has an EmbeddedCapacity - this means that memory will already be "booked" when it's created.
  • IPool has reserve() and allocate(). So, how should the limited version work? I guess we check with MemoryLimiter only on reserve()? So, allocate() and deallocate() just don't bother with the MemoryLimiter?

@gavv
Member

gavv commented Dec 23, 2023

  1. The goal of per-pool limits is to limit the number of packets given to one socket/session, so that we can be sure that if one connection floods, we will not over-allocate and other connections won't be affected.

    Hence, I think for per-session LimitedPool it doesn't matter how many packets are booked in embedded array or in slab, since they remain available for other sessions. It only matters how many packets were allocated for this specific session.

    So I think LimitedPool should only acquire and release in allocate/deallocate, and doesn't need to bother with reserve (a sketch of this shape follows below).

  2. To control the total amount of memory, we will have a root LimitedArena with a global limit (max_memory). In the end, all allocations (including objects allocated by pools, the pools themselves, and their embedded capacity) will go through this limited arena.
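
A minimal sketch of the per-session shape from point 1 (the interfaces here are simplified stand-ins for roc's IPool and MemoryLimiter; real signatures may differ):

    #include <cstddef>

    // Simplified stand-ins for the real interfaces.
    struct PoolSketch {
        virtual ~PoolSketch() { }
        virtual size_t allocation_size() = 0; // slot size including overhead
        virtual void* allocate() = 0;
        virtual void deallocate(void* ptr) = 0;
    };

    struct LimiterSketch {
        virtual ~LimiterSketch() { }
        virtual bool acquire(size_t size) = 0;
        virtual void release(size_t size) = 0;
    };

    // Accounts only for what this session actually allocates; reserve()
    // and embedded capacity stay untracked, per the discussion above.
    class LimitedPoolSketch : public PoolSketch {
    public:
        LimitedPoolSketch(PoolSketch& pool, LimiterSketch& limiter)
            : pool_(pool)
            , limiter_(limiter) {
        }

        virtual size_t allocation_size() {
            return pool_.allocation_size();
        }

        virtual void* allocate() {
            if (!limiter_.acquire(pool_.allocation_size())) {
                return NULL;
            }
            void* ptr = pool_.allocate();
            if (!ptr) {
                limiter_.release(pool_.allocation_size());
            }
            return ptr;
        }

        virtual void deallocate(void* ptr) {
            // Deallocate first, then release from the limiter, so the limiter
            // never under-counts memory that is still held.
            pool_.deallocate(ptr);
            limiter_.release(pool_.allocation_size());
        }

    private:
        PoolSketch& pool_;
        LimiterSketch& limiter_;
    };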

@gavv
Member

gavv commented Dec 26, 2023

I see some commits, please ping me when it's ready for review :)

@nolan-veed
Contributor Author

I see some commits, please ping me when it's ready for review :)

Yeah, sorry. Small tidy-ups and a few more tests remain.

@nolan-veed nolan-veed requested a review from gavv December 26, 2023 11:09
@github-actions github-actions bot added the "ready for review" label and removed the "work in progress" label on Dec 26, 2023
@gavv
Member

gavv commented Dec 28, 2023

I've looked at the implementation and have a small suggestion about the IPool and IArena interfaces.

size_of + overhead / object_size + overhead looks a bit too low-level, because the caller needs to know that there is a "payload" size and an "overhead" size. The caller also has to assume that the overhead is constant (the same for all allocations), which, by the way, may not be true when the arena applies alignment?

Instead of that, I think it would be a bit simpler if the caller operated with just one "allocation size", which includes payload + overhead.

For IPool, it can be one method, e.g.:

    virtual size_t allocation_size() = 0;

We can remove object_size() since it's not really used and keep only allocation_size() which takes overhead into account.

For IArena, we'd still need 2 methods, but they will both include payload + overhead:

    //! Computes how many bytes will actually be allocated if allocate() is called with the given size.
    //! Covers all internal overhead, if any.
    virtual size_t compute_allocated_size(size_t size) = 0;

    //! Returns how many bytes were allocated for the given pointer returned by allocate().
    //! Covers all internal overhead, if any.
    //! Returns the same value as computed by compute_allocated_size(size).
    virtual size_t allocated_size(void*) = 0;

Feel free to change naming and comments of course.

What do you think?
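
For example, a LimitedArena built on this interface could look roughly like the following; only compute_allocated_size() and allocated_size() come from the proposal above, the other types and names are simplified stand-ins:

    #include <cstddef>

    struct ArenaSketch {
        virtual ~ArenaSketch() { }
        virtual void* allocate(size_t size) = 0;
        virtual void deallocate(void* ptr) = 0;
        virtual size_t compute_allocated_size(size_t size) = 0;
        virtual size_t allocated_size(void* ptr) = 0;
    };

    struct LimiterSketch {
        virtual ~LimiterSketch() { }
        virtual bool acquire(size_t size) = 0;
        virtual void release(size_t size) = 0;
    };

    class LimitedArenaSketch {
    public:
        LimitedArenaSketch(ArenaSketch& arena, LimiterSketch& limiter)
            : arena_(arena)
            , limiter_(limiter) {
        }

        void* allocate(size_t size) {
            // Account for payload plus internal overhead, not just `size`.
            const size_t full_size = arena_.compute_allocated_size(size);
            if (!limiter_.acquire(full_size)) {
                return NULL;
            }
            void* ptr = arena_.allocate(size);
            if (!ptr) {
                limiter_.release(full_size);
            }
            return ptr;
        }

        void deallocate(void* ptr) {
            // allocated_size() reports the same value compute_allocated_size() did,
            // so acquire and release stay balanced.
            const size_t full_size = arena_.allocated_size(ptr);
            arena_.deallocate(ptr);
            limiter_.release(full_size);
        }

    private:
        ArenaSketch& arena_;
        LimiterSketch& limiter_;
    };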

@gavv gavv added the "needs revision" label and removed the "ready for review" label on Dec 28, 2023
@gavv
Member

gavv commented Dec 28, 2023

Thanks for the update, I've posted a few more comments.

@nolan-veed
Contributor Author

I've looked at the implementation and have small suggestion about IPool and IArena interface.
...
What do you think?

It does make sense. I did think about something similar initially; I should have suggested it.
I'll go with those names, they're clear.

@@ -54,8 +54,7 @@ class IPool {
//! Placement new for core::IPool.
//! @note
//! nothrow forces compiler to check for NULL return value before calling ctor.
inline void* operator new(size_t size, roc::core::IPool& pool) throw() {
roc_panic_if_not(size <= pool.object_size());
Contributor Author

@gavv
Note: I had to remove this, the only good use of object_size(), as well.

Member

I think we can add a size parameter to allocate(). The implementation can then check that the requested size fits within the slot size. This way we can be sure that any allocation from the pool is safe (no matter whether via new or directly via allocate).
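
A tiny sketch of that idea (the pool internals are omitted, the names are illustrative, and the real code would use roc_panic_if_not() rather than assert):

    #include <cassert>
    #include <cstddef>
    #include <new>

    class PoolSketch {
    public:
        explicit PoolSketch(size_t slot_size)
            : slot_size_(slot_size) {
        }

        // allocate() receives the requested size, so both placement new and
        // direct calls get the same "fits in one slot" check.
        void* allocate(size_t size) {
            assert(size <= slot_size_);
            return ::operator new(slot_size_); // placeholder for taking a slot from the pool
        }

        void deallocate(void* ptr) {
            ::operator delete(ptr); // placeholder for returning a slot to the pool
        }

    private:
        const size_t slot_size_;
    };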

const bool is_owner = chunk->owner == this;

if (!is_owner) {
if (AtomicOps::load_seq_cst(flags_) & HeapArenaFlag_EnableGuards) {
Contributor Author

I had to remove num_guard_failures_++; from here, because this method is const. Would you rather have the member variable as mutable?

Member

It makes sense to make it mutable. I'd say it's not part of the logical object state.
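
A minimal illustration of the mutable-counter idea (all names are placeholders, and a real counter touched from multiple threads would also need to be atomic):

    #include <cstddef>

    class GuardCheckerSketch {
    public:
        GuardCheckerSketch()
            : num_guard_failures_(0) {
        }

        // The method stays const because the counter is bookkeeping,
        // not part of the logical object state.
        bool check_guard(bool guard_intact) const {
            if (!guard_intact) {
                num_guard_failures_++;
                return false;
            }
            return true;
        }

        size_t num_guard_failures() const {
            return num_guard_failures_;
        }

    private:
        mutable size_t num_guard_failures_;
    };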

@nolan-veed nolan-veed requested a review from gavv December 30, 2023 17:23
@github-actions github-actions bot added the "ready for review" label and removed the "needs revision" label on Dec 30, 2023
@nolan-veed nolan-veed force-pushed the 610-memory-limiter-etc branch from ffb4683 to 2533ecd Compare December 30, 2023 17:29
Member

@gavv gavv left a comment

Thanks for the update, a few more issues remain.

Outdated review threads (resolved) on:

  • src/internal_modules/roc_core/heap_arena.cpp
  • src/internal_modules/roc_core/limited_arena.cpp
  • src/internal_modules/roc_core/limited_pool.cpp
  • src/internal_modules/roc_core/memory_limiter.cpp (2 threads)
  • src/internal_modules/roc_core/slab_pool.h (2 threads)
@gavv gavv added the "needs revision" label and removed the "ready for review" label on Jan 2, 2024
@nolan-veed nolan-veed force-pushed the 610-memory-limiter-etc branch from b2c9872 to 516ec84 Compare February 10, 2024 17:34
@nolan-veed nolan-veed requested a review from gavv February 10, 2024 23:00
@github-actions github-actions bot added the "ready for review" label and removed the "needs revision" label on Feb 10, 2024
@gavv gavv force-pushed the 610-memory-limiter-etc branch from da1a8c4 to 11b3710 Compare February 13, 2024 09:02
@gavv gavv merged commit 68a5da3 into roc-streaming:develop Feb 13, 2024
1 check passed
@gavv
Member

gavv commented Feb 13, 2024

Thank you! Looks good!

I've created a PR with a small refactoring of guard flags in heap arena and slab pool (not directly related to this PR). You can take a look if you wish: #700

@github-actions github-actions bot removed the "ready for review" label on Feb 13, 2024