Add alloc_no_gc #1218

Open: wants to merge 9 commits into master
Changes from 6 commits
102 changes: 83 additions & 19 deletions src/memory_manager.rs
@@ -18,7 +18,7 @@ use crate::plan::{Mutator, MutatorContext};
use crate::scheduler::WorkBucketStage;
use crate::scheduler::{GCWork, GCWorker};
use crate::util::alloc::allocators::AllocatorSelector;
- use crate::util::constants::{LOG_BYTES_IN_PAGE, MIN_OBJECT_SIZE};
+ use crate::util::constants::LOG_BYTES_IN_PAGE;
use crate::util::heap::layout::vm_layout::vm_layout;
use crate::util::opaque_pointer::*;
use crate::util::{Address, ObjectReference};
@@ -140,11 +140,28 @@ pub fn flush_mutator<VM: VMBinding>(mutator: &mut Mutator<VM>) {
mutator.flush()
}

- /// Allocate memory for an object. For performance reasons, a VM should
- /// implement the allocation fast-path on their side rather than just calling this function.
+ /// Allocate memory for an object.
///
- /// If the VM provides a non-zero `offset` parameter, then the returned address will be
- /// such that the `RETURNED_ADDRESS + offset` is aligned to the `align` parameter.
+ /// When the allocation is successful, it returns the starting address of the new object. The
+ /// memory range for the new object is `size` bytes starting from the returned address, and
+ /// `RETURNED_ADDRESS + offset` is guaranteed to be aligned to the `align` parameter. The returned
+ /// address of a successful allocation will never be zero.
+ ///
+ /// If MMTk fails to allocate memory, it will attempt a GC to free up some memory and retry the
+ /// allocation. After triggering GC, it will call [`crate::vm::Collection::block_for_gc`] to suspend
+ /// the current thread that is allocating. Callers of this function must be aware of this behavior.
+ /// For example, JIT compilers that support
+ /// precise stack scanning need to make the call site a GC-safe point by generating stack maps. See
+ /// [`alloc_no_gc`] if it is undesirable to trigger GC at this allocation site.
+ ///
+ /// If MMTk has attempted at least one GC, and still cannot free up enough memory, it will call
+ /// [`crate::vm::Collection::out_of_memory`] to inform the binding. The VM binding
+ /// can implement that method to handle the out-of-memory event in a VM-specific way, including but
+ /// not limited to throwing exceptions or errors. If [`crate::vm::Collection::out_of_memory`] returns
+ /// normally without panicking or throwing exceptions, this function will return zero.
+ ///
+ /// For performance reasons, a VM should implement the allocation fast-path on their side rather
+ /// than just calling this function.
///
/// Arguments:
/// * `mutator`: The mutator to perform this allocation request.
@@ -159,24 +176,46 @@ pub fn alloc<VM: VMBinding>(
offset: usize,
semantics: AllocationSemantics,
) -> Address {
- // MMTk has assumptions about minimal object size.
- // We need to make sure that all allocations comply with the min object size.
- // Ideally, we check the allocation size, and if it is smaller, we transparently allocate the min
- // object size (the VM does not need to know this). However, for the VM bindings we support at the moment,
- // their object sizes are all larger than MMTk's min object size, so we simply put an assertion here.
- // If you plan to use MMTk with a VM with its object size smaller than MMTk's min object size, you should
- // meet the min object size in the fastpath.
- debug_assert!(size >= MIN_OBJECT_SIZE);
- // Assert alignment
- debug_assert!(align >= VM::MIN_ALIGNMENT);
- debug_assert!(align <= VM::MAX_ALIGNMENT);
- // Assert offset
- debug_assert!(VM::USE_ALLOCATION_OFFSET || offset == 0);
+ #[cfg(debug_assertions)]
+ crate::util::alloc::allocator::assert_allocation_args::<VM>(size, align, offset);

mutator.alloc(size, align, offset, semantics)
}

- /// Invoke the allocation slow path. This is only intended for use when a binding implements the fastpath on
+ /// Allocate memory for an object.
+ ///
+ /// The semantics of this function are the same as [`alloc`], except that when MMTk fails to allocate
+ /// memory, it will simply return zero. This function is guaranteed not to trigger GC and not to
+ /// call [`crate::vm::Collection::block_for_gc`] or [`crate::vm::Collection::out_of_memory`].
+ ///
+ /// Generally [`alloc`] is preferred over this function. This function should only be used
+ /// when the binding does not want GCs to happen at certain allocation sites (for example, places
+ /// where stack maps cannot be generated), and is able to handle allocation failure if that happens.
+ ///
+ /// Arguments:
+ /// * `mutator`: The mutator to perform this allocation request.
+ /// * `size`: The number of bytes required for the object.
+ /// * `align`: Required alignment for the object.
+ /// * `offset`: Offset associated with the alignment.
+ /// * `semantics`: The allocation semantic required for the allocation.
+ pub fn alloc_no_gc<VM: VMBinding>(
+ mutator: &mut Mutator<VM>,
+ size: usize,
+ align: usize,
+ offset: usize,
+ semantics: AllocationSemantics,
+ ) -> Address {
+ #[cfg(debug_assertions)]
+ crate::util::alloc::allocator::assert_allocation_args::<VM>(size, align, offset);
+
+ mutator.alloc_no_gc(size, align, offset, semantics)
+ }

+ /// Invoke the allocation slow path of [`alloc`].
+ /// Like [`alloc`], this function may trigger GC and call [`crate::vm::Collection::block_for_gc`] or
+ /// [`crate::vm::Collection::out_of_memory`]. The caller needs to be aware of that.
+ ///
+ /// *Notes*: This is only intended for use when a binding implements the fastpath on
/// the binding side. When the binding handles fast path allocation and the fast path fails, it can use this
/// method for slow path allocation. Calling this before exhausting the fast path allocation buffer will
/// lead to poor performance.
@@ -197,6 +236,31 @@ pub fn alloc_slow<VM: VMBinding>(
mutator.alloc_slow(size, align, offset, semantics)
}

+ /// Invoke the allocation slow path of [`alloc_no_gc`].
+ ///
+ /// Like [`alloc_no_gc`], this function is guaranteed not to trigger GC and not to call
+ /// [`crate::vm::Collection::block_for_gc`] or [`crate::vm::Collection::out_of_memory`]. It returns zero on
+ /// allocation failure.
+ ///
+ /// Like [`alloc_slow`], this function is also only intended for use when a binding implements the
+ /// fastpath on the binding side.
+ ///
+ /// Arguments:
+ /// * `mutator`: The mutator to perform this allocation request.
+ /// * `size`: The number of bytes required for the object.
+ /// * `align`: Required alignment for the object.
+ /// * `offset`: Offset associated with the alignment.
+ /// * `semantics`: The allocation semantic required for the allocation.
+ pub fn alloc_slow_no_gc<VM: VMBinding>(
+ mutator: &mut Mutator<VM>,
+ size: usize,
+ align: usize,
+ offset: usize,
+ semantics: AllocationSemantics,
+ ) -> Address {
+ mutator.alloc_slow_no_gc(size, align, offset, semantics)
+ }

/// Perform post-allocation actions, usually initializing object metadata. For many allocators none are
/// required. For performance reasons, a VM should implement the post alloc fast-path on their side
/// rather than just calling this function.
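The difference between `alloc` and `alloc_no_gc` is easiest to see from the caller's side. Below is a minimal binding-side sketch, not part of this diff: `alloc_at_safe_point` is a hypothetical helper that retries at a point where the GC-triggering `alloc` is permitted.

```rust
use mmtk::memory_manager;
use mmtk::plan::Mutator;
use mmtk::util::Address;
use mmtk::vm::VMBinding;
use mmtk::AllocationSemantics;

/// Allocate at a site where the JIT emitted no stack map, so a GC must not
/// be triggered here.
fn alloc_at_gc_unsafe_site<VM: VMBinding>(mutator: &mut Mutator<VM>, size: usize) -> Address {
    let addr =
        memory_manager::alloc_no_gc(mutator, size, VM::MIN_ALIGNMENT, 0, AllocationSemantics::Default);
    if addr.is_zero() {
        // The allocation failed and no GC was triggered. The binding decides
        // how to recover; here we retry at a hypothetical safe point where
        // the GC-triggering `alloc` is allowed.
        return alloc_at_safe_point(mutator, size);
    }
    addr
}
```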
80 changes: 72 additions & 8 deletions src/plan/mutator_context.rs
@@ -113,11 +113,29 @@ impl<VM: VMBinding> MutatorContext<VM> for Mutator<VM> {
offset: usize,
allocator: AllocationSemantics,
) -> Address {
- unsafe {
+ let allocator = unsafe {
self.allocators
.get_allocator_mut(self.config.allocator_mapping[allocator])
- }
- .alloc(size, align, offset)
+ };
+ // The value should be default/unset at the beginning of an allocation request.
+ debug_assert!(!allocator.get_context().is_no_gc_on_fail());
+ allocator.alloc(size, align, offset)
}

+ fn alloc_no_gc(
+ &mut self,
+ size: usize,
+ align: usize,
+ offset: usize,
+ allocator: AllocationSemantics,
+ ) -> Address {
+ let allocator = unsafe {
+ self.allocators
+ .get_allocator_mut(self.config.allocator_mapping[allocator])
+ };
+ // The value should be default/unset at the beginning of an allocation request.
+ debug_assert!(!allocator.get_context().is_no_gc_on_fail());
+ allocator.alloc_no_gc(size, align, offset)
+ }

fn alloc_slow(
@@ -127,11 +145,29 @@ impl<VM: VMBinding> MutatorContext<VM> for Mutator<VM> {
offset: usize,
allocator: AllocationSemantics,
) -> Address {
- unsafe {
+ let allocator = unsafe {
self.allocators
.get_allocator_mut(self.config.allocator_mapping[allocator])
- }
- .alloc_slow(size, align, offset)
+ };
+ // The value should be default/unset at the beginning of an allocation request.
+ debug_assert!(!allocator.get_context().is_no_gc_on_fail());
+ allocator.alloc_slow(size, align, offset)
}

+ fn alloc_slow_no_gc(
+ &mut self,
+ size: usize,
+ align: usize,
+ offset: usize,
+ allocator: AllocationSemantics,
+ ) -> Address {
+ let allocator = unsafe {
+ self.allocators
+ .get_allocator_mut(self.config.allocator_mapping[allocator])
+ };
+ // The value should be default/unset at the beginning of an allocation request.
+ debug_assert!(!allocator.get_context().is_no_gc_on_fail());
+ allocator.alloc_slow_no_gc(size, align, offset)
+ }

// Note that this method is slow, and we expect VM bindings that care about performance to implement the allocation fastpath sequence in their bindings.
@@ -264,7 +300,7 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
fn prepare(&mut self, tls: VMWorkerThread);
/// Do the release work for this mutator.
fn release(&mut self, tls: VMWorkerThread);
- /// Allocate memory for an object.
+ /// Allocate memory for an object. This function will trigger a GC on failed allocation.
///
/// Arguments:
/// * `size`: the number of bytes required for the object.
@@ -278,7 +314,23 @@
offset: usize,
allocator: AllocationSemantics,
) -> Address;
- /// The slow path allocation. This is only useful when the binding
+ /// Allocate memory for an object. This function will not trigger a GC on failed allocation.
+ ///
+ /// Arguments:
+ /// * `size`: the number of bytes required for the object.
+ /// * `align`: required alignment for the object.
+ /// * `offset`: offset associated with the alignment. The result plus the offset will be aligned to the given alignment.
+ /// * `allocator`: the allocation semantic used for this object.
+ fn alloc_no_gc(
+ &mut self,
+ size: usize,
+ align: usize,
+ offset: usize,
+ allocator: AllocationSemantics,
+ ) -> Address;
+ /// The slow path allocation for [`MutatorContext::alloc`]. This function will trigger a GC on failed allocation.
+ ///
+ /// This is only useful when the binding
/// implements the fast path allocation, and would like to explicitly
/// call the slow path after the fast path allocation fails.
fn alloc_slow(
@@ -288,6 +340,18 @@
offset: usize,
allocator: AllocationSemantics,
) -> Address;
+ /// The slow path allocation for [`MutatorContext::alloc_no_gc`]. This function will not trigger a GC on failed allocation.
+ ///
+ /// This is only useful when the binding
+ /// implements the fast path allocation, and would like to explicitly
+ /// call the slow path after the fast path allocation fails.
+ fn alloc_slow_no_gc(
+ &mut self,
+ size: usize,
+ align: usize,
+ offset: usize,
+ allocator: AllocationSemantics,
+ ) -> Address;
/// Perform post-allocation actions. For many allocators none are
/// required.
///
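The `alloc_no_gc`/`alloc_slow_no_gc` pair mirrors the usual fastpath/slowpath split described in the trait documentation above: the binding inlines a bump-pointer fastpath and only calls into MMTk when the local buffer is exhausted. A minimal sketch under assumed state (the `cursor`/`limit` pair stands in for the binding's cached bump-allocator fields; alignment handling in the fastpath is omitted for brevity):

```rust
use mmtk::plan::{Mutator, MutatorContext};
use mmtk::util::Address;
use mmtk::vm::VMBinding;
use mmtk::AllocationSemantics;

/// A simplified inlined fastpath: bump `cursor` if the object fits, otherwise
/// fall through to MMTk's slowpath without permitting a GC.
fn fastpath_alloc<VM: VMBinding>(
    mutator: &mut Mutator<VM>,
    cursor: &mut Address,
    limit: Address,
    size: usize,
) -> Address {
    let result = *cursor;
    let new_cursor = result + size;
    if new_cursor > limit {
        // Fastpath buffer exhausted. `alloc_slow_no_gc` keeps this call site
        // GC-free; it returns zero on failure, which the caller must check.
        return mutator.alloc_slow_no_gc(size, VM::MIN_ALIGNMENT, 0, AllocationSemantics::Default);
    }
    *cursor = new_cursor;
    result
}
```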
4 changes: 2 additions & 2 deletions src/policy/immix/immixspace.rs
@@ -516,8 +516,8 @@ impl<VM: VMBinding> ImmixSpace<VM> {
}

/// Allocate a clean block.
- pub fn get_clean_block(&self, tls: VMThread, copy: bool) -> Option<Block> {
- let block_address = self.acquire(tls, Block::PAGES);
+ pub fn get_clean_block(&self, tls: VMThread, copy: bool, no_gc_on_fail: bool) -> Option<Block> {
+ let block_address = self.acquire(tls, Block::PAGES, no_gc_on_fail);
if block_address.is_zero() {
return None;
}
4 changes: 2 additions & 2 deletions src/policy/largeobjectspace.rs
@@ -303,8 +303,8 @@ impl<VM: VMBinding> LargeObjectSpace<VM> {
}

/// Allocate an object
- pub fn allocate_pages(&self, tls: VMThread, pages: usize) -> Address {
- self.acquire(tls, pages)
+ pub fn allocate_pages(&self, tls: VMThread, pages: usize, no_gc_on_fail: bool) -> Address {
+ self.acquire(tls, pages, no_gc_on_fail)
}

/// Test if the object's mark bit is the same as the given value. If it is not the same,
8 changes: 6 additions & 2 deletions src/policy/lockfreeimmortalspace.rs
@@ -135,7 +135,7 @@ impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM> {
data_pages + meta_pages
}

- fn acquire(&self, _tls: VMThread, pages: usize) -> Address {
+ fn acquire(&self, _tls: VMThread, pages: usize, no_gc_on_fail: bool) -> Address {
trace!("LockFreeImmortalSpace::acquire");
let bytes = conversions::pages_to_bytes(pages);
let start = self
@@ -145,7 +145,11 @@
})
.expect("update cursor failed");
if start + bytes > self.limit {
panic!("OutOfMemory")
if no_gc_on_fail {
return Address::ZERO;
} else {
panic!("OutOfMemory");
}
}
if self.slow_path_zeroing {
crate::util::memory::zero(start, bytes);
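This file shows the clearest instance of the contract that all the `acquire` changes in this PR follow, including `get_clean_block` and `allocate_pages` above: with `no_gc_on_fail` set, a space neither blocks for GC nor panics on exhaustion; it returns a zero address that callers propagate upward until it surfaces as the zero return of `alloc_no_gc`. A hypothetical caller-side sketch (the `Space` bound and helper name are assumptions, not code from this diff):

```rust
use mmtk::policy::space::Space;
use mmtk::util::opaque_pointer::VMThread;
use mmtk::util::Address;
use mmtk::vm::VMBinding;

// Hypothetical helper: `S` stands in for any policy space with the new flag.
fn try_acquire<VM: VMBinding, S: Space<VM>>(space: &S, tls: VMThread, pages: usize) -> Address {
    // Request pages without permitting a GC on failure.
    let start = space.acquire(tls, pages, /* no_gc_on_fail */ true);
    // Zero means "out of memory, and no GC was attempted"; otherwise `start`
    // is the beginning of the freshly acquired region.
    start
}
```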
10 changes: 8 additions & 2 deletions src/policy/marksweepspace/native_ms/global.rs
@@ -402,7 +402,13 @@ impl<VM: VMBinding> MarkSweepSpace<VM> {
crate::util::metadata::vo_bit::bzero_vo_bit(block.start(), Block::BYTES);
}

- pub fn acquire_block(&self, tls: VMThread, size: usize, align: usize) -> BlockAcquireResult {
+ pub fn acquire_block(
+ &self,
+ tls: VMThread,
+ size: usize,
+ align: usize,
+ no_gc_on_fail: bool,
+ ) -> BlockAcquireResult {
{
let mut abandoned = self.abandoned.lock().unwrap();
let bin = mi_bin::<VM>(size, align);
@@ -424,7 +430,7 @@
}
}

- let acquired = self.acquire(tls, Block::BYTES >> LOG_BYTES_IN_PAGE);
+ let acquired = self.acquire(tls, Block::BYTES >> LOG_BYTES_IN_PAGE, no_gc_on_fail);
if acquired.is_zero() {
BlockAcquireResult::Exhausted
} else {
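At the mark-sweep level the failure mode is already explicit in the return type, so the new flag only changes how `Exhausted` can arise. A sketch of how an allocator might consume the new signature (the module path and the `alloc_from_block` helper are assumptions; only the `Exhausted` variant is named by this hunk):

```rust
use mmtk::policy::marksweepspace::native_ms::{BlockAcquireResult, MarkSweepSpace};
use mmtk::util::opaque_pointer::VMThread;
use mmtk::util::Address;
use mmtk::vm::VMBinding;

// Hypothetical consumer: turn an exhausted block request into the zero
// address that `alloc_no_gc` reports to the binding.
fn try_alloc_block<VM: VMBinding>(
    space: &MarkSweepSpace<VM>,
    tls: VMThread,
    size: usize,
    align: usize,
) -> Address {
    match space.acquire_block(tls, size, align, /* no_gc_on_fail */ true) {
        // Under `no_gc_on_fail`, this can now mean "failed without a GC".
        BlockAcquireResult::Exhausted => Address::ZERO,
        // Any successfully acquired block (fresh or reused) satisfies the
        // allocation via an assumed helper.
        result => alloc_from_block(result, size, align),
    }
}
```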