From f622666b06f03cc0b6193b3af446cc4eadabd2b1 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Mon, 22 Jan 2024 09:47:23 +0000
Subject: [PATCH] build based on f03ff94

---
 dev/.documenter-siteinfo.json          |  2 +-
 dev/examples/index.html                | 18 +++++++++---------
 dev/index.html                         |  2 +-
 dev/reference/advanced/index.html      |  6 +++---
 dev/reference/arraymethods/index.html  |  6 +++---
 dev/reference/backends/index.html      |  2 +-
 dev/reference/helpers/index.html       |  6 +++---
 dev/reference/partition/index.html     |  4 ++--
 dev/reference/primitives/index.html    | 12 ++++++------
 dev/reference/psparsematrix/index.html |  4 ++--
 dev/reference/ptimer/index.html        |  2 +-
 dev/reference/pvector/index.html       | 10 +++++-----
 dev/refindex/index.html                |  2 +-
 dev/usage/index.html                   |  2 +-
 14 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index c68c44e7..bce97bfe 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-21T16:02:16","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-22T09:47:19","documenter_version":"1.2.1"}}
\ No newline at end of file
diff --git a/dev/examples/index.html b/dev/examples/index.html
index 7dd95819..ce89a83e 100644
--- a/dev/examples/index.html
+++ b/dev/examples/index.html
@@ -20,19 +20,19 @@
[ Int[] ]
end
end
4-element Vector{Vector{Vector{Int64}}}:
- [[4, 15, 30], [18, 9, 22], [9, 1, 19], [7, 7, 13]]
+ [[3, 3, 4], [25, 16, 20], [1, 5, 29], [21, 8, 10]]
  [[]]
  [[]]
  [[]]

Note that only the first entry contains meaningful data in the previous output.

a_rcv = scatter(a_snd,source=1)
4-element Vector{Vector{Int64}}:
- [4, 15, 30]
- [18, 9, 22]
- [9, 1, 19]
- [7, 7, 13]
+ [3, 3, 4]
+ [25, 16, 20]
+ [1, 5, 29]
+ [21, 8, 10]

After the scatter, all the parts have received their chunk. Now, we can count in parallel.

b_snd = map(ai->count(isodd,ai),a_rcv)
4-element Vector{Int64}:
- 1
+ 2
  1
  3
- 3
+ 1

Finally, we reduce the partial sums.

b_rcv = reduction(+,b_snd,init=0,destination=1)
4-element Vector{Int64}:
- 8
+ 7
  0
  0
  0

Only the destination rank will receive the correct result.

Point-to-point communication

Each rank generates some message (in this case an integer 10 times the current rank id). Each rank sends this data to the next rank. The last one sends it to the first, closing the circle. After repeating this exchange a number of times equal to the number of ranks, we check that we ended up with the original message.

First, each rank generates the id of the neighbor to send data to.

using PartitionedArrays
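The diff skips ahead at this point, so the rest of this example is not shown here. The following is a minimal sketch of the ring exchange just described, built only on the ExchangeGraph and exchange primitives documented in the reference pages; the variable names are illustrative and need not match the original page.

ranks = LinearIndices((4,))   # sequential back-end, as in the rest of the examples
np = length(ranks)

snd = map(rank -> [10*rank], ranks)                    # the message generated by each rank

neigs_snd = map(rank -> [mod1(rank + 1, np)], ranks)   # each rank sends to the next one
neigs_rcv = map(rank -> [mod1(rank - 1, np)], ranks)   # and receives from the previous one
graph = ExchangeGraph(neigs_snd, neigs_rcv)

function ring_roundtrip(snd, graph, np)
    for _ in 1:np
        snd = exchange(snd, graph) |> fetch
    end
    snd
end

rcv = ring_roundtrip(snd, graph, np)
map(only, rcv) == [10*rank for rank in ranks]   # true: each message travelled the full circle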
@@ -150,4 +150,4 @@
  []
  [-2.0]

Update the matrix and the vector accordingly:

pvector!(b,VV,cacheb) |> wait
 psparse!(A,V,cacheA) |> wait

The solution should be the same as before in this case.

r = A*x - b
norm(r)
6.280369834735101e-16

This page was generated using Literate.jl.

diff --git a/dev/index.html index e3ab86c3..b75316c5 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@
Introduction · PartitionedArrays.jl

PartitionedArrays.jl

Welcome to the documentation for PartitionedArrays.jl!

What

This package provides distributed (a.k.a. partitioned) vectors and sparse matrices like the ones needed in distributed finite difference, finite volume, or finite element computations. Packages such as GridapDistributed have shown weak and strong scaling up to tens of thousands of CPU cores in the distributed assembly of sparse linear systems when using PartitionedArrays as their distributed linear algebra back-end. See this publication for further details:

Santiago Badia, Alberto F. Martín, and Francesc Verdugo (2022). "GridapDistributed: a massively parallel finite element toolbox in Julia". Journal of Open Source Software, 7(74), 4157. doi: 10.21105/joss.04157.

Why

The main objective of this package is to avoid having to interface directly with MPI or MPI-based libraries when prototyping and debugging distributed parallel codes. MPI-based applications are executed in batch mode with commands like mpiexec -np 4 julia input.jl, which break the Julia development workflow. In particular, one starts a fresh Julia session at each run, making it difficult to reuse compiled code between runs. In addition, packages like Revise and Debugger are also difficult to use in combination with MPI computations on several processes.

To overcome these limitations, PartitionedArrays considers a data-oriented programming model that allows one to write distributed algorithms in a generic way, independent from the message passing back-end used to run them. MPI is one of the possible back-ends available in PartitionedArrays, used to deploy large computations on computational clusters. However, one can also use other back-ends that are able to run on standard serial Julia sessions, which allows one to use the standard Julia workflow to develop and debug complex codes in an effective way.

diff --git a/dev/reference/advanced/index.html index 4ac345ec..743f85b6 100644 --- a/dev/reference/advanced/index.html +++ b/dev/reference/advanced/index.html @@ -1,5 +1,5 @@
Advanced · PartitionedArrays.jl

Advanced

Custom partitions

PartitionedArrays.LocalIndicesType
struct LocalIndices

Container for arbitrary local indices.

Properties

  • n_global::Int: Number of global indices.
  • owner::Int32: Id of the part that stores the local indices
  • local_to_global::Vector{Int}: Global ids of the local indices in this part. local_to_global[i_local] is the global id corresponding to the local index number i_local.
  • local_to_owner::Vector{Int32}: Owners of the local ids. local_to_owner[i_local] is the id of the owner of the local index number i_local.

Supertype hierarchy

LocalIndices <: AbstractLocalIndices
source
PartitionedArrays.LocalIndicesMethod
LocalIndices(n_global,owner,local_to_global,local_to_owner)

Build an instance of LocalIndices from the underlying properties n_global, owner, local_to_global, and local_to_owner. The types of these variables need to match the type of the properties in LocalIndices.

source
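As an illustration, a small hand-built LocalIndices instance for a hypothetical part of a partition of 1:10 (the concrete numbers are made up for this sketch):

using PartitionedArrays

# Part 2 owns the global ids 4:7 and has one ghost id (3), owned by part 1.
n_global = 10
owner = Int32(2)
local_to_global = [4, 5, 6, 7, 3]
local_to_owner = Int32[2, 2, 2, 2, 1]

indices = LocalIndices(n_global, owner, local_to_global, local_to_owner)

own_to_global(indices)    # the global ids owned by this part: 4, 5, 6, 7
ghost_to_global(indices)  # the ghost ids: 3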
PartitionedArrays.OwnAndGhostIndicesType
OwnAndGhostIndices

Container for local indices stored as own and ghost indices separately. Local indices are defined by concatenating own and ghost ones.

Properties

  • own::OwnIndices: Container for the own indices.
  • ghost::GhostIndices: Container for the ghost indices.
  • global_to_owner: [optional: it can be nothing] Vector containing the owner of each global id.

Supertype hierarchy

OwnAndGhostIndices{A} <: AbstractLocalIndices

where A=typeof(global_to_owner).

source
PartitionedArrays.OwnIndicesType
struct OwnIndices

Container for own indices.

Properties

  • n_global::Int: Number of global indices
  • owner::Int32: Id of the part that owns these indices
  • own_to_global::Vector{Int}: Global ids of the indices owned by this part. own_to_global[i_own] is the global id corresponding to the own index number i_own.

Supertype hierarchy

OwnIndices <: Any
source
PartitionedArrays.GhostIndicesType
struct GhostIndices

Container for ghost indices.

Properties

  • n_global::Int: Number of global indices
  • ghost_to_global::Vector{Int}: Global ids of the ghost indices in this part. ghost_to_global[i_ghost] is the global id corresponding to the ghost index number i_ghost.
  • ghost_to_owner::Vector{Int32}: Owners of the ghost ids. ghost_to_owner[i_ghost] is the id of the owner of the ghost index number i_ghost.

Supertype hierarchy

GhostIndices <: Any
source
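The same part can be described with own and ghost indices stored separately. A minimal sketch, assuming positional constructors that follow the property order listed above:

using PartitionedArrays

own   = OwnIndices(10, Int32(2), [4, 5, 6, 7])  # n_global, owner, own_to_global
ghost = GhostIndices(10, [3], Int32[1])         # n_global, ghost_to_global, ghost_to_owner
indices = OwnAndGhostIndices(own, ghost)

local_to_global(indices)  # own ids first, then ghost ids: 4, 5, 6, 7, 3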

Transform partitions

PartitionedArrays.replace_ghostFunction
replace_ghost(indices,gids,owners)

Replace the ghost indices in indices with the global ids in gids and their owners in owners. The returned object takes ownership of gids and owners. This method only makes sense if indices stores ghost ids in separate vectors, as in OwnAndGhostIndices. gids should be unique and must not be owned by indices.

source
PartitionedArrays.union_ghostFunction
union_ghost(indices,gids,owners)

Make the union of the ghost indices in indices with the global indices gids and owners owners. Return an object of the same type as indices with the new ghost indices and the same own indices as in indices. The result does not take ownership of gids and owners.

source
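Continuing the sketch above (indices is the OwnAndGhostIndices instance built there), the ghost set can be replaced or extended; the global ids and owners below are illustrative:

# Replace the ghost ids of the part built above with 8 and 9, both owned by part 3.
indices2 = replace_ghost(indices, [8, 9], Int32[3, 3])
ghost_to_global(indices2)  # 8, 9

# Add the global id 10 (owned by part 3) to the existing ghost ids instead.
indices3 = union_ghost(indices, [10], Int32[3])
ghost_to_global(indices3)  # now contains both 3 and 10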
PartitionedArrays.find_ownerFunction
find_owner(index_partition,global_ids)

Find the owners of the global ids in global_ids. The input global_ids is a vector of vectors distributed over the same parts as index_partition. Each part will look for the owners in parallel, when using a parallel back-end.

Example

julia> using PartitionedArrays
 
 julia> rank = LinearIndices((4,));
 
@@ -22,5 +22,5 @@
  [2]
  [2, 3]
  [3, 1]
 [4, 4, 1]
source

Transform indices

Local vector storage

PartitionedArrays.OwnAndGhostVectorsType
struct OwnAndGhostVectors{A,C,T}

Vector type that stores the local values of a PVector instance using a vector of own values, a vector of ghost values, and a permutation. This is not the default data layout of PVector (which is a plain vector), but corresponds to the layout of distributed vectors in other packages, such as PETSc. Use this type to avoid duplicating memory when passing data to these other packages.

Properties

  • own_values::A: The vector of own values.
  • ghost_values::A: The vector of ghost values.
  • permutation::C: A permutation vector such that vcat(own_values,ghost_values)[permutation] corresponds to the local values.

Supertype hierarchy

OwnAndGhostVectors{A,C,T} <: AbstractVector{T}
source
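A minimal sketch of this layout, assuming a positional constructor that follows the property order listed above:

using PartitionedArrays

own_values   = [10.0, 20.0, 30.0]
ghost_values = [40.0]
permutation  = [1, 2, 4, 3]   # local id -> position in vcat(own_values, ghost_values)

v = OwnAndGhostVectors(own_values, ghost_values, permutation)

collect(v) == vcat(own_values, ghost_values)[permutation]   # the local values: 10, 20, 40, 30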

Assembly

PartitionedArrays.assembly_graphFunction
assembly_graph(index_partition;kwargs...)

Return an instance of ExchangeGraph representing the communication graph needed to perform assembly of distributed vectors defined on the index partition index_partition. kwargs are delegated to ExchangeGraph in order to find the receiving neighbors from the sending ones.

Equivalent to

neighbors = assembly_neighbors(index_partition;kwargs...)
ExchangeGraph(neighbors...)
source
PartitionedArrays.assembly_neighborsFunction
neigs_snd, neigs_rcv = assembly_neighbors(index_partition;kwargs...)

Return the ids of the neighbor parts from which we send and receive data, respectively, in the assembly of distributed vectors defined on the index partition index_partition. kwargs are delegated to ExchangeGraph in order to find the receiving neighbors from the sending ones.

source
PartitionedArrays.assembly_local_indicesFunction
ids_snd, ids_rcv = assembly_local_indices(index_partition)

Return the local ids to be sent and received in the assembly of distributed vectors defined on the index partition index_partition.

Local values corresponding to the local indices in ids_snd[i] (respectively ids_rcv[i]) are sent to part neigs_snd[i] (respectively neigs_rcv[i]), where neigs_snd, neigs_rcv = assembly_neighbors(index_partition).

source
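A small sketch tying the three functions above together, reusing the kind of hand-built OwnAndGhostIndices partition shown earlier on this page (the concrete ids are illustrative):

using PartitionedArrays

# Two parts of 1:8. Part 1 owns 1:4 and has ghost id 5 (owned by part 2);
# part 2 owns 5:8 and has ghost id 4 (owned by part 1).
index_partition = [
    OwnAndGhostIndices(OwnIndices(8, Int32(1), collect(1:4)), GhostIndices(8, [5], Int32[2])),
    OwnAndGhostIndices(OwnIndices(8, Int32(2), collect(5:8)), GhostIndices(8, [4], Int32[1])),
]

neigs_snd, neigs_rcv = assembly_neighbors(index_partition)   # [[2], [1]] and [[2], [1]]
ids_snd, ids_rcv = assembly_local_indices(index_partition)   # local ids exchanged during assembly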
diff --git a/dev/reference/arraymethods/index.html index 8079baf0..b4ad923d 100644 --- a/dev/reference/arraymethods/index.html +++ b/dev/reference/arraymethods/index.html @@ -1,5 +1,5 @@
Array methods · PartitionedArrays.jl

Array methods

Indices

Transformations

PartitionedArrays.map_mainFunction
map_main(f,args...;kwargs...)

Like map(f,args...) but apply f only to one component of the arrays in args.

Optional key-word arguments

  • main = MAIN: The linear index of the component to map
  • otherwise = (args...)->nothing: The function to apply when mapping indices different from main.

Examples

julia> using PartitionedArrays
 
 julia> a = [1,2,3,4]
 4-element Vector{Int64}:
@@ -13,7 +13,7 @@
    nothing
  -2
    nothing
-   nothing
source
PartitionedArrays.tuple_of_arraysFunction
tuple_of_arrays(a)

Convert the array of tuples a into a tuple of arrays.

Examples

julia> using PartitionedArrays
 
 julia> a = [(1,2),(3,4),(5,6)]
 3-element Vector{Tuple{Int64, Int64}}:
@@ -22,4 +22,4 @@
  (5, 6)
 
 julia> b,c = tuple_of_arrays(a)
([1, 3, 5], [2, 4, 6])
source
diff --git a/dev/reference/backends/index.html index 2fd404e1..1079cc09 100644 --- a/dev/reference/backends/index.html +++ b/dev/reference/backends/index.html @@ -1,2 +1,2 @@
Back-ends · PartitionedArrays.jl

Back-ends

MPI

PartitionedArrays.MPIArrayType
MPIArray{T,N}

Represent an array of element type T and number of dimensions N, where each item in the array is stored in a separate MPI process. I.e., each MPI rank stores only one item. For arrays that can store more than one item per rank see PVector or PSparseMatrix. This struct implements the Julia array interface. However, using setindex! and getindex is disabled for performance reasons (communication cost).

Properties

The fields of this struct (and the inner constructors) are private. To generate an instance of MPIArray use function distribute_with_mpi.

Supertype hierarchy

MPIArray{T,N} <: AbstractArray{T,N}
source
PartitionedArrays.distribute_with_mpiMethod
distribute_with_mpi(a;comm::MPI.Comm=MPI.COMM_WORLD,duplicate_comm=true)

Create an MPIArray instance by distributing the items in the collection a over the ranks of the given MPI communicator comm. Each rank receives exactly one item, thus length(a) and the communicator size need to match. For arrays that can store more than one item per rank see PVector or PSparseMatrix. If duplicate_comm=false the result will take ownership of the given communicator. Otherwise, a copy will be done with MPI.Comm_dup(comm).

Note

This function calls MPI.Init() if MPI is not initialized yet.

source
PartitionedArrays.with_mpiMethod
with_mpi(f;comm=MPI.COMM_WORLD,duplicate_comm=true)

Call f(a->distribute_with_mpi(a;comm,duplicate_comm)) and abort MPI if there was an error. This is the safest way of running the function f using MPI.

Note

This function calls MPI.Init() if MPI is not initialized yet.

source
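For illustration, a minimal MPI driver script following the two docstrings above; the file name and the number of ranks are arbitrary, and the script would be launched as mpiexec -np 4 julia hello.jl:

# hello.jl
using PartitionedArrays

with_mpi() do distribute
    # distribute wraps distribute_with_mpi with the chosen communicator.
    ranks = distribute(LinearIndices((4,)))
    map(ranks) do rank
        println("Hello from MPI rank $rank")
    end
end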

Debug

PartitionedArrays.DebugArrayType
struct DebugArray{T,N}

Data structure that emulates the limitations of MPIArray, but that can be used on a standard sequential (a.k.a. serial) Julia session. This struct implements the Julia array interface. Like for MPIArray, using setindex! and getindex on DebugArray is disabled since this will not be efficient in actual parallel runs (communication cost).

Properties

The fields of this struct are private.

Supertype hierarchy

DebugArray{T,N} <: AbstractArray{T,N}
source
PartitionedArrays.DebugArrayMethod
DebugArray(a)

Create a DebugArray{T,N} data object from the items in collection a, where T=eltype(a) and N=ndims(a). If a::Array{T,N}, then the result takes ownership of the input. Otherwise, a copy of the input is created.

source
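The same kind of driver can be exercised in a plain Julia session by distributing over a DebugArray instead (a sketch):

using PartitionedArrays

ranks = DebugArray(LinearIndices((4,)))   # emulates 4 ranks in a serial session

map(ranks) do rank
    10*rank
end
# a DebugArray holding the values 10, 20, 30, 40

# ranks[1]  # would error: getindex is disabled, mimicking the restrictions of MPIArray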
diff --git a/dev/reference/helpers/index.html index 0003c0cc..ecdd5026 100644 --- a/dev/reference/helpers/index.html +++ b/dev/reference/helpers/index.html @@ -1,6 +1,6 @@
Helpers · PartitionedArrays.jl

Helpers

JaggedArray

PartitionedArrays.GenericJaggedArrayType
struct GenericJaggedArray{V,A,B}

Generalization of JaggedArray, where the fields data and ptrs are allowed to be any array-like object.

Properties

data::A
ptrs::B

Supertype hierarchy

GenericJaggedArray{V,A,B} <: AbstractVector{V}

Given a::GenericJaggedArray, V is typeof(view(a.data,a.ptrs[i]:(a.ptrs[i+1]-1))).

source
PartitionedArrays.JaggedArrayType
struct JaggedArray{T,Ti}

Efficient implementation of a vector of vectors. The inner vectors are stored one after the other in consecutive memory locations using an auxiliary vector data. The range of indices corresponding to each inner vector are encoded using a vector of integers ptrs.

Properties

data::Vector{T}
-ptrs::Vector{Ti}

Given a::JaggedArray, a.data contains the inner vectors. The i-th inner vector is stored in the range a.ptrs[i]:(a.ptrs[i+1]-1). The number of inner vectors (i.e. length(a)) is length(a.ptrs)-1. a[i] returns a view of a.data restricted to the range a.ptrs[i]:(a.ptrs[i+1]-1).

Supertype hierarchy

JaggedArray{T,Ti} <: AbstractVector{V}

Given a::JaggedArray, V is typeof(view(a.data,a.ptrs[i]:(a.ptrs[i+1]-1))).

source
PartitionedArrays.JaggedArrayMethod
JaggedArray(a)

Create a JaggedArray object from the vector of vectors a. If a::JaggedArray, then a is returned. Otherwise, the contents of a are copied.

source
PartitionedArrays.JaggedArrayMethod
JaggedArray(data::Vector,ptrs::Vector)

Create a JaggedArray from the given data and ptrs fields. The resulting object stores references to the given vectors.

source
PartitionedArrays.jagged_arrayMethod
jagged_array(data,ptrs)

Create a JaggedArray or a GenericJaggedArray object depending on the type of data and ptrs. The returned object stores references to the given inputs.

source
PartitionedArrays.length_to_ptrs!Method
length_to_ptrs!(ptrs)

Compute the field ptrs of a JaggedArray. length(ptrs) should be the number of sub-vectors in the jagged array plus one. At input, ptrs[i+1] is the length of the i-th sub-vector. At output, ptrs[i]:(ptrs[i+1]-1) contains the range where the i-th sub-vector is stored in the data field of the jagged array.

source
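A short illustration of the jagged layout and of length_to_ptrs! (the data is made up):

using PartitionedArrays

a = JaggedArray([[1, 2], [3, 4, 5], Int[]])
a.data   # [1, 2, 3, 4, 5]
a.ptrs   # [1, 3, 6, 6]
a[2]     # a view over a.data equal to [3, 4, 5]

# Building ptrs from the sub-vector lengths 2, 3, 0 (the first entry is a placeholder):
ptrs = [0, 2, 3, 0]
length_to_ptrs!(ptrs)
ptrs     # [1, 3, 6, 6]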

Sparse utils

PartitionedArrays.nziteratorFunction
for (i,j,v) in nziterator(a)
 ...
end

Iterate over the nonzero entries of a, returning the corresponding row i, column j, and value v.

source
diff --git a/dev/reference/partition/index.html index 2151f0a4..86e364f8 100644 --- a/dev/reference/partition/index.html +++ b/dev/reference/partition/index.html @@ -15,7 +15,7 @@
 [1, 2, 5, 6]
 [3, 4, 7, 8]
 [9, 10, 13, 14]
 [11, 12, 15, 16]
source
PartitionedArrays.variable_partitionFunction
variable_partition(n_own,n_global[;start])

Build a 1D variable-size block partition of the range 1:n_global. The output is a vector of vectors containing the indices in each component of the partition. The eltype of the result implements the AbstractLocalIndices interface.

Arguments

  • n_own::AbstractArray{<:Integer}: Array containing the block size for each part.
  • n_global::Integer: Number of global indices. It should be equal to sum(n_own).
  • start::AbstractArray{Int}=scan(+,n_own,type=:exclusive,init=1): First global index in each part.

We ask the user to provide n_global and (optionally) start since discovering them requires communications.

Examples

julia> using PartitionedArrays
 
 julia> rank = LinearIndices((4,));
 
@@ -26,4 +26,4 @@
  [1, 2, 3]
  [4, 5]
  [6, 7]
 [8, 9, 10]
source
PartitionedArrays.trivial_partitionFunction
trivial_partition(ranks,n;destination=MAIN)
Warning

Document me!

source
PartitionedArrays.partition_from_colorFunction
partition_from_color(ranks,global_to_color;multicast=false,source=MAIN)

Build an arbitrary 1d partition by defining the parts via the argument global_to_color (see below). The output is a vector of vectors containing the indices in each component of the partition. The eltype of the result implements the AbstractLocalIndices interface.

Arguments

  • ranks: Array containing the distribution of ranks.
  • global_to_color: If multicast==false, global_to_color[gid] contains the part id that owns the global id gid. If multicast==true, then global_to_color[source][gid] contains the part id that owns the global id gid.

Key-word arguments

  • multicast=false
  • source=MAIN

This function is useful when generating a partition using a graph partitioner such as METIS. The argument global_to_color is the usual output of such tools.

source
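A small sketch of partition_from_color with a hand-made coloring; in practice global_to_color would come from a graph partitioner:

using PartitionedArrays

ranks = LinearIndices((3,))
global_to_color = [1, 1, 2, 2, 3, 3]   # owner of each of the 6 global ids
index_partition = partition_from_color(ranks, global_to_color)

map(own_to_global, index_partition)    # [1, 2], [3, 4], [5, 6]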

AbstractLocalIndices

PartitionedArrays.AbstractLocalIndicesType
abstract type AbstractLocalIndices

Abstract type representing the local, own, and ghost indices in a part of a partition of a range 1:n with length n.

Notation

Let 1:n be an integer range with length n. We denote the indices in 1:n as the global indices. Let us consider a partition of 1:n. The indices in a part in the partition are called the own indices of this part. I.e., each part owns a subset of 1:n. All these subsets are disjoint. Let us assume that each part is equipped with a second set of indices called the ghost indices. The set of ghost indices in a given part is an arbitrary subset of the global indices 1:n that are owned by other parts. The union of the own and ghost indices is referred to as the local indices of this part. The sets of local indices might overlap between the different parts.

The sets of own, ghost, and local indices are stored using vector-like containers in concrete implementations of AbstractLocalIndices. This equips them with a certain order. The i-th own index in a part is defined as the one being stored at index i in the array that contains the own indices in this part (idem for ghost and local indices). The map between indices in these ordered index sets are given by functions such as local_to_global, own_to_local etc.

Supertype hierarchy

AbstractLocalIndices <: AbstractVector{Int}
source
PartitionedArrays.local_to_globalFunction
local_to_global(indices)

Return an array with the global indices of the local indices in indices.

source
PartitionedArrays.own_to_globalFunction
own_to_global(indices)

Return an array with the global indices of the own indices in indices.

source
PartitionedArrays.ghost_to_globalFunction
ghost_to_global(indices)

Return an array with the global indices of the ghost indices in indices.

source
PartitionedArrays.local_to_ownerFunction
local_to_owner(indices)

Return an array with the owners of the local indices in indices.

source
PartitionedArrays.own_to_ownerFunction
own_to_owner(indices)

Return an array with the owners of the own indices in indices.

source
PartitionedArrays.ghost_to_ownerFunction
ghost_to_owner(indices)

Return an array with the owners of the ghost indices in indices.

source
PartitionedArrays.global_to_localFunction
global_to_local(indices)

Return an array with the inverse index map of local_to_global(indices).

source
PartitionedArrays.global_to_ownFunction
global_to_own(indices)

Return an array with the inverse index map of own_to_global(indices).

source
PartitionedArrays.global_to_ghostFunction
global_to_ghost(indices)

Return an array with the inverse index map of ghost_to_global(indices).

source
PartitionedArrays.own_to_localFunction
own_to_local(indices)

Return an array with the local ids of the own indices in indices.

source
PartitionedArrays.ghost_to_localFunction
ghost_to_local(indices)

Return an array with the local ids of the ghost indices in indices.

source
PartitionedArrays.local_to_ownFunction
local_to_own(indices)

Return an array with the inverse index map of own_to_local(indices).

source
PartitionedArrays.local_to_ghostFunction
local_to_ghost(indices)

Return an array with the inverse index map of ghost_to_local(indices).

source
PartitionedArrays.local_lengthFunction
local_length(indices)

Get number of local ids in indices.

source
PartitionedArrays.own_lengthFunction
own_length(indices)

Get number of own ids in indices.

source
PartitionedArrays.ghost_lengthFunction
ghost_length(indices)

Get number of ghost ids in indices.

source
PartitionedArrays.global_lengthFunction
global_length(indices)

Get number of global ids associated with indices.

source
PartitionedArrays.part_idFunction
part_id(indices)

Return the id of the part that is storing indices.

source
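A short sketch exercising some of the query functions above on a variable block partition:

using PartitionedArrays

ranks = LinearIndices((4,))
n_own = map(rank -> rank + 1, ranks)            # block sizes 2, 3, 4, 5
index_partition = variable_partition(n_own, sum(n_own))

map(own_length, index_partition)     # 2, 3, 4, 5
map(global_length, index_partition)  # 14 on every part
map(part_id, index_partition)        # 1, 2, 3, 4
map(own_to_global, index_partition)  # [1, 2], [3, 4, 5], [6, 7, 8, 9], [10, 11, 12, 13, 14]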

PRange

PartitionedArrays.PRangeType
struct PRange{A}

PRange (partitioned range) is a type representing a range of indices 1:n partitioned into several parts. This type is used to represent the axes of instances of PVector and PSparseMatrix.

Properties

  • partition::A

The item partition[i] is an object that contains information about the own, ghost, and local indices of part number i. typeof(partition[i]) is a type that implements the methods of the AbstractLocalIndices interface. Use this interface to access the underlying information about own, ghost, and local indices.

Supertype hierarchy

PRange{A} <: AbstractUnitRange{Int}
source
PartitionedArrays.partitionMethod
partition(a::PRange)

Get a.partition.

source
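PRange simply wraps such a partition so that it can be used as an axis. A sketch:

using PartitionedArrays

ranks = LinearIndices((4,))
row_partition = variable_partition(map(rank -> 3, ranks), 12)
rows = PRange(row_partition)

length(rows)     # 12
partition(rows)  # recovers row_partition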
diff --git a/dev/reference/primitives/index.html index 9af8d2af..b388167f 100644 --- a/dev/reference/primitives/index.html +++ b/dev/reference/primitives/index.html @@ -17,7 +17,7 @@
 3-element Vector{Vector{Int64}}:
  [1, 2, 3]
  [1, 2, 3]
  [1, 2, 3]
source
PartitionedArrays.gather!Function
gather!(rcv,snd;destination=MAIN)

In-place version of gather. It returns rcv. The result array rcv can be allocated with the helper function allocate_gather.

source
PartitionedArrays.allocate_gatherFunction
allocate_gather(snd;destination=MAIN)

Allocate an array to be used in the first argument of gather!.

source
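A sketch of the in-place variant, in the spirit of the gather example above:

using PartitionedArrays

snd = [1, 2, 3, 4]                        # one item per part (sequential back-end)
rcv = allocate_gather(snd; destination=2)
gather!(rcv, snd; destination=2)
rcv   # rcv[2] now holds [1, 2, 3, 4]; the other entries stay empty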

Scatter

PartitionedArrays.scatterFunction
scatter(snd;source=MAIN)

Copy the items in the collection snd[source] into an array of the same size and container type as snd. This function requires length(snd[source]) == length(snd).

Examples

julia> using PartitionedArrays
 
 julia> a = [Int[],[1,2,3],Int[]]
 3-element Vector{Vector{Int64}}:
@@ -29,7 +29,7 @@
 3-element Vector{Int64}:
  1
  2
  3
source
PartitionedArrays.scatter!Function
scatter!(rcv,snd;source=1)

In-place version of scatter. The destination array rcv can be generated with the helper function allocate_scatter. It returns rcv.

source
PartitionedArrays.allocate_scatterFunction
allocate_scatter(snd;source=1)

Allocate an array to be used in the first argument of scatter!.

source
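A sketch of the in-place variant of scatter:

using PartitionedArrays

snd = [Int[], [10, 20, 30], Int[]]        # note that length(snd[2]) == length(snd)
rcv = allocate_scatter(snd; source=2)
scatter!(rcv, snd; source=2)
rcv   # [10, 20, 30], i.e. one item per part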

Emit

PartitionedArrays.multicastFunction
multicast(snd;source=MAIN)

Copy snd[source] into a new array of the same size and type as snd.

Examples

julia> using PartitionedArrays
 
 julia> a = [0,0,2,0]
 4-element Vector{Int64}:
@@ -43,7 +43,7 @@
  2
  2
  2
  2
source
PartitionedArrays.multicast!Function
multicast!(rcv,snd;source=1)

In-place version of multicast. The destination array rcv can be generated with the helper function allocate_multicast. It returns rcv.

source
PartitionedArrays.allocate_multicastFunction
allocate_multicast(snd;source=1)

Allocate an array to be used in the first argument of multicast!.

source

Scan

PartitionedArrays.scanFunction
scan(op,a;init,type)

Return the scan of the values in a for the operation op. Use type=:inclusive or type=:exclusive to use an inclusive or exclusive scan. init will be added to all items in the result. Additionally, for exclusive scans, the first item in the result will be set to init.

Examples

julia> using PartitionedArrays
 
 julia> a = [2,4,1,3]
 4-element Vector{Int64}:
@@ -64,7 +64,7 @@
  1
  3
  7
  8
source
PartitionedArrays.scan!Function
scan!(op,b,a;init,type)

In-place version of scan on the result b.

source
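A sketch of the in-place variant of scan:

using PartitionedArrays

a = [2, 4, 1, 3]
b = similar(a)
scan!(+, b, a; init=0, type=:inclusive)
b   # [2, 6, 7, 10]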

Reduction

PartitionedArrays.reductionFunction
reduction(op, a; destination=MAIN [,init])

Reduce the values in the array a according to the operation op and the initial value init, and store the result in a new array of the same size as a at index destination.

Examples

julia> using PartitionedArrays
 
 julia> a = [1,3,2,4]
 4-element Vector{Int64}:
@@ -78,7 +78,7 @@
   0
  10
   0
   0
source
PartitionedArrays.reduction!Function
reduction!(op, b, a;destination=MAIN [,init])

In-place version of reduction on the result b.

source
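A sketch of the in-place variant of reduction:

using PartitionedArrays

a = [1, 3, 2, 4]
b = zeros(Int, length(a))
reduction!(+, b, a; destination=2, init=0)
b   # [0, 10, 0, 0]: only the destination entry receives the reduced value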

Exchange

PartitionedArrays.ExchangeGraphType
struct ExchangeGraph{A}

Type representing a directed graph to be used in exchanges, see function exchange and exchange!.

Properties

  • snd::A
  • rcv::A

snd[i] contains a list of the outgoing neighbors of node i. rcv[i] contains a list of the incoming neighbors of node i. A is a vector-like container type.

Supertype hierarchy

ExchangeGraph <: Any
source
PartitionedArrays.ExchangeGraphMethod
ExchangeGraph(snd,rcv)

Create an instance of ExchangeGraph from the underlying fields.

source
PartitionedArrays.ExchangeGraphMethod
ExchangeGraph(snd; symmetric=false [rcv, neighbors,find_rcv_ids])

Create an ExchangeGraph object only from the lists of outgoing neighbors in snd. In case the list of incoming neighbors is known, it can be passed as key-word argument by setting rcv and the rest of key-word arguments are ignored. If symmetric==true, then the incoming neighbors are set to snd. Otherwise, either the optional neighbors or find_rcv_ids are considered, in that order. neighbors is also an ExchangeGraph that contains a super set of the outgoing and incoming neighbors associated with snd. It is used to find the incoming neighbors rcv efficiently. If neighbors are not provided, then find_rcv_ids is used (either the user-provided or a default one). find_rcv_ids is a function that implements an algorithm to find the rcv side of the exchange graph out of the snd side information.

source
PartitionedArrays.exchangeFunction
exchange(snd,graph::ExchangeGraph) -> Task

Send the data in snd according to the directed graph graph. This function returns immediately and returns a task that produces the result, allowing for latency hiding. Use fetch to wait and get the result. The objects snd and rcv=fetch(exchange(snd,graph)) are arrays of vectors. The value snd[i][j] is sent to node graph.snd[i][j]. The value rcv[i][j] is the one received from node graph.rcv[i][j].

Examples

julia> using PartitionedArrays
 
 julia> snd_ids = [[3,4],[1,3],[1,4],[2]]
 4-element Vector{Vector{Int64}}:
@@ -105,4 +105,4 @@
  [20, 30]
  [40]
  [10, 20]
 [10, 30]
source
PartitionedArrays.exchange!Function
exchange!(rcv,snd,graph::ExchangeGraph) -> Task

In-place and asynchronous version of exchange. This function returns immediately and returns a task that produces rcv with the updated values. Use fetch to get the updated version of rcv. The input rcv can be allocated with allocate_exchange.

source
PartitionedArrays.allocate_exchangeFunction
allocate_exchange(snd,graph::ExchangeGraph)

Allocate the result to be used in the first argument of exchange!.

source
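A sketch of the in-place variant for a two-node graph:

using PartitionedArrays

graph = ExchangeGraph([[2], [1]]; symmetric=true)   # nodes 1 and 2 exchange with each other
snd = [[10], [20]]
rcv = allocate_exchange(snd, graph)
fetch(exchange!(rcv, snd, graph))
rcv   # rcv[1] == [20] and rcv[2] == [10]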
diff --git a/dev/reference/psparsematrix/index.html index 00cc192d..8e8e7331 100644 --- a/dev/reference/psparsematrix/index.html +++ b/dev/reference/psparsematrix/index.html @@ -1,3 +1,3 @@
PSparseMatrix · PartitionedArrays.jl

PSparseMatrix

Type signature

PartitionedArrays.PSparseMatrixType
struct PSparseMatrix{V,B,C,D,T}

PSparseMatrix (partitioned sparse matrix) is a type representing a matrix whose rows are distributed (a.k.a. partitioned) over different parts for distributed-memory parallel computations. Each part stores a subset of the rows of the matrix and their corresponding non zero columns.

This type overloads numerous array-like operations with corresponding parallel implementations.

Properties

  • matrix_partition::A
  • row_partition::B
  • col_partition::C
  • assembled::Bool

matrix_partition[i] contains a (sparse) matrix with the local rows and the corresponding nonzero columns (the local columns) in the part number i. eltype(matrix_partition) == V. row_partition[i] and col_partition[i] contain information about the local, own, and ghost rows and columns respectively in part number i. The types eltype(row_partition) and eltype(col_partition) implement the AbstractLocalIndices interface. For assembled==true, it is assumed that the matrix data is fully contained in the own rows.

Supertype hierarchy

PSparseMatrix{V,A,B,C,T} <: AbstractMatrix{T}

with T=eltype(V).

source

Accessors

PartitionedArrays.own_ghost_valuesMethod
own_ghost_values(a::PSparseMatrix)

Get a vector of matrices containing the own rows and ghost columns in each part of a.

The row indices of the returned matrices can be mapped to global indices, local indices, and owner by using own_to_global, own_to_local, and own_to_owner, respectively.

The column indices of the returned matrices can be mapped to global indices, local indices, and owner by using ghost_to_global, ghost_to_local, and ghost_to_owner, respectively.

source
PartitionedArrays.ghost_own_valuesMethod
ghost_own_values(a::PSparseMatrix)

Get a vector of matrices containing the ghost rows and own columns in each part of a.

The row indices of the returned matrices can be mapped to global indices, local indices, and owner by using ghost_to_global, ghost_to_local, and ghost_to_owner, respectively.

The column indices of the returned matrices can be mapped to global indices, local indices, and owner by using own_to_global, own_to_local, and own_to_owner, respectively.

source

Constructors

PartitionedArrays.psparseMethod
psparse(f,row_partition,col_partition;assembled)

Build an instance of PSparseMatrix from the initialization function f and the partition for rows and columns row_partition and col_partition.

Equivalent to

matrix_partition = map(f,row_partition,col_partition)
-PSparseMatrix(matrix_partition,row_partition,col_partition,assembled)
source
PartitionedArrays.psparseMethod
psparse([f,]I,J,V,row_partition,col_partition;kwargs...) -> Task

Create an instance of PSparseMatrix by setting arbitrary entries from each of the underlying parts. It returns a task that produces the instance of PSparseMatrix, allowing latency hiding while performing the communications needed in its setup.

source

Assembly

Re-partition

+PSparseMatrix · PartitionedArrays.jl

PSparseMatrix

Type signature

PartitionedArrays.PSparseMatrixType
struct PSparseMatrix{V,A,B,C,T}

PSparseMatrix (partitioned sparse matrix) is a type representing a matrix whose rows are distributed (a.k.a. partitioned) over different parts for distributed-memory parallel computations. Each part stores a subset of the rows of the matrix and their corresponding nonzero columns.

This type overloads numerous array-like operations with corresponding parallel implementations.

Properties

  • matrix_partition::A
  • row_partition::B
  • col_partition::C
  • assembled::Bool

matrix_partition[i] contains a (sparse) matrix with the local rows and the corresponding nonzero columns (the local columns) in the part number i. eltype(matrix_partition) == V. row_partition[i] and col_partition[i] contain information about the local, own, and ghost rows and columns respectively in part number i. The types eltype(row_partition) and eltype(col_partition) implement the AbstractLocalIndices interface. For assembled==true, it is assumed that the matrix data is fully contained in the own rows.

Supertype hierarchy

PSparseMatrix{V,A,B,C,T} <: AbstractMatrix{T}

with T=eltype(V).

source

Accessors

PartitionedArrays.own_ghost_valuesMethod
own_ghost_values(a::PSparseMatrix)

Get a vector of matrices containing the own rows and ghost columns in each part of a.

The row indices of the returned matrices can be mapped to global indices, local indices, and owner by using own_to_global, own_to_local, and own_to_owner, respectively.

The column indices of the returned matrices can be mapped to global indices, local indices, and owner by using ghost_to_global, ghost_to_local, and ghost_to_owner, respectively.

source
PartitionedArrays.ghost_own_valuesMethod
ghost_own_values(a::PSparseMatrix)

Get a vector of matrices containing the ghost rows and own columns in each part of a.

The row indices of the returned matrices can be mapped to global indices, local indices, and owner by using ghost_to_global, ghost_to_local, and ghost_to_owner, respectively.

The column indices of the returned matrices can be mapped to global indices, local indices, and owner by using own_to_global, own_to_local, and own_to_owner, respectively.

source
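As a rough, self-contained sketch of what these accessors return (the two-part layout, the uniform_partition call, and the sparsity pattern are illustrative assumptions; psparse is documented in the Constructors section below):

using PartitionedArrays

rank = LinearIndices((2,))                 # two parts with sequential data
row_partition = uniform_partition(rank,4)  # rows 1:2 and 3:4
col_partition = row_partition

# The entry (2,3) couples the two parts: column 3 becomes a ghost column of part 1.
I = [[1,2,2],[3,4]]
J = [[1,2,3],[3,4]]
V = [[1.0,1.0,1.0],[1.0,1.0]]
A = psparse(I,J,V,row_partition,col_partition) |> fetch

own_ghost_values(A)   # per part: own rows x ghost columns (the coupling entry shows up here)
ghost_own_values(A)   # per part: ghost rows x own columns (typically empty once assembled)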

Constructors

PartitionedArrays.psparseMethod
psparse(f,row_partition,col_partition;assembled)

Build an instance of PSparseMatrix from the initialization function f and the row and column partitions row_partition and col_partition.

Equivalent to

matrix_partition = map(f,row_partition,col_partition)
+PSparseMatrix(matrix_partition,row_partition,col_partition,assembled)
source
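A sketch of the function-based constructor, where f builds the local sparse block of each part from its local row and column indices. The uniform_partition call, local_length, the explicit assembled keyword, and the identity pattern are assumptions used only for illustration:

using PartitionedArrays
using SparseArrays

rank = LinearIndices((2,))
row_partition = uniform_partition(rank,4)
col_partition = row_partition

A = psparse(row_partition,col_partition;assembled=true) do rows,cols
    n = local_length(rows)
    m = local_length(cols)
    sparse(1:n,1:n,ones(n),n,m)   # local identity block, touching own rows only
end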
PartitionedArrays.psparseMethod
psparse([f,]I,J,V,row_partition,col_partition;kwargs...) -> Task

Create an instance of PSparseMatrix by setting arbitrary entries from each of the underlying parts. It returns a task that produces the instance of PSparseMatrix, allowing latency hiding while performing the communications needed in its setup.

source
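Because each part may set entries in rows owned by other parts, the usual pattern looks like the sketch below. The partition helper and the values are assumptions, and duplicate contributions are assumed to be summed during the setup, in line with the default assembly behavior:

using PartitionedArrays

rank = LinearIndices((2,))
row_partition = uniform_partition(rank,4)
col_partition = row_partition

# Part 1 also contributes to row 3, which is owned by part 2.
I = [[1,2,3],[3,4]]
J = [[1,2,3],[3,4]]
V = [[2.0,2.0,1.0],[1.0,2.0]]
t = psparse(I,J,V,row_partition,col_partition)  # a task; communication can overlap other work
A = fetch(t)                                    # entry (3,3) accumulates 1.0 + 1.0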

Assembly

Re-partition

diff --git a/dev/reference/ptimer/index.html b/dev/reference/ptimer/index.html
index 8966ee04..a6fcaea2 100644
--- a/dev/reference/ptimer/index.html
+++ b/dev/reference/ptimer/index.html
@@ -1,2 +1,2 @@
-Benchmarking · PartitionedArrays.jl

Benchmarking

PTimer

PartitionedArrays.PTimerType
struct PTimer{...}

Type used to benchmark distributed applications based on PartitionedArrays.

Properties

Properties and type parameters are private

Supertype hierarchy

PTimer{...} <: Any
source
PartitionedArrays.PTimerMethod
PTimer(ranks;verbose::Bool=false)

Construct an instance of PTimer by using the same data distribution as ranks. If verbose==true, then a message will be printed each time a new section is added to the timer when calling toc!.

source

Measuring elapsed times

PartitionedArrays.tic!Function
tic!(t::PTimer;barrier=false)

Reset the timer t to start measuring the time in a section. If barrier==true, all processes will be synchronized before resetting the timer when using a distributed back-end. For MPI, this will result in a call to MPI.Barrier.

source

Post-process

PartitionedArrays.statisticsFunction
statistics(t::PTimer)

Return a dictionary with statistics of the compute time for the sections currently stored in the timer t.

source
+Benchmarking · PartitionedArrays.jl

Benchmarking

PTimer

PartitionedArrays.PTimerType
struct PTimer{...}

Type used to benchmark distributed applications based on PartitionedArrays.

Properties

Properties and type parameters are private

Supertype hierarchy

PTimer{...} <: Any
source
PartitionedArrays.PTimerMethod
PTimer(ranks;verbose::Bool=false)

Construct an instance of PTimer by using the same data distribution as ranks. If verbose==true, then a message will be printed each time a new section is added to the timer when calling toc!.

source

Measuring elapsed times

PartitionedArrays.tic!Function
tic!(t::PTimer;barrier=false)

Reset the timer t to start measuring the time in a section. If barrier==true, all processes will be synchronized before resetting the timer when using a distributed back-end. For MPI, this will result in a call to MPI.Barrier.

source

Post-process

PartitionedArrays.statisticsFunction
statistics(t::PTimer)

Return a dictionary with statistics of the compute time for the sections currently stored in the timer t.

source
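A sketch of the complete timing workflow. The section name and the sleep call are placeholders, and toc!, which closes a section, is assumed to take the timer and a section name (it is mentioned in the PTimer constructor docstring above but not reproduced on this page):

using PartitionedArrays

rank = LinearIndices((4,))     # four parts with sequential data
t = PTimer(rank;verbose=true)

tic!(t;barrier=true)           # synchronize first (relevant for distributed back-ends), then start timing
sleep(0.1)                     # stand-in for the code being measured
toc!(t,"Sleep")                # close the section under the name "Sleep"

display(t)                     # show the per-section summary table
stats = statistics(t)          # the same information as a dictionary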
diff --git a/dev/reference/pvector/index.html b/dev/reference/pvector/index.html
index cc2301fb..ee10b174 100644
--- a/dev/reference/pvector/index.html
+++ b/dev/reference/pvector/index.html
@@ -1,7 +1,7 @@
-PVector · PartitionedArrays.jl

PVector

Type signature

PartitionedArrays.PVectorType
struct PVector{V,A,B,...}

PVector (partitioned vector) is a type representing a vector whose entries are distributed (a.k.a. partitioned) over different parts for distributed-memory parallel computations.

This type overloads numerous array-like operations with corresponding parallel implementations.

Properties

  • vector_partition::A
  • index_partition::B

vector_partition[i] contains the vector of local values of the i-th part in the data distribution. The first type parameter V corresponds to typeof(vector_partition[i]), i.e., the vector type used to store the local values. The item index_partition[i] implements the AbstractLocalIndices interface, providing information about the local, own, and ghost indices in the i-th part.

The rest of fields of this struct and type parameters are private.

Supertype hierarchy

PVector{V,A,B,...} <: AbstractVector{T}

with T=eltype(V).

source

Accessors

Constructors

PartitionedArrays.PVectorMethod
PVector{V}(undef,index_partition)
-PVector(undef,index_partition)

Create an instance of PVector with local uninitialized values stored in a vector of type V (which defaults to V=Vector{Float64}).

source
PartitionedArrays.pvectorMethod
pvector(f,index_partition)

Equivalent to

vector_partition = map(f,index_partition)
-PVector(vector_partition,index_partition)
source
PartitionedArrays.pvectorMethod
pvector([f,]I,V,index_partition;kwargs...) -> Task

Create an instance of PVector by setting arbitrary entries from each of the underlying parts. It returns a task that produces the instance of PVector, allowing latency hiding while performing the communications needed in its setup.

source
PartitionedArrays.prandFunction
prand([rng,][s,]index_partition)

Create a PVector object with uniform random values and the data partition in index_partition. The optional arguments have the same meaning and default values as in rand.

source
PartitionedArrays.prandnFunction
prandn([rng,][s,]index_partition)

Create a PVector object with normally distributed random values and the data partition in index_partition. The optional arguments have the same meaning and default values as in randn.

source

Assembly

PartitionedArrays.assemble!Method
assemble!([op,] a::PVector) -> Task

Transfer the ghost values to their owner part and insert them according to the insertion operation op (+ by default). It returns a task that produces a with updated values. After the transfer, the source ghost values are set to zero.

Examples

julia> using PartitionedArrays
+PVector · PartitionedArrays.jl

PVector

Type signature

PartitionedArrays.PVectorType
struct PVector{V,A,B,...}

PVector (partitioned vector) is a type representing a vector whose entries are distributed (a.k.a. partitioned) over different parts for distributed-memory parallel computations.

This type overloads numerous array-like operations with corresponding parallel implementations.

Properties

  • vector_partition::A
  • index_partition::B

vector_partition[i] contains the vector of local values of the i-th part in the data distribution. The first type parameter V corresponds to typeof(vector_partition[i]), i.e., the vector type used to store the local values. The item index_partition[i] implements the AbstractLocalIndices interface, providing information about the local, own, and ghost indices in the i-th part.

The rest of fields of this struct and type parameters are private.

Supertype hierarchy

PVector{V,A,B,...} <: AbstractVector{T}

with T=eltype(V).

source
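The layout described above can be inspected directly. In the sketch below, the uniform_partition call and the prand constructor (documented further down this page) are used only to obtain a concrete PVector, and own_to_global belongs to the AbstractLocalIndices interface:

using PartitionedArrays

rank = LinearIndices((2,))
index_partition = uniform_partition(rank,6)
a = prand(index_partition)

local_values(a)                      # one vector of local values per part
map(own_to_global,index_partition)   # global ids of the own entries of each part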

Accessors

Constructors

PartitionedArrays.PVectorMethod
PVector{V}(undef,index_partition)
+PVector(undef,index_partition)

Create an instance of PVector with local uninitialized values stored in a vector of type V (which defaults to V=Vector{Float64}).

source
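A short sketch of the undef constructors (the uniform_partition call is an assumption; the entries are uninitialized until explicitly set, here through own_values):

using PartitionedArrays

rank = LinearIndices((2,))
index_partition = uniform_partition(rank,6)

a = PVector{Vector{Float64}}(undef,index_partition)
b = PVector(undef,index_partition)   # same as above with the default V=Vector{Float64}

map(own_values(a)) do values
    values .= 0.0                    # give the uninitialized own entries a definite value
end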
PartitionedArrays.pvectorMethod
pvector(f,index_partition)

Equivalent to

vector_partition = map(f,index_partition)
+PVector(vector_partition,index_partition)
source
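A sketch of the function-based constructor, where f maps the local indices of each part to its vector of local values. The uniform_partition call is an assumption; local_to_global belongs to the AbstractLocalIndices interface:

using PartitionedArrays

rank = LinearIndices((2,))
index_partition = uniform_partition(rank,6)

# Each local value is set to its global index.
a = pvector(indices->Float64.(local_to_global(indices)),index_partition)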
PartitionedArrays.pvectorMethod
pvector([f,]I,V,index_partition;kwargs...) -> Task

Create an instance of PVector by setting arbitrary entries from each of the underlying parts. It returns a task that produces the instance of PVector, allowing latency hiding while performing the communications needed in its setup.

source
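Each part can set entries anywhere in the global vector, and the setup returns a task so that the required communication can overlap with other work. The indices and values below are illustrative, and duplicate contributions are assumed to be summed, in line with the default insertion operation of assemble!:

using PartitionedArrays

rank = LinearIndices((2,))
index_partition = uniform_partition(rank,6)

# Part 1 also contributes to entry 4, which is owned by part 2.
I = [[1,2,4],[4,5,6]]
V = [[1.0,1.0,0.5],[0.5,1.0,1.0]]
t = pvector(I,V,index_partition)   # a task producing the PVector
b = fetch(t)                       # entry 4 accumulates 0.5 + 0.5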
PartitionedArrays.prandFunction
prand([rng,][s,]index_partition)

Create a PVector object with uniform random values and the data partition in index_partition. The optional arguments have the same meaning and default values as in rand.

source
PartitionedArrays.prandnFunction
prandn([rng,][s,]index_partition)

Create a PVector object with normally distributed random values and the data partition in index_partition. The optional arguments have the same meaning and default values as in randn.

source
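The random constructors mirror rand and randn. A quick sketch (the uniform_partition call is an assumption; any AbstractRNG can be passed in the same position as for rand):

using PartitionedArrays
using Random

rank = LinearIndices((2,))
index_partition = uniform_partition(rank,6)

a = prand(index_partition)                        # uniform values, as in rand
b = prandn(index_partition)                       # normally distributed values, as in randn
c = prand(MersenneTwister(1234),index_partition)  # with an explicit rng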

Assembly

PartitionedArrays.assemble!Method
assemble!([op,] a::PVector) -> Task

Transfer the ghost values to their owner part and insert them according to the insertion operation op (+ by default). It returns a task that produces a with updated values. After the transfer, the source ghost values are set to zero.

Examples

julia> using PartitionedArrays
 
 julia> rank = LinearIndices((2,));
 
@@ -25,7 +25,7 @@
 julia> local_values(a)
 2-element Vector{Vector{Float64}}:
  [1.0, 1.0, 2.0, 0.0]
- [0.0, 2.0, 1.0, 1.0]
source
PartitionedArrays.consistent!Method
consistent!(a::PVector) -> Task

Make the local values of a globally consistent. I.e., the ghost values are updated with the corresponding own value from the part that owns the associated global id.

Examples

julia> using PartitionedArrays
+ [0.0, 2.0, 1.0, 1.0]
source
PartitionedArrays.consistent!Method
consistent!(a::PVector) -> Task

Make the local values of a globally consistent. I.e., the ghost values are updated with the corresponding own value from the part that owns the associated global id.

Examples

julia> using PartitionedArrays
 
 julia> rank = LinearIndices((2,));
 
@@ -49,4 +49,4 @@
 julia> local_values(a)
 2-element Vector{Vector{Int32}}:
  [1, 1, 1, 2]
- [1, 2, 2, 2]
source

Re-partition

+ [1, 2, 2, 2]
source

Re-partition

diff --git a/dev/refindex/index.html b/dev/refindex/index.html
index b7776a4e..5dcf8798 100644
--- a/dev/refindex/index.html
+++ b/dev/refindex/index.html
@@ -1,2 +1,2 @@
-Index · PartitionedArrays.jl

Index

+Index · PartitionedArrays.jl

Index

diff --git a/dev/usage/index.html b/dev/usage/index.html
index 66310fdd..92cc6510 100644
--- a/dev/usage/index.html
+++ b/dev/usage/index.html
@@ -71,4 +71,4 @@
 Section        max        min        avg
 ───────────────────────────────────────────
 Sleep    6.010e+00  6.010e+00  6.010e+00
-───────────────────────────────────────────
+───────────────────────────────────────────