Reference documentation for deal.II version 8.1.0
Utilities::MPI Namespace Reference

Classes

struct  MinMaxAvg
 
class  MPI_InitFinalize
 
class  Partitioner
 

Functions

unsigned int n_mpi_processes (const MPI_Comm &mpi_communicator)
 
unsigned int this_mpi_process (const MPI_Comm &mpi_communicator)
 
std::vector< unsigned int > compute_point_to_point_communication_pattern (const MPI_Comm &mpi_comm, const std::vector< unsigned int > &destinations)
 
MPI_Comm duplicate_communicator (const MPI_Comm &mpi_communicator)
 
template<typename T >
T sum (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , unsigned int N>
void sum (const T(&values)[N], const MPI_Comm &mpi_communicator, T(&sums)[N])
 
template<typename T >
void sum (const std::vector< T > &values, const MPI_Comm &mpi_communicator, std::vector< T > &sums)
 
template<typename T >
T max (const T &t, const MPI_Comm &mpi_communicator)
 
template<typename T , unsigned int N>
void max (const T(&values)[N], const MPI_Comm &mpi_communicator, T(&maxima)[N])
 
template<typename T >
void max (const std::vector< T > &values, const MPI_Comm &mpi_communicator, std::vector< T > &maxima)
 
MinMaxAvg min_max_avg (const double my_value, const MPI_Comm &mpi_communicator)
 

Detailed Description

A namespace for utility functions that abstract certain operations using the Message Passing Interface (MPI) or provide fallback operations in case deal.II is configured not to use MPI at all.

Function Documentation

unsigned int Utilities::MPI::n_mpi_processes ( const MPI_Comm &  mpi_communicator)

Return the number of MPI processes that exist in the given communicator object. If this is a sequential job, it returns 1.

unsigned int Utilities::MPI::this_mpi_process ( const MPI_Comm &  mpi_communicator)

Return the rank of the present MPI process in the space of processes described by the given communicator. This is a unique value for each process, between zero (inclusive) and the number of all processes (exclusive, as given by n_mpi_processes()).
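
As an illustration, here is a minimal sketch of how these two functions are typically used together. It assumes the usual deal.II header <deal.II/base/mpi.h> and uses the MPI_InitFinalize helper class listed above; MPI_COMM_WORLD is the customary choice of communicator, not a requirement of this interface.

  #include <deal.II/base/mpi.h>
  #include <iostream>

  int main (int argc, char **argv)
  {
    // Initializes MPI on construction, finalizes it on destruction.
    dealii::Utilities::MPI::MPI_InitFinalize mpi_init (argc, argv);

    const unsigned int n_procs =
      dealii::Utilities::MPI::n_mpi_processes (MPI_COMM_WORLD);
    const unsigned int my_rank =
      dealii::Utilities::MPI::this_mpi_process (MPI_COMM_WORLD);

    // Let a single process report, to avoid interleaved output.
    if (my_rank == 0)
      std::cout << "Running with " << n_procs << " processes" << std::endl;

    return 0;
  }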

std::vector<unsigned int> Utilities::MPI::compute_point_to_point_communication_pattern ( const MPI_Comm &  mpi_comm,
const std::vector< unsigned int > &  destinations 
)

Consider an unstructured communication pattern in which every process in an MPI universe wants to send some data to a subset of the other processors. For the communication to succeed, each recipient needs to know whom to expect messages from. This function computes that information.

Parameters
mpi_comm      A communicator that describes the processors that are going to communicate with each other.
destinations  The list of processors the current process wants to send information to. This list need not be sorted in any way. If it contains duplicate entries, that means that multiple messages are intended for a given destination.
Returns
A list of processors that have indicated that they want to send something to the current processor. The resulting list is not sorted. It may contain duplicate entries if processors enter the same destination more than once in their destinations list.
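
To illustrate, consider a hypothetical ring pattern (the function and variable names below are illustrative only): every process sends one message to the next higher rank, wrapping around at the end, and asks which ranks will send to it in turn.

  #include <deal.II/base/mpi.h>
  #include <vector>

  void ring_pattern_example ()
  {
    const unsigned int n_procs =
      dealii::Utilities::MPI::n_mpi_processes (MPI_COMM_WORLD);
    const unsigned int my_rank =
      dealii::Utilities::MPI::this_mpi_process (MPI_COMM_WORLD);

    // Each process intends to send to exactly one destination.
    std::vector<unsigned int> destinations;
    destinations.push_back ((my_rank + 1) % n_procs);

    // For this pattern, 'sources' contains the single entry
    // (my_rank + n_procs - 1) % n_procs.
    const std::vector<unsigned int> sources =
      dealii::Utilities::MPI::compute_point_to_point_communication_pattern
        (MPI_COMM_WORLD, destinations);
  }
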
MPI_Comm Utilities::MPI::duplicate_communicator ( const MPI_Comm &  mpi_communicator)

Given a communicator, generate a new communicator that contains the same set of processors but that has a different, unique identifier.

This functionality can be used to ensure that different objects, such as distributed matrices, each have unique communicators over which they can interact without interfering with each other.

When no longer needed, the communicator created here needs to be destroyed using MPI_Comm_free.
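
A short usage sketch (the function name is hypothetical; MPI_Comm_free is the standard MPI call mentioned above):

  #include <deal.II/base/mpi.h>

  void duplicate_example ()
  {
    // A private communicator whose messages cannot collide with
    // messages other code sends over MPI_COMM_WORLD.
    MPI_Comm my_comm =
      dealii::Utilities::MPI::duplicate_communicator (MPI_COMM_WORLD);

    // ... communicate over my_comm ...

    // As noted above, the duplicate must eventually be freed.
    MPI_Comm_free (&my_comm);
  }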

template<typename T >
T Utilities::MPI::sum ( const T &  t,
const MPI_Comm &  mpi_communicator 
)
inline

Return the sum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need the result, and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere, or whether you could get away with calling the current function and getting the result everywhere anyway.
This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.

Definition at line 447 of file mpi.h.
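
For example (a sketch; the function and variable names are illustrative only):

  #include <deal.II/base/mpi.h>

  double global_sum_example (const double local_value)
  {
    // Collective call: every process passes its local value and every
    // process receives the sum over all processes (MPI_Allreduce).
    return dealii::Utilities::MPI::sum (local_value, MPI_COMM_WORLD);
  }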

template<typename T , unsigned int N>
void Utilities::MPI::sum ( const T (&values)[N],
const MPI_Comm &  mpi_communicator,
T (&sums)[N] 
)
inline

Like the previous function, but take the sums over the elements of an array of length N. In other words, the i-th element of the results array is the sum over the i-th entries of the input arrays from each processor.

Definition at line 465 of file mpi.h.

template<typename T >
void Utilities::MPI::sum ( const std::vector< T > &  values,
const MPI_Comm &  mpi_communicator,
std::vector< T > &  sums 
)
inline

Like the previous function, but take the sums over the elements of a std::vector. In other words, the i-th element of the results array is the sum over the i-th entries of the input arrays from each processor.

Definition at line 483 of file mpi.h.
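
A sketch showing both the fixed-size array and the std::vector variants (the function and variable names are illustrative only):

  #include <deal.II/base/mpi.h>
  #include <vector>

  void elementwise_sum_example ()
  {
    // Array variant: sums[i] becomes the sum of values[i] over all
    // processes; T and N are deduced from the arguments.
    double values[2] = { 1.0, 2.0 };
    double sums[2];
    dealii::Utilities::MPI::sum (values, MPI_COMM_WORLD, sums);

    // Vector variant: input and output vectors of equal length.
    std::vector<double> v (10, 1.0);
    std::vector<double> v_sums (v.size());
    dealii::Utilities::MPI::sum (v, MPI_COMM_WORLD, v_sums);
  }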

template<typename T >
T Utilities::MPI::max ( const T &  t,
const MPI_Comm &  mpi_communicator 
)
inline

Return the maximum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note
Sometimes, not all processors need the result, and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere, or whether you could get away with calling the current function and getting the result everywhere anyway.
This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.

Definition at line 501 of file mpi.h.
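
For example (a sketch, computing the largest per-process error estimate; the names are illustrative only):

  #include <deal.II/base/mpi.h>

  double global_error_example (const double local_error_estimate)
  {
    // Every process receives the largest estimate held by any process.
    return dealii::Utilities::MPI::max (local_error_estimate,
                                        MPI_COMM_WORLD);
  }

The array and std::vector overloads below are used in the same way as the corresponding sum() overloads above.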

template<typename T , unsigned int N>
void Utilities::MPI::max ( const T (&values)[N],
const MPI_Comm &  mpi_communicator,
T (&maxima)[N] 
)
inline

Like the previous function, but take the maxima over the elements of an array of length N. In other words, the i-th element of the results array is the maximum of the i-th entries of the input arrays from each processor.

Definition at line 519 of file mpi.h.

template<typename T >
void Utilities::MPI::max ( const std::vector< T > &  values,
const MPI_Comm &  mpi_communicator,
std::vector< T > &  maxima 
)
inline

Like the previous function, but take the maximum over the elements of a std::vector. In other words, the i-th element of the results array is the maximum over the i-th entries of the input arrays from each processor.

Definition at line 537 of file mpi.h.

MinMaxAvg Utilities::MPI::min_max_avg ( const double  my_value,
const MPI_Comm &  mpi_communicator 
)

Return the sum, average, minimum, and maximum, as well as the processor ids of the minimum and maximum, as a collective operation on the given MPI communicator mpi_communicator. Each processor's value is given in my_value; the result is available on all processors.

Note
Sometimes, not all processors need the result, and in that case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere, or whether you could get away with calling the current function and getting the result everywhere anyway.
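
A usage sketch, reporting per-process wall time (the function and variable names are illustrative; the member names of the MinMaxAvg struct listed above, min, max, avg, min_index, and max_index, are assumed from that struct):

  #include <deal.II/base/mpi.h>
  #include <iostream>

  void report_wall_time (const double my_wall_time)
  {
    const dealii::Utilities::MPI::MinMaxAvg stats =
      dealii::Utilities::MPI::min_max_avg (my_wall_time, MPI_COMM_WORLD);

    // The result is available on every process; print it only once.
    if (dealii::Utilities::MPI::this_mpi_process (MPI_COMM_WORLD) == 0)
      std::cout << "wall time: min=" << stats.min
                << " (rank " << stats.min_index << ")"
                << ", max=" << stats.max
                << " (rank " << stats.max_index << ")"
                << ", avg=" << stats.avg << std::endl;
  }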