7.2. Intercommunicator Constructors



The current MPI interface provides only two intercommunicator construction routines: MPI_INTERCOMM_CREATE, which builds an intercommunicator from two intracommunicators, and MPI_INTERCOMM_MERGE, which builds an intracommunicator by merging the two groups of an intercommunicator.
The other communicator constructors, MPI_COMM_CREATE and MPI_COMM_SPLIT, currently apply only to intracommunicators. These operations in fact have well-defined semantics for intercommunicators [20].

In the following discussions, the two groups in an intercommunicator are called the left and right groups. A process in an intercommunicator is a member of either the left or the right group. From the point of view of that process, the group that the process is a member of is called the local group; the other group (relative to that process) is the remote group. The left and right group labels give us a way to describe the two groups in an intercommunicator that is not relative to any particular process (as the local and remote groups are).

In addition, the specification of collective operations (Section 4.1 of MPI-1) requires that all collective routines be called with matching arguments. For the intercommunicator extensions, this is weakened to requiring matching arguments only among members of the same local group.

MPI_COMM_CREATE(comm_in, group, comm_out)
IN comm_in    original communicator (handle)
IN group      group of processes to be in new communicator (handle)
OUT comm_out  new communicator (handle)

MPI::Intercomm MPI::Intercomm::Create(const Group& group) const
MPI::Intracomm MPI::Intracomm::Create(const Group& group) const

The C and Fortran language bindings are identical to those in MPI-1, so they are omitted here.

If comm_in is an intercommunicator, then the output communicator is also an intercommunicator where the local group consists only of those processes contained in group (see Figure 9 ). The group argument should only contain those processes in the local group of the input intercommunicator that are to be a part of comm_out. If either group does not specify at least one process in the local group of the intercommunicator, or if the calling process is not included in the group, MPI_COMM_NULL is returned.


Rationale.

In the case where either the left or right group is empty, a null communicator is returned instead of an intercommunicator with MPI_GROUP_EMPTY because the side with the empty group must return MPI_COMM_NULL. ( End of rationale.)


Figure 9:

Intercommunicator create using MPI_COMM_CREATE extended to intercommunicators. The input groups are those in the grey circle.
Example: The following example illustrates how the first process on the left side of an intercommunicator can be joined with all members on the right side to form a new intercommunicator.

        MPI_Comm  inter_comm, new_inter_comm; 
        MPI_Group local_group, group; 
        int       rank = 0; /* rank on left side to include in  
                               new inter-comm */ 
 
        /* Construct the original intercommunicator: "inter_comm" */ 
        ... 
 
        /* Construct the group of processes to be in new  
           intercommunicator */ 
        if (/* I'm on the left side of the intercommunicator */) { 
          MPI_Comm_group ( inter_comm, &local_group ); 
          MPI_Group_incl ( local_group, 1, &rank, &group ); 
          MPI_Group_free ( &local_group ); 
        } 
        else  
          MPI_Comm_group ( inter_comm, &group ); 
 
        MPI_Comm_create ( inter_comm, group, &new_inter_comm ); 
        MPI_Group_free( &group ); 

MPI_COMM_SPLIT(comm_in, color, key, comm_out)
IN comm_in    original communicator (handle)
IN color      control of subset assignment (integer)
IN key        control of rank assignment (integer)
OUT comm_out  new communicator (handle)

MPI::Intercomm MPI::Intercomm::Split(int color, int key) const
MPI::Intracomm MPI::Intracomm::Split(int color, int key) const

The C and Fortran language bindings are identical to those in MPI-1, so they are omitted here.

The result of MPI_COMM_SPLIT on an intercommunicator is that those processes on the left that share a color with processes on the right combine to create a new intercommunicator. The key argument determines the relative rank ordering of processes within each side of the new intercommunicator (see Figure 10 ). For those colors that are specified on only one side of the intercommunicator, MPI_COMM_NULL is returned. MPI_COMM_NULL is also returned to those processes that specify MPI_UNDEFINED as the color.


Figure 10:

Intercommunicator construction achieved by splitting an existing intercommunicator with MPI_COMM_SPLIT extended to intercommunicators.
Example (parallel client-server model): The following client code illustrates how clients on the left side of an intercommunicator can each be assigned to a single server from a pool of servers on the right side.

        /* Client code */ 
        MPI_Comm  multiple_server_comm; 
        MPI_Comm  single_server_comm; 
        int       color, rank, num_servers; 
         
        /* Create intercommunicator with clients and servers:  
           multiple_server_comm */ 
        ... 
         
        /* Find out the number of servers available */ 
        MPI_Comm_remote_size ( multiple_server_comm, &num_servers ); 
         
        /* Determine my color */ 
        MPI_Comm_rank ( multiple_server_comm, &rank ); 
        color = rank % num_servers; 
         
        /* Split the intercommunicator */ 
        MPI_Comm_split ( multiple_server_comm, color, rank,  
                         &single_server_comm ); 
The following is the corresponding server code:
        /* Server code */ 
        MPI_Comm  multiple_client_comm; 
        MPI_Comm  single_server_comm; 
        int       rank; 
 
        /* Create intercommunicator with clients and servers:  
           multiple_client_comm */ 
        ... 
         
        /* Split the intercommunicator for a single server per group 
           of clients */ 
        MPI_Comm_rank ( multiple_client_comm, &rank ); 
        MPI_Comm_split ( multiple_client_comm, rank, 0,  
                         &single_server_comm );   






MPI-2.0 of July 18, 1997