In the previous post we introduced the allgather operations in mpi4py. Below we introduce the allreduce operation.
For an allreduce on an intra-communicator, every process in the group acts as the root and performs a reduction; when the operation completes, the receive buffers of all processes hold identical data. The operation is equivalent to first performing a reduce with some process as the root and then performing a broadcast, after which every process holds the same result.
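This equivalence can be illustrated directly. The following is a minimal sketch that reproduces the semantics of allreduce by combining reduce and broadcast; it only illustrates the equivalence, since a real MPI allreduce uses more efficient algorithms internally:

# reduce_then_bcast.py
# A sketch: emulate allreduce as reduce-then-broadcast.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# reduce to root 0 (result is None on non-root processes),
# then broadcast the result so every process holds it
result = comm.reduce(rank, op=MPI.SUM, root=0)
result = comm.bcast(result, root=0)
# every process now holds sum(0..size-1), the same value that
# comm.allreduce(rank, op=MPI.SUM) would return
print('rank %d has %s' % (rank, result))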
For an allreduce on an inter-communicator, both of its associated groups, group A and group B, must make the call. The operation stores the reduction result of the data contributed by the processes in group A into every process of group B, and vice versa.
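To make this concrete, here is a hedged sketch that builds an inter-communicator by splitting COMM_WORLD into two halves (even ranks as group A, odd ranks as group B; the leader choices and the tag value are arbitrary assumptions for illustration) and performs an Allreduce across it:

# inter_allreduce.py
# A sketch, assuming an even number of processes (e.g. mpiexec -n 4):
# split COMM_WORLD into two halves and allreduce across the resulting
# inter-communicator.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

color = rank % 2                      # group A: even ranks, group B: odd ranks
local_comm = comm.Split(color, rank)  # intra-communicator of my own group
# the local leader is rank 0 within each half; the remote leader is the
# lowest world rank of the other half (0 or 1); tag 12 is arbitrary
inter_comm = local_comm.Create_intercomm(0, comm, 1 - color, tag=12)

send_buf = np.array([rank], dtype='i')
recv_buf = np.empty(1, dtype='i')
inter_comm.Allreduce(send_buf, recv_buf, op=MPI.SUM)
# each process now holds the sum of the world ranks of the *other* group
print('world rank %d got %s' % (rank, recv_buf))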
Method interface
The method interfaces of the allreduce operations in mpi4py (methods of the MPI.Comm class) are:
allreduce(self, sendobj, op=SUM)
Allreduce(self, sendbuf, recvbuf, Op op=SUM)
The parameters of these methods are similar to those of the corresponding reduce methods, except that the allreduce operations take no root parameter.
For Allreduce on an intra-communicator object, the sendbuf argument may be set to MPI.IN_PLACE. In that case each process takes its input data from its own receive buffer and, after the reduction, replaces the buffer's contents with the result.
Example
The following example demonstrates the use of the allreduce operations.
# allreduce.py
"""
Demonstrates the usage of allreduce, Allreduce.
Run this with 4 processes like:
$ mpiexec -n 4 python allreduce.py
"""
import numpy as np
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
# ------------------------------------------------------------------------------
# reduce generic object from each process by using allreduce
if rank == 0:
    send_obj = 0.5
elif rank == 1:
    send_obj = 2.5
elif rank == 2:
    send_obj = 3.5
else:
    send_obj = 1.5
# reduce by SUM: 0.5 + 2.5 + 3.5 + 1.5 = 8.0
recv_obj = comm.allreduce(send_obj, op=MPI.SUM)
print('allreduce by SUM: rank %d has %s' % (rank, recv_obj))
# reduce by MAX: max(0.5, 2.5, 3.5, 1.5) = 3.5
recv_obj = comm.allreduce(send_obj, op=MPI.MAX)
print('allreduce by MAX: rank %d has %s' % (rank, recv_obj))
# ------------------------------------------------------------------------------
# reduce numpy arrays from each process by using Allreduce
send_buf = np.array([0, 1], dtype='i') + 2 * rank
recv_buf = np.empty(2, dtype='i')
# Reduce by SUM: [0, 1] + [2, 3] + [4, 5] + [6, 7] = [12, 16]
comm.Allreduce(send_buf, recv_buf, op=MPI.SUM)
print('Allreduce by SUM: rank %d has %s' % (rank, recv_buf))
# ------------------------------------------------------------------------------
# reduce numpy arrays from each process by using Allreduce with MPI.IN_PLACE
recv_buf = np.array([0, 1], dtype='i') + 2 * rank
# Reduce by SUM with MPI.IN_PLACE: [0, 1] + [2, 3] + [4, 5] + [6, 7] = [12, 16]
# recv_buf used as both send buffer and receive buffer
comm.Allreduce(MPI.IN_PLACE, recv_buf, op=MPI.SUM)
print('Allreduce by SUM with MPI.IN_PLACE: rank %d has %s' % (rank, recv_buf))
The results of running it are as follows:
$ mpiexec -n 4 python allreduce.py
allreduce by SUM: rank 2 has 8.0
allreduce by SUM: rank 0 has 8.0
allreduce by SUM: rank 1 has 8.0
allreduce by SUM: rank 3 has 8.0
allreduce by MAX: rank 3 has 3.5
allreduce by MAX: rank 2 has 3.5
allreduce by MAX: rank 0 has 3.5
Allreduce by SUM: rank 0 has [12 16]
allreduce by MAX: rank 1 has 3.5
Allreduce by SUM: rank 1 has [12 16]
Allreduce by SUM with MPI.IN_PLACE: rank 0 has [12 16]
Allreduce by SUM: rank 3 has [12 16]
Allreduce by SUM with MPI.IN_PLACE: rank 3 has [12 16]
Allreduce by SUM: rank 2 has [12 16]
Allreduce by SUM with MPI.IN_PLACE: rank 2 has [12 16]
Allreduce by SUM with MPI.IN_PLACE: rank 1 has [12 16]
Above we introduced the allreduce operations in mpi4py. In the next post we will introduce the reduce-scatter operation.