Commit b146883

Merge pull request #15 from marius311/redirect
add stdout_to_master/stderr_to_master options, and minor readme fix
2 parents: 66df256 + 04a89ee

File tree: 2 files changed, +10 -8 lines

README.md
Lines changed: 2 additions & 2 deletions

@@ -91,14 +91,14 @@ mpirun -np 5 julia cman-transport.jl TCP

 This launches a total of 5 processes, mpi rank 0 is the julia pid 1. mpi rank 1 is julia pid 2 and so on.

-The program must call `MPI.start(TCP_TRANSPORT_ALL)` with argument `TCP_TRANSPORT_ALL`.
+The program must call `MPIClusterManagers.start_main_loop(TCP_TRANSPORT_ALL)` with argument `TCP_TRANSPORT_ALL`.

 On mpi rank 0, it returns a `manager` which can be used with `@mpi_do`
 On other processes (i.e., the workers) the function does not return


 ### MPIManager: MPI transport - all processes execute MPI code

-`MPI.start` must be called with option `MPI_TRANSPORT_ALL` to use MPI as transport.
+`MPIClusterManagers.start_main_loop` must be called with option `MPI_TRANSPORT_ALL` to use MPI as transport.
 ```
 mpirun -np 5 julia cman-transport.jl MPI
 ```
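For context, here is a minimal sketch of the kind of `cman-transport.jl` script this README change points at, written against the renamed `MPIClusterManagers.start_main_loop` API; the `ARGS`-based transport selection and the `stop_main_loop` teardown are illustrative assumptions, not lines from this commit:

```julia
# cman-transport.jl (sketch); run as: mpirun -np 5 julia cman-transport.jl TCP
using MPIClusterManagers, Distributed

# Pick the transport mode from the first command-line argument ("TCP" or "MPI").
transport = get(ARGS, 1, "TCP") == "MPI" ? MPI_TRANSPORT_ALL : TCP_TRANSPORT_ALL

# On MPI rank 0 (Julia pid 1) this returns a manager; on the workers it never returns.
manager = MPIClusterManagers.start_main_loop(transport)

# Only rank 0 reaches this point; use the manager to run code on every worker.
@mpi_do manager begin
    using MPI
    println("hello from MPI rank $(MPI.Comm_rank(MPI.COMM_WORLD))")
end

# Shut the workers down cleanly.
MPIClusterManagers.stop_main_loop(manager)
```

Launched with `mpirun -np 5 julia cman-transport.jl TCP`, this gives the layout the README describes: rank 0 becomes Julia pid 1 and the remaining four ranks become workers.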

src/mpimanager.jl
Lines changed: 8 additions & 6 deletions

@@ -191,14 +191,14 @@ function Distributed.launch(mgr::MPIManager, params::Dict,
 end

 # Entry point for MPI worker processes for MPI_ON_WORKERS and TCP_TRANSPORT_ALL
-setup_worker(host, port) = setup_worker(host, port, nothing)
-function setup_worker(host, port, cookie)
+setup_worker(host, port; kwargs...) = setup_worker(host, port, nothing; kwargs...)
+function setup_worker(host, port, cookie; stdout_to_master=true, stderr_to_master=true)
     !MPI.Initialized() && MPI.Init()
     # Connect to the manager
     io = connect(IPv4(host), port)
     wait_connected(io)
-    redirect_stdout(io)
-    redirect_stderr(io)
+    stdout_to_master && redirect_stdout(io)
+    stderr_to_master && redirect_stderr(io)

     # Send our MPI rank to the manager
     rank = MPI.Comm_rank(MPI.COMM_WORLD)
@@ -326,7 +326,9 @@ end

 # Enter the MPI cluster manager's main loop (does not return on the workers)
 function start_main_loop(mode::TransportMode=TCP_TRANSPORT_ALL;
-                         comm::MPI.Comm=MPI.COMM_WORLD)
+                         comm::MPI.Comm=MPI.COMM_WORLD,
+                         stdout_to_master=true,
+                         stderr_to_master=true)
     !MPI.Initialized() && MPI.Init()
     @assert MPI.Initialized() && !MPI.Finalized()
     if mode == TCP_TRANSPORT_ALL
@@ -359,7 +361,7 @@ function start_main_loop(mode::TransportMode=TCP_TRANSPORT_ALL;
             (obj, status) = MPI.recv(0, 0, comm)
             (host, port, cookie) = obj
             # Call the regular worker entry point
-            setup_worker(host, port, cookie) # does not return
+            setup_worker(host, port, cookie, stdout_to_master=stdout_to_master, stderr_to_master=stderr_to_master) # does not return
         end
     elseif mode == MPI_TRANSPORT_ALL
         comm = MPI.Comm_dup(comm)
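A hedged usage sketch of the new keyword options this commit adds: passing `stdout_to_master=false` and `stderr_to_master=false` should keep each worker writing to its own local streams instead of forwarding output over the socket to rank 0; the surrounding script and the Open MPI `--output-filename` remark are illustrative assumptions:

```julia
using MPIClusterManagers, Distributed

# Keep worker output local instead of redirecting it to the master (rank 0).
manager = MPIClusterManagers.start_main_loop(TCP_TRANSPORT_ALL;
                                             stdout_to_master=false,
                                             stderr_to_master=false)

# Rank 0 continues here; worker prints now land wherever mpirun directs each
# rank's stdout/stderr (e.g. per-rank files via Open MPI's --output-filename).
@mpi_do manager println("this line stays on the worker's own stdout")

MPIClusterManagers.stop_main_loop(manager)
```

Since both keywords default to `true`, existing callers keep the old redirect-everything-to-master behavior unchanged.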
