Lab 9 (Optional): PLP
Deadline: Tuesday, July 30, 11:59:59 PM PT
Setup
This lab is optional. The material that Lab 9 covers is in scope for the final exam, so we still recommend that you complete it. You must complete this lab on the hive machines (not your local machine). See Lab 0 if you need to set up the hive machines again.
In your `labs` directory on the hive machine, pull any changes you may have made in past labs:

Still in your `labs` directory on the hive machine, pull the files for this lab with:
If you run into any `git` errors, please check out the common errors page.
Open MPI
The Open MPI project provides a way of writing programs which can be run on multiple processes. We can use its C libraries by calling their functions. Then, when we run the program, Open MPI will create a bunch of processes and run a copy of the code on each process. Here is a list of the most important functions for this class:
- `int MPI_Init(int* argc, char*** argv)` should be called at the start of the program, passing in the addresses of `argc` and `argv`.
- `int MPI_Finalize()` should be called at the end of the program.
- `int MPI_Comm_size(MPI_Comm comm, int *size)` gets the total number of processes running the program, and puts it in `size`.
- `int MPI_Comm_rank(MPI_Comm comm, int *rank)` gets the ID of the current process (0 to the total number of processes - 1) and puts it in `rank`.
- `int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)` (Man page)
    - `buf` is where the message to send should be stored
    - `count` is the number of elements within the message to send
    - `datatype` is the datatype of each element within the message
    - `dest` is the process ID of the recipient of the message
    - `tag` is the tag of the message
        - For the scope of this lab, feel free to use `0`.
    - `comm` is the communication context of the message
        - For the scope of this class, use `MPI_COMM_WORLD`.
- `int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)` (Man page)
    - `buf` is used to store the received message(s)
    - `count` is the maximum number of elements to receive
    - `datatype` is the datatype of each element within the received message
    - `source` is the process ID the message should come from
        - If you want to receive a message from any source, set the source to `MPI_ANY_SOURCE`. The actual source of the message can be found in the `MPI_SOURCE` field of the outputted status struct.
    - `tag` is the tag that the message must be sent with
        - If you want to receive a message with any tag, set the tag to `MPI_ANY_TAG`.
    - `comm` is the communication context that the message must be sent within
        - For the scope of this class, use `MPI_COMM_WORLD`.
    - `status` will store a status struct with additional information
        - If you don't need the information in the status struct (e.g. because you already know the source of the message), set the status address to `MPI_STATUS_IGNORE`.
For Open MPI, make sure you use `mpicc` (instead of `gcc`) to compile and `mpirun` to run your programs.
Exercise 1: Hello Open MPI
- In `ex1.c`, use the above functions to write a program that prints `Hello world` on multiple processes. For this exercise, since there's only one type of message, feel free to use `0` as the `tag` and `MPI_COMM_WORLD` as `comm`.
- Compile the program with `mpicc -Wall ex1.c -o ex1`.
- Run the program with `mpirun ./ex1 [num tasks]`, where `[num tasks]` is the number of tasks to run.
    - You can set the number of processes to use with `mpirun -np [num processes] ./ex1 [num tasks]`.
    - If you choose to run with more processes than physical cores (4 for the hive machines), you must pass the `--oversubscribe` flag (e.g. `mpirun --oversubscribe -np 100 ./ex1 100`).
- Play around with different numbers of processes and tasks and see how the output changes!
Feel free to refer to discussion 11 for an example of writing Open MPI code! The code structure will be extremely similar.
Submission
Save, commit, and push your work, then submit to the Lab 9 assignment on Gradescope.