Python translations
In your C program that reads DUMPI traces, add the cortex-python header:

#include <cortex/cortex-python.h>

After the call to cortex_dumpi_start_stream_read, add the following:

cortex_python_set_module("MyAwesomeTranslator","MyTranslator");
cortex_undumpi_read_stream(profile, &cbacks, CORTEX_PYTHON_TRANSLATION, NULL);
Link your program with libcortex-python (in addition to libcortex; you do not need to link against libcortex-mpich).
In the C code above, you have indicated that your translations are provided in MyAwesomeTranslator.py, within a class called MyTranslator. Let's take a look at an example of such a file:
import cortex

class MyTranslator():

    def MPI_Bcast(self, thread, **args):
        dtype = args['datatype']
        root = args['root']
        comm = args['comm']
        count = args['count']
        print "MPI_Bcast called in Python, root =", root
        if not cortex.is_mpi_comm_world(comm):
            print "Communicator is not MPI_COMM_WORLD, not translating"
            return
        if thread != root:
            s = cortex.MPI_Status()
            cortex.MPI_Recv(thread, count=count, datatype=dtype, source=root, tag=1234, comm=comm, status=s)
        else:
            size = cortex.world_size()
            for i in range(size):
                if i != thread:
                    cortex.MPI_Send(thread, count=count, datatype=dtype, dest=i, tag=1234, comm=comm)
First we import cortex to access its functions. Then we define the MyTranslator class; Cortex will create an instance of this class to translate MPI events. To provide a translation function, for instance for MPI_Bcast, define an MPI_Bcast method in this class.
The prototype for such translation functions is always the same: MPI_Something(self, thread, **args)
A useful way of knowing what arguments are passed to a particular function is to display the content of args. Remember that this is a trace reader, not an actual implementation of MPI: you are processing an event, that is, the full representation of an MPI call and its results.
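To illustrate the inspection trick above, here is a minimal sketch (not part of the Cortex API): a translator whose MPI_Bcast simply reports the argument names it receives. The handle values used to exercise it below are hypothetical.

```python
# Sketch: a translator that reveals which keyword arguments Cortex
# passes for an event, so you can discover a function's signature.
class InspectingTranslator:
    def MPI_Bcast(self, thread, **args):
        names = sorted(args.keys())
        print("MPI_Bcast args:", names)
        return names

# Simulated call with hypothetical handle values, just to show the idea:
t = InspectingTranslator()
t.MPI_Bcast(0, datatype=3, root=0, comm=2, count=16)
# prints: MPI_Bcast args: ['comm', 'count', 'datatype', 'root']
```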
To post MPI events, use cortex.MPI_Something(thread, ...), where the list of expected arguments is provided here. In the code above, for example, we translate an MPI_Bcast into an algorithm where each non-root process calls MPI_Recv and the root process loops over all destinations and calls MPI_Send for each of them.
The cortex module also provides a few helper functions:
- cortex.is_mpi_comm_world(comm) returns true if the provided communicator handle is MPI_COMM_WORLD;
- cortex.is_mpi_comm_self(comm) returns true if the provided communicator handle is MPI_COMM_SELF;
- cortex.is_mpi_comm_null(comm) returns true if the provided communicator handle is MPI_COMM_NULL;
- cortex.world_size() returns the number of processes in MPI_COMM_WORLD;
- cortex.is_basic_datatype(t) returns true if the provided datatype handle is one of MPI's basic datatypes;
- cortex.datatype_size(t) returns the size of a basic datatype, in bytes (0 for user-defined datatypes).
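These helpers combine naturally, for instance to estimate the traffic an event represents. The sketch below stubs out cortex.datatype_size and cortex.world_size with hypothetical values (in a real translator you would call the cortex module instead) and computes the bytes the root sends in the naive linear broadcast shown earlier.

```python
# Stand-ins for cortex helpers; handle-to-size mapping is hypothetical.
def datatype_size(t):
    return {0: 4, 1: 8}.get(t, 0)   # e.g. a 4-byte int, an 8-byte double

def world_size():
    return 4                        # pretend MPI_COMM_WORLD has 4 ranks

def bcast_bytes(count, datatype):
    # In the linear broadcast, the root sends one message of
    # count * datatype_size bytes to each of the other ranks.
    return count * datatype_size(datatype) * (world_size() - 1)

print(bcast_bytes(16, 1))  # 16 doubles to 3 peers -> 384
```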
Note that most MPI types such as datatypes, communicators, groups, etc. are defined as integer handles. With the exception of datatypes, for which Cortex does the bookkeeping itself, Cortex does not track these handles: if you see an MPI_Comm_dup event, it is your task (if you find it necessary) to record that the output handle is a copy of the input handle. In other words, if you want to keep track of which communicators are being created and how, you need to provide translation functions for the communicator manipulation functions (MPI_Comm_dup, MPI_Comm_split, etc.).
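Such bookkeeping can be as simple as a dictionary in your translator class. The sketch below assumes the MPI_Comm_dup event carries 'comm' and 'newcomm' arguments (print args to check the real names) and uses hypothetical handle values.

```python
# Sketch of communicator bookkeeping in a translator class.
class TrackingTranslator:
    def __init__(self):
        self.dup_of = {}   # maps a duplicated handle to its original

    def MPI_Comm_dup(self, thread, **args):
        # Argument names are an assumption; inspect args for the real ones.
        self.dup_of[args['newcomm']] = args['comm']

    def origin(self, comm):
        # Follow dup chains back to the communicator they started from.
        while comm in self.dup_of:
            comm = self.dup_of[comm]
        return comm

t = TrackingTranslator()
t.MPI_Comm_dup(0, comm=1, newcomm=5)   # hypothetical handles
t.MPI_Comm_dup(0, comm=5, newcomm=9)
print(t.origin(9))  # -> 1
```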
Remember, once again, that you are translating a trace, not actually executing an MPI code. cortex.MPI_Recv won't pause until someone (who would that be anyway?) sends something. It simply generates an MPI_Recv event.
Note: You can pass NULL for the class name (second argument of cortex_python_set_module), in which case the translation functions are expected to be defined in the global scope instead of inside a class.
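In that case, a translation module is just a file of top-level functions with the same prototype, minus self. A minimal sketch (function body is illustrative only):

```python
# Module-scope translation function, used when the class name is NULL.
# Same signature as the method form, without self.
def MPI_Barrier(thread, **args):
    print("MPI_Barrier on thread", thread)

MPI_Barrier(0, comm=2)  # hypothetical handle, just to exercise it
```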