CoMPI Project Page at UIUC (Co-PI Torsten Hoefler)

Description

This is the page for the ASCR DOE X-Stack software research project "Compiled MPI" (short: CoMPI) at the University of Illinois at Urbana-Champaign (UIUC), led by Torsten Hoefler. It is a joint project with Lawrence Livermore National Laboratory (LLNL, Co-PIs Dan Quinlan and Greg Bronevetsky) and Indiana University (IU, Co-PI Andrew Lumsdaine). UIUC and IU are responsible for runtime optimization and integration, while LLNL handles the compiler infrastructure based on ROSE and the transformations.

Grad Student RAs Needed

We are actively looking for grad student RAs at UIUC for the following two sub-projects: (1) datatype optimizations and (2) communication optimizations. Short descriptions follow below; please consult Torsten Hoefler if you have questions or are interested in working on any of the projects.

Communication Optimization

This project deals with the static and dynamic optimization of communication schedules. A communication schedule is a set of communication operations together with dependencies that define the order of their execution; a set of such operations and dependencies forms a global communication graph. The goal of this project is to optimize the communication graph in a given model (e.g., LogGP) and to compare the quality of the solutions. For example, a broadcast from node 0 to nodes 1..3 can be expressed as the set {(0,1), (0,2), (0,3)} (where a tuple (x,y) represents communication from x to y) or as {(0,1), (1,3), (0,2)} in a tree-like shape. The broadcast tree is more efficient in this trivial example because node 1 can forward the data to node 3 while node 0 sends to node 2, so only two communication rounds are needed instead of three sequential sends from node 0 (a small cost-model sketch illustrating this comparison appears at the end of this page). The project aims to develop model-based techniques for optimizing such communication operations represented in the tuple form above. The main work is to develop algorithms that operate on this tuple form and to prove their optimality under well-known communication models such as LogGP. Reaching optimality is generally very hard; we plan to follow three avenues: (1) analytical algorithms and proofs, (2) well-known optimization methods (linear or integer optimization), and (3) heuristics and learning-based methods. The results should be implemented in an MPI-like library. The student working on this project should know what MPI is, be familiar with the C and C++ programming languages, and be very familiar with linear optimization, (mixed) integer programming, and basic network models. The student should have a good understanding of the papers Alexandrov et al., "LogGP: incorporating long messages into the LogP model—one step closer towards a realistic model for parallel computation", and Bruck et al., "Efficient Algorithms for All-to-All Communications in Multi-Port Message-Passing Systems". For previous work in this area see reference [3].

MPI Shared Memory Optimization

The goal is to optimize MPI implementations for shared-memory supercomputers (an illustrative shared-memory communication sketch appears at the end of this page). The student working on this project should know MPI and be very familiar with the C/C++ programming languages and computer architecture. Please contact Torsten Hoefler for more information if you are a student at UIUC and interested in this project. For previous work in this area see references [1,2].

References
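
Sketch: evaluating a tuple-form schedule in a LogP-style model

As referenced in the Communication Optimization section above, the following is a minimal sketch, not part of the project code, of how a schedule given in the tuple form can be evaluated under a simplified LogP model. The schedule_time helper and all parameter values are assumptions made up for illustration; which schedule wins depends on those parameters, which is exactly what model-based schedule optimization has to account for.

// A minimal sketch (assumed helper names and made-up parameters, not project
// code): evaluate the completion time of a broadcast schedule given in the
// tuple form above under a simplified LogP model.
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

struct LogP { double L, o, g; };  // latency, per-message overhead, gap

// ready[v]     = time at which node v holds the data (the root holds it at t = 0)
// next_send[v] = earliest time node v may start its next send (enforces the gap)
double schedule_time(const std::vector<std::pair<int, int>>& tuples,
                     int nodes, int root, const LogP& m) {
    const double INF = 1e30;  // marks "has not received the data yet"
    std::vector<double> ready(nodes, INF), next_send(nodes, 0.0);
    ready[root] = 0.0;
    double finish = 0.0;
    for (auto [src, dst] : tuples) {   // tuples assumed listed in dependency order
        double start   = std::max(ready[src], next_send[src]);
        next_send[src] = start + std::max(m.o, m.g);   // sender stays busy for o/g
        ready[dst]     = start + m.o + m.L + m.o;      // send + wire + receive
        finish         = std::max(finish, ready[dst]);
    }
    return finish;
}

int main() {
    LogP m{2.0, 1.0, 4.0};  // L, o, g: illustrative values only
    std::vector<std::pair<int, int>> flat{{0, 1}, {0, 2}, {0, 3}};  // root sends to all
    std::vector<std::pair<int, int>> tree{{0, 1}, {1, 3}, {0, 2}};  // tree-shaped schedule
    std::printf("flat: %.1f  tree: %.1f\n",
                schedule_time(flat, 4, 0, m),
                schedule_time(tree, 4, 0, m));
    return 0;
}

For these made-up parameters the tree schedule finishes earlier (8 vs. 12 time units); with a small gap g the flat schedule can win instead, which is why the schedule should be chosen from the model rather than fixed a priori.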
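Sketch: intra-node communication through shared memory

For the MPI Shared Memory Optimization sub-project, the sketch below is again illustrative only and not part of the project or of any particular MPI implementation's internals. It shows one standard MPI-3 mechanism, shared-memory windows, through which ranks on the same node exchange data with plain loads and stores instead of explicit messages; this is the kind of intra-node path that shared-memory optimizations exploit.

// A minimal sketch (illustrative only, not project code or any particular MPI
// implementation's internals): ranks on the same node exchange data through an
// MPI-3 shared-memory window with plain loads and stores instead of messages.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Group the ranks that share physical memory (one sub-communicator per node).
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);
    int rank, size;
    MPI_Comm_rank(node, &rank);
    MPI_Comm_size(node, &size);

    // Every rank contributes one int to a window that is directly addressable
    // by all other ranks on the same node.
    int* mine = nullptr;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node, &mine, &win);

    MPI_Win_fence(0, win);
    *mine = rank * 100;        // write the local slot
    MPI_Win_fence(0, win);     // make the stores visible to the other ranks

    // Read the neighbour's slot with a plain load instead of a receive.
    int peer = (rank + 1) % size;
    MPI_Aint sz;
    int disp;
    int* theirs = nullptr;
    MPI_Win_shared_query(win, peer, &sz, &disp, &theirs);
    std::printf("rank %d sees %d from rank %d\n", rank, *theirs, peer);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}

MPI_Win_fence is used here as the simplest possible synchronization; a tuned shared-memory path would avoid such node-wide synchronization wherever possible.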