Abstract
Optimizing data-intensive workflow execution is essential to many modern scientific projects such as the Square Kilometre Array (SKA), which will be the largest radio telescope in the world, collecting terabytes of data per second for the next few decades. At the core of the SKA Science Data Processor is the graph execution engine, scheduling tens of thousands of algorithmic components to ingest and transform millions of parallel data chunks in order to solve a series of large-scale inverse problems within the power budget. To tackle this challenge, we have developed the Data Activated Liu Graph Engine (DALiuGE) to manage data processing pipelines for several SKA pathfinder projects. In this paper, we discuss the DALiuGE graph scheduling subsystem. By extending previous studies on graph scheduling and partitioning, we lay the foundation on which we can develop polynomial time optimization methods that minimize both workflow execution time and resource footprint while satisfying resource constraints imposed by individual algorithms. We show preliminary results obtained from three radio astronomy data pipelines.
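The abstract describes scheduling a DAG of algorithmic components onto limited resources to minimize execution time. As an illustrative sketch only (not DALiuGE's actual scheduling algorithm, which the paper develops), a classic polynomial-time heuristic of this kind is list scheduling by "upward rank", where each task is prioritized by its longest downstream path and greedily placed on the earliest-free worker; all task names and costs below are hypothetical:

```python
def schedule(tasks, deps, num_workers):
    """Greedy upward-rank list scheduler on identical workers.

    tasks: {name: cost}; deps: {name: [predecessor names]}.
    Returns {name: finish_time}.
    """
    succ = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)

    # Upward rank: a task's cost plus the heaviest path to a sink.
    # With positive costs, descending rank is a valid topological order.
    rank = {}
    def upward(t):
        if t not in rank:
            rank[t] = tasks[t] + max((upward(s) for s in succ[t]), default=0)
        return rank[t]
    order = sorted(tasks, key=upward, reverse=True)

    worker_free = [0.0] * num_workers   # time each worker becomes idle
    finish = {}                         # task name -> finish time
    for t in order:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        w = min(range(num_workers), key=lambda i: worker_free[i])
        start = max(ready, worker_free[w])
        finish[t] = start + tasks[t]
        worker_free[w] = finish[t]
    return finish

# Hypothetical four-stage pipeline on two workers.
sched = schedule(
    {"ingest": 2, "calib": 3, "grid": 4, "image": 1},
    {"calib": ["ingest"], "grid": ["ingest"], "image": ["calib", "grid"]},
    num_workers=2)
makespan = max(sched.values())  # 7: the critical path ingest→grid→image
```

This sketch optimizes makespan only; the paper's contribution additionally minimizes resource footprint under per-algorithm resource constraints, which a plain list scheduler does not capture.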
Original language | English |
---|---|
Title of host publication | Proceedings of the 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018 - Co-located with HPDC 2018 |
Place of Publication | USA |
Publisher | Association for Computing Machinery (ACM) |
ISBN (Electronic) | 9781450358637 |
DOIs | |
Publication status | Published - 11 Jun 2018 |
Event | 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018, Tempe, United States, 11 Jun 2018 → 11 Jun 2018 |
Conference
Conference | 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018 |
---|---|
Country/Territory | United States |
City | Tempe |
Period | 11/06/18 → 11/06/18 |