Partitioning SKA dataflows for optimal graph execution

Chen Wu, Andreas Wicenec, Rodrigo Tobar

Research output: Chapter in Book/Conference paper › Conference paper

1 Citation (Scopus)

Abstract

Optimizing data-intensive workflow execution is essential to many modern scientific projects such as the Square Kilometre Array (SKA), which will be the largest radio telescope in the world, collecting terabytes of data per second for the next few decades. At the core of the SKA Science Data Processor is the graph execution engine, scheduling tens of thousands of algorithmic components to ingest and transform millions of parallel data chunks in order to solve a series of large-scale inverse problems within the power budget. To tackle this challenge, we have developed the Data Activated Liu Graph Engine (DALiuGE) to manage data processing pipelines for several SKA pathfinder projects. In this paper, we discuss the DALiuGE graph scheduling subsystem. By extending previous studies on graph scheduling and partitioning, we lay the foundation on which we can develop polynomial time optimization methods that minimize both workflow execution time and resource footprint while satisfying resource constraints imposed by individual algorithms. We show preliminary results obtained from three radio astronomy data pipelines.
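
The scheduling objective sketched in the abstract can be illustrated with a toy example. The following is a minimal greedy list scheduler that assigns the tasks of a small DAG to a fixed number of workers so as to reduce makespan; it is an illustrative sketch only, not the DALiuGE partitioning algorithm, and all task names and durations are invented for the example.

```python
# Illustrative sketch: greedy list scheduling of a task DAG onto workers.
# NOT the DALiuGE algorithm; tasks, durations, and names are hypothetical.
from collections import defaultdict

def schedule(tasks, deps, num_workers):
    """tasks: {name: duration}; deps: {name: [predecessor names]}.
    Returns {name: (worker, start, finish)} via earliest-finish placement."""
    # Topological order via Kahn's algorithm.
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succs = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop(0)
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)

    # Place each task on the worker that frees up earliest, no sooner
    # than the finish time of its slowest predecessor.
    worker_free = [0.0] * num_workers
    placed = {}
    for t in order:
        dep_ready = max((placed[p][2] for p in deps.get(t, [])), default=0.0)
        w = min(range(num_workers), key=lambda i: worker_free[i])
        start = max(worker_free[w], dep_ready)
        finish = start + tasks[t]
        worker_free[w] = finish
        placed[t] = (w, start, finish)
    return placed

# A four-task diamond DAG scheduled on two workers:
tasks = {"ingest": 2.0, "calibrate": 3.0, "image": 3.0, "combine": 1.0}
deps = {"calibrate": ["ingest"], "image": ["ingest"],
        "combine": ["calibrate", "image"]}
plan = schedule(tasks, deps, num_workers=2)
makespan = max(finish for _, _, finish in plan.values())
print(makespan)  # 6.0: calibrate and image run in parallel after ingest
```

The paper's contribution goes beyond such greedy heuristics, adding resource-footprint minimisation under per-algorithm resource constraints, but the makespan objective the sketch computes is the same quantity being optimised.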

Original language: English
Title of host publication: Proceedings of the 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018 - Co-located with HPDC 2018
Place of publication: USA
Publisher: Association for Computing Machinery (ACM)
ISBN (Electronic): 9781450358637
DOI: 10.1145/3217880.3217886
Publication status: Published - 11 Jun 2018
Event: 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018 - Tempe, United States
Duration: 11 Jun 2018 - 11 Jun 2018

Conference

Conference: 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018
Country: United States
City: Tempe
Period: 11/06/18 - 11/06/18

Fingerprint

Scheduling
Engines
Pipelines
Radio astronomy
Radio telescopes
Inverse problems
Polynomials

Cite this

Wu, C., Wicenec, A., & Tobar, R. (2018). Partitioning SKA dataflows for optimal graph execution. In Proceedings of the 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018 - Co-located with HPDC 2018 (Article a6). USA: Association for Computing Machinery (ACM). https://doi.org/10.1145/3217880.3217886
@inproceedings{d163fdf861114024a81a8c88cf98997f,
title = "Partitioning SKA dataflows for optimal graph execution",
abstract = "Optimizing data-intensive workflow execution is essential to many modern scientific projects such as the Square Kilometre Array (SKA), which will be the largest radio telescope in the world, collecting terabytes of data per second for the next few decades. At the core of the SKA Science Data Processor is the graph execution engine, scheduling tens of thousands of algorithmic components to ingest and transform millions of parallel data chunks in order to solve a series of large-scale inverse problems within the power budget. To tackle this challenge, we have developed the Data Activated Liu Graph Engine (DALiuGE) to manage data processing pipelines for several SKA pathfinder projects. In this paper, we discuss the DALiuGE graph scheduling subsystem. By extending previous studies on graph scheduling and partitioning, we lay the foundation on which we can develop polynomial time optimization methods that minimize both workflow execution time and resource footprint while satisfying resource constraints imposed by individual algorithms. We show preliminary results obtained from three radio astronomy data pipelines.",
keywords = "Graph execution, Scheduling, Square Kilometre Array",
author = "Chen Wu and Andreas Wicenec and Rodrigo Tobar",
year = "2018",
month = "6",
day = "11",
doi = "10.1145/3217880.3217886",
language = "English",
booktitle = "Proceedings of the 9th Workshop on Scientific Cloud Computing, ScienceCloud 2018 - Co-located with HPDC 2018",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}
