
DORAS | DCU Research Repository


FlowCon: elastic flow configuration for containerized deep learning applications

Zheng, Wenjia, Tynes, Michael (ORCID: 0000-0002-5007-1056), Gorelick, Henry, Mao, Ying, Cheng, Long (ORCID: 0000-0003-1638-059X) and Hou, Yantian (ORCID: 0000-0001-8295-6871) (2019) FlowCon: elastic flow configuration for containerized deep learning applications. In: Proceedings of the 48th International Conference on Parallel Processing (ICPP), 5-8 Aug 2019, Kyoto, Japan. ISBN 978-1-4503-6295-5

Abstract
An increasing number of companies are using data analytics to improve their products, services, and business processes. However, learning knowledge effectively from massive data sets always involves nontrivial computational resources. Most businesses thus choose to migrate their hardware needs to a remote cluster computing service (e.g., AWS) or to an in-house cluster facility, which is often run at its resource capacity. In such scenarios, where jobs compete for available resources, utilizing resources effectively to achieve high-performance data analytics becomes desirable. Although cluster resource management is a fruitful research area that has made many advances (e.g., YARN, Kubernetes), few projects have investigated how further optimizations can be made specifically for training multiple machine learning (ML) / deep learning (DL) models. In this work, we introduce FlowCon, a system which is able to monitor loss functions of ML/DL jobs at runtime, and thus to make decisions on resource configuration elastically. We present a detailed design and implementation of FlowCon, and conduct intensive experiments over various DL models. Our experimental results show that FlowCon can strongly improve DL job completion time and resource utilization efficiency, compared to existing approaches. Specifically, FlowCon can reduce the completion time by up to 42.06% for a specific job without sacrificing the overall makespan, in the presence of various DL job workloads.
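The abstract describes the core idea at a high level: FlowCon observes the loss curve of each containerized training job at runtime and elastically reassigns resources toward the jobs that are still making progress. The actual policy is detailed in the paper itself, not in this record, so the Python sketch below is purely illustrative: the `Job` class, the `progress()` metric (average recent loss decrease), and the proportional-share heuristic in `allocate_shares()` are assumptions introduced here for illustration, not FlowCon's published algorithm.

```python
# Illustrative sketch only: allocate CPU shares across containerized DL jobs
# in proportion to each job's recent loss-improvement rate. All names and the
# specific heuristic are assumptions, not the paper's actual mechanism.

from dataclasses import dataclass, field


@dataclass
class Job:
    name: str
    loss_history: list = field(default_factory=list)

    def record_loss(self, loss: float) -> None:
        self.loss_history.append(loss)

    def progress(self, window: int = 3) -> float:
        """Average per-step loss decrease over the last `window` steps."""
        recent = self.loss_history[-(window + 1):]
        if len(recent) < 2:
            return 1.0  # no signal yet: treat the job as still progressing
        drops = [max(prev - cur, 0.0) for prev, cur in zip(recent, recent[1:])]
        return sum(drops) / len(drops)


def allocate_shares(jobs, total_cpu_shares=1024, floor=0.05):
    """Split CPU shares in proportion to each job's loss-improvement rate,
    with a small floor so stalled jobs are never starved completely."""
    weights = [max(job.progress(), floor) for job in jobs]
    total = sum(weights)
    return {job.name: int(total_cpu_shares * w / total)
            for job, w in zip(jobs, weights)}


if __name__ == "__main__":
    fast = Job("resnet-training")
    slow = Job("converged-model")
    for loss in (2.0, 1.5, 1.1, 0.8):      # still improving quickly
        fast.record_loss(loss)
    for loss in (0.30, 0.29, 0.29, 0.29):  # nearly converged
        slow.record_loss(loss)

    print(allocate_shares([fast, slow]))
```

Running the script prints a per-job CPU-share split in which the still-improving job receives most of the shares while the nearly converged job is throttled to a small floor, which is the qualitative behaviour the abstract attributes to elastic, loss-aware resource configuration.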
Metadata
Item Type: Conference or Workshop Item (Paper)
Event Type: Conference
Refereed: Yes
Uncontrolled Keywords: cloud computing; deep learning; containerized application; resource management; high performance analytics
Subjects: UNSPECIFIED
DCU Faculties and Centres: DCU Faculties and Schools > Faculty of Engineering and Computing > School of Computing
Published in: Proceedings of the 48th International Conference on Parallel Processing (ICPP). Association for Computing Machinery (ACM). ISBN 978-1-4503-6295-5
Publisher: Association for Computing Machinery (ACM)
Official URL: http://dx.doi.org/10.1145/3337821.3337868
Copyright Information: © 2019 Association for Computing Machinery (ACM)
Use License: This item is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
Funders: European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 799066
ID Code: 24290
Deposited On: 19 Mar 2020 14:43 by Long Cheng. Last Modified: 19 Mar 2020 14:43
Documents

Full text available as: PDF (main.pdf, 1MB)
