Sensors (Basel). 2017 Sep 21;17(10). pii: E2172. doi: 10.3390/s17102172.

A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

Zhang J1,2,3,4,5, Tu H6,7, Ren Y8,9, Wan J10,11,12,13, Zhou L14,15, Li M16,17, Wang J18, Yu L19,20, Zhao C21,22, Zhang L23.

Author information

1. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. jilin.zhang@hdu.edu.cn.
2. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. jilin.zhang@hdu.edu.cn.
3. College of Electrical Engineering, Zhejiang University, Hangzhou 310058, China. jilin.zhang@hdu.edu.cn.
4. School of Information and Electronic Engineering, Zhejiang University of Science & Technology, Hangzhou 310023, China. jilin.zhang@hdu.edu.cn.
5. Zhejiang Provincial Engineering Center on Media Data Cloud Processing and Analysis, Hangzhou 310018, China. jilin.zhang@hdu.edu.cn.
6. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. 152050103@hdu.edu.cn.
7. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. 152050103@hdu.edu.cn.
8. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. yongjian.ren@hdu.edu.cn.
9. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. yongjian.ren@hdu.edu.cn.
10. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. wanjian@hdu.edu.cn.
11. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. wanjian@hdu.edu.cn.
12. School of Information and Electronic Engineering, Zhejiang University of Science & Technology, Hangzhou 310023, China. wanjian@hdu.edu.cn.
13. Zhejiang Provincial Engineering Center on Media Data Cloud Processing and Analysis, Hangzhou 310018, China. wanjian@hdu.edu.cn.
14. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. juliy26@hdu.edu.cn.
15. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. juliy26@hdu.edu.cn.
16. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. 161050009@hdu.edu.cn.
17. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. 161050009@hdu.edu.cn.
18. Supercomputing Center of Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China. 151050064@hdu.edu.cn.
19. Hithink RoyalFlush Information Network Co., Ltd., Hangzhou 310023, Zhejiang, China. wangjue@sccas.cn.
20. Financial Information Engineering Technology Research Center of Zhejiang Province, Hangzhou 310023, China. wangjue@sccas.cn.
21. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. yulifeng@myhexin.com.
22. Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. yulifeng@myhexin.com.
23. Computer Science Department, Beijing University of Civil Engineering and Architecture, Beijing 100044, China. lei.zhang@bucea.edu.cn.

Abstract

To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capability of sensors, together with network delays, greatly affect the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances training overhead against communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, preserves the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
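To illustrate the general idea behind a dynamically adjusted synchronization bound, the following is a minimal sketch (not the authors' implementation; all class and method names, and the adjustment rule itself, are hypothetical). It models a parameter server that tracks each worker's iteration clock and lets a fast worker run ahead of the slowest one only up to a staleness bound, which here is tightened as worker progress becomes more uneven:

```python
class ParameterServer:
    """Toy parameter server tracking each worker's iteration clock.

    Sketches dynamic bounded-staleness synchronization: a worker may
    proceed only if it is within a staleness bound of the slowest
    worker, and the bound itself adapts to observed worker skew.
    """

    def __init__(self, n_workers: int, base_staleness: int = 3):
        self.clocks = [0] * n_workers        # last reported iteration per worker
        self.base_staleness = base_staleness # staleness bound when workers are even

    def staleness_bound(self) -> int:
        # Hypothetical "performance monitoring" rule: the wider the spread
        # between fastest and slowest worker, the tighter the bound, so
        # fast workers are forced to wait for stragglers more often.
        spread = max(self.clocks) - min(self.clocks)
        return max(1, self.base_staleness - spread // 2)

    def try_sync(self, worker_id: int, iteration: int) -> bool:
        """Record the worker's clock; return True if it may proceed
        without blocking, False if it must wait for slower workers."""
        self.clocks[worker_id] = iteration
        return iteration - min(self.clocks) <= self.staleness_bound()
```

For example, with two workers and `base_staleness=3`, a worker one step ahead of the other is allowed to proceed, but a worker five steps ahead is held back until the straggler catches up. Fully synchronous (BSP) and fully asynchronous execution fall out as the special cases of a bound of 0 and an unbounded limit, respectively.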

KEYWORDS:

distributed machine learning; dynamic synchronous parallel strategy (DSP); parameter server (PS); sensors
