Neural Netw. 2019 Aug 2;119:74-84. doi: 10.1016/j.neunet.2019.07.011. [Epub ahead of print]

Nonlinear approximation via compositions.

Author information

1. Department of Mathematics, National University of Singapore, Singapore. Electronic address: matzuows@nus.edu.sg.
2. Department of Mathematics, National University of Singapore, Singapore. Electronic address: haizhao@nus.edu.sg.
3. Department of Mathematics, National University of Singapore, Singapore. Electronic address: zhangshijun@u.nus.edu.

Abstract

Given a function dictionary D and an approximation budget N ∈ ℕ, nonlinear approximation seeks the linear combination of the best N terms {T_n}_{1≤n≤N} ⊆ D to approximate a given function f with the minimum approximation error ε_{L,f} := min_{{g_n}⊂ℝ, {T_n}⊂D} ‖f − Σ_{n=1}^{N} g_n T_n‖. Motivated by the recent success of deep learning, we propose dictionaries with functions in the form of compositions, i.e., T = T^1 ∘ T^2 ∘ ⋯ ∘ T^L for all T ∈ D, and implement T using ReLU feed-forward neural networks (FNNs) with L hidden layers. We further quantify the improvement of the best N-term approximation rate in terms of N when L is increased from 1 to 2 or 3 to show the power of compositions. In the case when L > 3, our analysis shows that increasing L cannot improve the approximation rate in terms of N. In particular, for any function f on [0,1], regardless of its smoothness and even its continuity, if f can be approximated using a dictionary with L = 1 at the best N-term approximation rate ε_{L,f} = O(N^{−η}), we show that dictionaries with L = 2 can improve the best N-term approximation rate to ε_{L,f} = O(N^{−2η}). We also show that for Hölder continuous functions of order α on [0,1]^d, the application of a dictionary with L = 3 in nonlinear approximation can achieve an essentially tight best N-term approximation rate ε_{L,f} = O(N^{−2α/d}). Finally, we show that dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries with narrow and very deep FNNs for approximating Hölder continuous functions, provided the number of computer cores available in parallel computing is larger than N.
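To make the setting concrete, the following minimal Python sketch (not part of the paper, and not its constructive proof) illustrates the objects in the abstract: each dictionary term T_n is a small ReLU FNN with L hidden layers, and a target f on [0,1] is approximated by a linear combination of N such terms. The widths, the random weights, and the least-squares fit of the coefficients g_n are illustrative assumptions only; the paper analyzes the best attainable N-term rate rather than a specific fitting algorithm.

# Minimal sketch: N-term approximation with compositional (depth-L) ReLU dictionary terms.
# All architectural choices here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def make_term(L, width=8):
    """Return T = T^1 o ... o T^L, a ReLU FNN with L hidden layers and random weights."""
    dims = [1] + [width] * L + [1]
    layers = [(rng.standard_normal((dims[i], dims[i + 1])), rng.standard_normal(dims[i + 1]))
              for i in range(len(dims) - 1)]

    def T(x):                          # x: array of shape (n_samples, 1)
        h = x
        for k, (W, b) in enumerate(layers):
            h = h @ W + b
            if k < len(layers) - 1:    # ReLU on hidden layers only
                h = relu(h)
        return h[:, 0]
    return T

# Target: Hölder continuous but not smooth on [0,1] (an arbitrary illustrative choice).
f = lambda x: np.abs(np.sin(6 * np.pi * x)) ** 0.5

N, L = 50, 3                                  # approximation budget and depth
x = np.linspace(0.0, 1.0, 512)[:, None]       # sample points in [0,1]
terms = [make_term(L) for _ in range(N)]
A = np.stack([T(x) for T in terms], axis=1)           # (512, N) matrix of dictionary terms
g, *_ = np.linalg.lstsq(A, f(x[:, 0]), rcond=None)    # coefficients g_n of the N-term combination
err = np.max(np.abs(A @ g - f(x[:, 0])))              # sampled sup-norm approximation error
print(f"N = {N}, L = {L}, sampled L^inf error ~ {err:.3e}")

With randomly drawn terms the error is of course far from the best N-term error ε_{L,f}, which by definition minimizes over both the coefficients g_n and the choice of terms T_n in D; the sketch only fixes the notation.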

KEYWORDS:

Deep neural networks; Function composition; Hölder continuity; Nonlinear approximation; Parallel computing; ReLU activation function
