Controlling Model Complexity in Probabilistic Model-Based Dynamic Optimization of Neural Network Structures
Shota Saito and Shinichi Shirakawa: Controlling Model Complexity in Probabilistic Model-Based Dynamic Optimization of Neural Network Structures. In: Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning, Lecture Notes in Computer Science, vol. 11728. Springer, Cham. ※ Acceptance rate for oral presentation: 24.3% ≒ 120 / 494
Abstract
A method that simultaneously optimizes both the structure of a neural network and its connection weights in a single training loop can reduce the enormous computational cost of neural architecture search. We focus on probabilistic model-based dynamic neural network structure optimization, which considers a probability distribution over structure parameters and simultaneously optimizes both the distribution parameters and the connection weights using gradient methods. Since the existing algorithm searches only for structures that minimize the training loss, it may find overly complicated structures. In this paper, we propose introducing a penalty term to control the model complexity of the obtained structures. We formulate the penalty term using the number of weights or units and derive its analytical natural gradient. The proposed method minimizes the objective function with the injected penalty term by stochastic gradient descent. We apply the proposed method to the unit selection of a fully connected neural network and the connection selection of a convolutional neural network. The experimental results show that the proposed method can control model complexity while maintaining performance.
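To illustrate the idea described in the abstract, the following is a minimal sketch, not the authors' implementation, of a penalized natural-gradient update for a Bernoulli distribution over binary structure masks. It assumes the complexity penalty is the expected number of active elements, whose natural gradient under the Bernoulli distribution is the analytic term θ(1−θ); the names `natural_grad_step`, `epsilon`, and `eta_theta`, the baseline-subtracted utility, and the toy loss are all assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): complexity-penalized update of
# Bernoulli parameters `theta` that define a distribution over binary masks.
import numpy as np

rng = np.random.default_rng(0)

def natural_grad_step(theta, losses, masks, epsilon, eta_theta):
    """One natural-gradient update of the Bernoulli parameters `theta`.

    losses : (lam,) loss values of the sampled masks (lower is better)
    masks  : (lam, d) binary structure samples M ~ Bernoulli(theta)
    epsilon: penalty coefficient on the expected number of active elements
    """
    # Baseline-subtracted utility (an assumption; higher is better).
    u = -(losses - losses.mean())
    # Monte-Carlo estimate of the natural gradient of the expected loss
    # w.r.t. theta under the Bernoulli distribution: E[u(M) * (M - theta)].
    ng_loss = (u[:, None] * (masks - theta)).mean(axis=0)
    # Analytical natural gradient of the penalty E[sum_i M_i] = sum_i theta_i,
    # which is theta_i * (1 - theta_i) for each coordinate.
    ng_penalty = theta * (1.0 - theta)
    theta = theta + eta_theta * (ng_loss - epsilon * ng_penalty)
    # Keep theta away from 0/1 so every element retains a sampling chance.
    return np.clip(theta, 1.0 / theta.size, 1.0 - 1.0 / theta.size)

# Toy usage: the "loss" prefers masks close to a random target pattern,
# while the penalty pushes the expected number of active elements down.
d, lam = 20, 16
target = rng.integers(0, 2, size=d)
theta = np.full(d, 0.5)
for _ in range(200):
    masks = (rng.random((lam, d)) < theta).astype(float)
    losses = np.abs(masks - target).sum(axis=1)
    theta = natural_grad_step(theta, losses, masks, epsilon=0.05, eta_theta=0.1)
print(np.round(theta, 2))
```

In the actual method, the mask distribution selects units or connections of a neural network and the connection weights are updated jointly by stochastic gradient descent; the sketch above only shows how a complexity penalty with an analytic natural gradient could enter the distribution update.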