Viacheslav Moskalenko, Sumy State University
Keywords: neural gas, convolutional neural network, sparse coding, information criterion, classifier


Technologies for computer analysis of visual information based on convolutional neural networks are widely used, but there is still a shortage of practical algorithms for continuous unsupervised training and retraining of neural networks in real time, which limits their effectiveness under nonstationary conditions and a priori uncertainty. In addition, the backpropagation method for training multi-layer neural networks requires significant computational resources and large amounts of labeled training data, which makes such networks difficult to implement in autonomous systems with limited resources. One approach to reducing the computational complexity and overfitting of deep machine learning is to apply neural gas principles, so that learning takes place during forward propagation of information, together with sparse coding to increase the compactness and informativeness of the feature representation.
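The sparse coding neural gas referenced here (Labusch et al., 2008, 2009) combines the rank-based soft competition of neural gas with an Oja-like dictionary update, learning an overcomplete set of filters during forward propagation without labels. The following is a minimal sketch, not the paper's implementation; parameter names and the exponential decay schedules are illustrative assumptions:

```python
import numpy as np

def sparse_coding_neural_gas(X, n_atoms=16, n_iter=5000, lr0=0.1, lam0=None, seed=0):
    """Minimal sketch of Sparse Coding Neural Gas.

    Learns an overcomplete dictionary W (n_atoms x dim) from data X
    (n_samples x dim). Each step draws one sample, ranks all atoms by the
    magnitude of their response, and applies an Oja-like update whose step
    size decays with the neural-gas rank of each atom.
    """
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    W = rng.standard_normal((n_atoms, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)      # unit-norm atoms
    if lam0 is None:
        lam0 = n_atoms / 2
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        frac = t / n_iter
        lr = lr0 * (0.01 / lr0) ** frac                # decaying learning rate
        lam = lam0 * (0.01 / lam0) ** frac             # shrinking neighbourhood
        y = W @ x                                      # atom responses
        ranks = np.argsort(np.argsort(-np.abs(y)))     # 0 = best-matching atom
        h = np.exp(-ranks / lam)                       # rank-based step weights
        # generalized Oja rule: pull atoms toward the sample, stay near unit norm
        W += (lr * h)[:, None] * (np.outer(y, x) - (y ** 2)[:, None] * W)
        W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W
```

Because the winner ordering is recomputed per sample, early training updates many atoms softly while late training (small neighbourhood) approaches winner-take-all sparse coding.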

The paper considers the use of sparse coding neural gas for training ten layers of the VGG-16 neural network on a sample from the ImageNet database. The effectiveness of the resulting feature extractor is evaluated by the results of so-called information-extreme supervised learning of the output classifier. Information-extreme learning is based on population-based optimization of the binary coding of observations and on the construction of radial-basis decision rules in binary Hamming space that are optimal with respect to an information criterion.
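In information-extreme learning, observations are binary-coded and each class receives a hyperspherical container in Hamming space whose radius is chosen to maximize an information criterion. The sketch below is a simplified illustration under stated assumptions: the function names are hypothetical, and the criterion is a simplified Kullback-style measure standing in for the paper's exact formulation:

```python
import numpy as np

def binarize(X, thresholds):
    """Binary-code real-valued features against per-feature thresholds."""
    return (X > thresholds).astype(np.uint8)

def info_extreme_radius(codes_own, codes_other, center):
    """Pick the Hamming radius of a class container that maximizes a
    simplified Kullback-style information criterion (illustrative form).

    codes_own / codes_other: binary codes of the class's own observations
    and of competing-class observations; center: the class's binary etalon.
    """
    d_own = np.count_nonzero(codes_own != center, axis=1)   # Hamming distances
    d_oth = np.count_nonzero(codes_other != center, axis=1)
    best_r, best_e = 0, -np.inf
    for r in range(codes_own.shape[1]):
        D1 = np.mean(d_own <= r)        # first-kind reliability
        alpha = np.mean(d_oth <= r)     # false-alarm rate
        s = (1.0 - D1) + alpha          # total error
        # simplified Kullback-style measure, scaled to vanish when D1 == alpha
        e = 0.5 * np.log2((2.0 - s + 1e-9) / (s + 1e-9)) * (D1 - alpha)
        if e > best_e:
            best_r, best_e = r, e
    return best_r, best_e
```

A new observation is then assigned to the class whose container it falls into, i.e. whose center lies within the optimized radius in Hamming distance.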

According to the results of physical modeling, unsupervised learning achieves a decision-rule accuracy of 96.4 %, which is inferior to the 98.7 % accuracy obtained with supervised learning. However, the absence of the error-backpropagation training algorithm opens the prospect of further research into meta-optimization algorithms for refining the feature extractor's filters and the parameters of the unsupervised training algorithm.



Author Biography

Viacheslav Moskalenko, Sumy State University

PhD, Associate Professor

Department of Computer Science


Huang, Z., Pan, Z., Lei, B. (2017). Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. Remote Sensing, 9 (9), 907. doi: 10.3390/rs9090907

Masci, J., Meier, U., Ciresan, D., Schmidhuber, J.; Honkela, T., Duch, W., Girolami, M. A., Kaski, S. (Eds.) (2011). Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction. Artificial Neural Networks and Machine Learning – ICANN 2011, 52–59. doi: 10.1007/978-3-642-21735-7_7

Labusch, K., Barth, E., Martinetz, T. (2008). Learning Data Representations with Sparse Coding Neural Gas. 16th European Symposium on Artificial Neural Networks (ESANN 2008), 233–238. doi: 10.1007/978-3-540-87536-9_81

Labusch, K., Barth, E., Martinetz, T. (2009). Sparse Coding Neural Gas: Learning of overcomplete data representations. Neurocomputing, 72 (7-9), 1547–1555. doi: 10.1016/j.neucom.2008.11.027

Dovbysh, A. S., Moskalenko, V. V., Rizhova, A. S. (2016). Information-Extreme Method for Classification of Observations with Categorical Attributes. Cybernetics and Systems Analysis, 52 (2), 224–231. doi: 10.1007/s10559-016-9818-1

Dovbysh, A. S., Moskalenko, V. V., Rizhova, A. S. (2016). Learning Decision Making Support System for Control of Nonstationary Technological Process. Journal of Automation and Information Sciences, 48 (6), 39–48. doi: 10.1615/jautomatinfscien.v48.i6.40

Ng, H.-W., Dung Nguyen, V., Vonikakis, V., Winkler, S. (2015). Deep Learning for Emotion Recognition on Small Datasets Using Transfer Learning. 17th International Conference On Multimodal Interaction (ICMI’15). Seattle, 443–449. doi: 10.1145/2818346.2830593.

Simonyan, K., Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations (ICLR2015), 1–14.

Parpinelli, R. (Ed.) (2012). Theory and New Applications of Swarm Intelligence. London: InTech, 204. doi: 10.5772/1405

Moskalenko, V., Pimonenko, S. (2016). Optimizing the parameters of functioning of the system of management of data center it infrastructure. Eastern-European Journal of Enterprise Technologies, 5 (2 (83)), 21–29. doi: 10.15587/1729-4061.2016.79231

How to Cite
Moskalenko, V. (2017). DEVELOPMENT OF THE METHOD OF UNSUPERVISED TRAINING OF CONVOLUTIONAL NEURAL NETWORKS BASED ON NEURAL GAS MODIFICATION. Technology Transfer: Fundamental Principles and Innovative Technical Solutions, 34-36.
Computer Sciences