APPLICATION OF KOHONEN SELF-ORGANIZING MAP TO SEARCH FOR REGION OF INTEREST IN THE DETECTION OF OBJECTS
Abstract
There is currently a pressing need to improve the performance of algorithms that detect objects in images. Detection can be accelerated by preprocessing that locates regions of interest, i.e. areas where the probability of finding an object is high. To this end, this paper describes an algorithm that extracts object boundaries with the Sobel operator and clusters them with Kohonen self-organizing maps, demonstrated on the task of determining regions of interest when searching for and recognizing objects in satellite images. The presented algorithm reduces the amount of data passed to the convolutional neural network that performs the final recognition by a factor of 15–100. It also significantly reduces the number of training images required, because the fragments of the input image fed to the convolutional network are tied to the image scale, sized to match the largest recognizable object, and centered on the object in the frame. This accelerates network training by more than 5 times, increases recognition accuracy by at least 10 %, and halves the required minimum number of layers and neurons of the convolutional network, thereby increasing its speed.
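The two-stage idea described in the abstract can be sketched as follows: a Sobel edge map highlights object boundaries, and a one-dimensional Kohonen self-organizing map is then trained on the coordinates of strong edge pixels so that its neurons drift toward edge clusters, yielding candidate region-of-interest centers. This is a minimal illustrative sketch under assumed parameters (neuron count, learning-rate and neighborhood schedules, edge threshold), not the authors' implementation.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via the Sobel operator."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):          # accumulate the 3x3 correlation term by term
        for j in range(3):
            patch = padded[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def som_roi_centers(points, n_neurons=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Fit a 1-D Kohonen SOM to edge-pixel coordinates; the neuron weights
    converge toward edge clusters, i.e. candidate ROI centers."""
    rng = np.random.default_rng(seed)
    w = points[rng.choice(len(points), n_neurons, replace=False)].astype(float)
    idx = np.arange(n_neurons)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3  # shrinking neighborhood
        for p in points[rng.permutation(len(points))]:
            bmu = np.argmin(((w - p) ** 2).sum(axis=1))          # best-matching unit
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))   # neighborhood weights
            w += lr * h[:, None] * (p - w)
    return w

# Toy "satellite image": two bright rectangles on a dark background.
img = np.zeros((40, 40))
img[5:12, 5:12] = 1.0
img[25:35, 22:34] = 1.0
edges = sobel_magnitude(img)
ys, xs = np.nonzero(edges > edges.max() * 0.5)   # strong-edge pixels
centers = som_roi_centers(np.column_stack([ys, xs]), n_neurons=2)
```

Each returned center would then define a fixed-size crop, scaled to the largest recognizable object, that is passed to the convolutional network instead of the full image.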
References
Simard, P. Y., Steinkraus, D., Platt, J. C. (2003). Best practices for convolutional neural networks applied to visual document analysis. Proceedings of the Seventh International Conference on Document Analysis and Recognition. doi: https://doi.org/10.1109/icdar.2003.1227801
Novikova, N. M., Dudenkov, V. M. (2015). Modeling a neural network for image recognition based on a hybrid network and Kohonen self-organizing maps [in Russian]. Aspirant, 2, 31–34.
Narushev, I. R. (2018). Neural network on the basis of the self-organizing Kohonen map as a means of detecting anomalous behavior. Ohrana, bezopasnost', svyaz', 2 (3 (3)), 194–197.
Girshick, R., Donahue, J., Darrell, T., Malik, J. (2014). Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition. doi: https://doi.org/10.1109/cvpr.2014.81
Girshick, R., Donahue, J., Darrell, T., Malik, J. (2016). Region-Based Convolutional Networks for Accurate Object Detection and Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38 (1), 142–158. doi: https://doi.org/10.1109/tpami.2015.2437384
Girshick, R. (2015). Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV). doi: https://doi.org/10.1109/iccv.2015.169
Ren, S., He, K., Girshick, R., Sun, J. (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems, 91–99.
He, K., Gkioxari, G., Dollar, P., Girshick, R. (2017). Mask R-CNN. 2017 IEEE International Conference on Computer Vision (ICCV). doi: https://doi.org/10.1109/iccv.2017.322
Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). doi: https://doi.org/10.1109/cvpr.2016.91
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., Berg, A. C. (2016). SSD: Single Shot MultiBox Detector. Computer Vision – ECCV 2016, 21–37. doi: https://doi.org/10.1007/978-3-319-46448-0_2
Skuratov, V., Kuzmin, K., Nelin, I., Sedankin, M. (2019). Application of kohonen neural networks to search for regions of interest in the detection and recognition of objects. Eastern-European Journal of Enterprise Technologies, 3 (9 (99)), 41–48. doi: https://doi.org/10.15587/1729-4061.2019.166887
Haykin, S. (2008). Neural networks: a complete course [in Russian]. Moscow: Izdatel'skiy dom Vil'yams.
Kohonen, T. (2001). Self-organizing maps. Vol. 30. Springer Science & Business Media, 502. doi: https://doi.org/10.1007/978-3-642-56927-2
Copyright (c) 2020 Victor Skuratov, Konstantin Kuzmin, Igor Nelin, Mikhail Sedankin

This work is licensed under a Creative Commons Attribution 4.0 International License.