Researching a machine learning algorithm for a face recognition system
Abstract
This article investigates the problem of using machine learning algorithms to recognize and identify a user in a video sequence.
The scientific novelty lies in the proposed improved Viola-Jones method, which allows faster and more efficient recognition of a person's face.
The practical value of the results lies in the possibility of using the proposed method to build human face recognition systems.
A review of existing face recognition methods, their main characteristics, architectures and features was carried out. Based on this study of methods and algorithms for finding faces in images, the Viola-Jones method, the wavelet transform and principal component analysis were chosen, as these methods are among the best in terms of the ratio of recognition efficiency to processing speed. Possible modifications of the Viola-Jones method are presented.
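For reference, the classic Viola-Jones detector is available in OpenCV as a pre-trained Haar cascade. The sketch below shows only this baseline detector using the library's bundled haarcascade_frontalface_default.xml model; it does not reproduce the improved variant or the wavelet/PCA stages proposed in the article, and the scaleFactor/minNeighbors values are ordinary defaults, not parameters from the study.

```python
import cv2

# OpenCV ships a pre-trained Haar cascade implementing the Viola-Jones detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_faces(image_path):
    """Return bounding boxes (x, y, w, h) of faces found by the Viola-Jones cascade."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors trade detection rate against false positives.
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(30, 30))
```

The speed of the method comes from evaluating Haar-like features on an integral image, which is why it remains a common choice for real-time detection in video.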
The main contribution of this article is an experimental study of the impact of various types of noise on recognition, together with an improvement of company security through the development of a computer system for recognizing and identifying users in a video sequence.
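The article does not list the specific noise models used in the experiments; as an illustration of how such a study can be set up, the sketch below applies two commonly used distortions (additive Gaussian noise and salt-and-pepper noise) to a grayscale frame before it is passed to the detector. The sigma and amount parameters are arbitrary example values.

```python
import numpy as np

def add_gaussian_noise(gray, sigma=15.0):
    """Add zero-mean Gaussian noise to an 8-bit grayscale image."""
    noisy = gray.astype(np.float32) + np.random.normal(0.0, sigma, gray.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(gray, amount=0.02):
    """Flip a random fraction of pixels to black or white."""
    noisy = gray.copy()
    mask = np.random.rand(*gray.shape)
    noisy[mask < amount / 2] = 0
    noisy[mask > 1.0 - amount / 2] = 255
    return noisy
```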
During the study, the following tasks were solved:
– a face recognition model is proposed, in which the system automatically detects a person's face in an image (scanned photos or video material);
– a face analysis algorithm is proposed, which represents a person's face as 68 nodal points;
– an algorithm for creating a digital faceprint is proposed, which converts the results of the facial analysis into a digital code;
– a match search module is developed, which compares the faceprint against the database until a match is found (an end-to-end sketch of these modules follows this list).
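Below is a minimal end-to-end sketch of the four modules listed above, assuming the dlib library and its publicly distributed models (shape_predictor_68_face_landmarks.dat for the 68 nodal points and dlib_face_recognition_resnet_model_v1.dat for a 128-dimensional faceprint). The model file names, the 0.6 matching threshold and the linear database scan are illustrative assumptions, not details taken from the article.

```python
import cv2
import dlib
import numpy as np

# Assumed dlib models (downloadable from dlib.net); not artifacts of the article.
detector = dlib.get_frontal_face_detector()
landmarks = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def faceprint(image_path):
    """Detect a face, locate its 68 nodal points, and return a 128-d descriptor."""
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    rects = detector(img, 1)                        # face detection module
    if not rects:
        return None
    shape = landmarks(img, rects[0])                # face analysis: 68 nodal points
    return np.array(encoder.compute_face_descriptor(img, shape))  # digital faceprint

def find_match(query_print, database, threshold=0.6):
    """Match search: compare the faceprint against the database until a match is found."""
    for name, known_print in database.items():
        if np.linalg.norm(query_print - known_print) < threshold:
            return name
    return None
```

Here database is simply a dictionary mapping user names to previously stored descriptors; a production system would replace the linear scan with an indexed search.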