IRIS RECOGNITION METHOD BASED ON SEGMENTATION

The development of science and technology has produced many modern tools aimed at enhancing security, driven by the growing need for strong protection of individuals and societies. Identification based on a person's biometric characteristics has therefore become an important topic for governments, businesses, and individuals. Many biometric features, such as fingerprints, facial measurements, palm prints, gait, fingernails, and the iris, have been studied and used. Among all biometrics, the iris receives particular attention because of its unique advantages: the iris pattern is unique and does not change over time, providing the accuracy and stability required in verification systems, and it is practically impossible to modify without risk. When identifying a person by the iris, the recognition system only needs to compare the iris feature data of the person under test to determine their identity, so the iris must first be extracted from the captured images. Choosing a correct iris segmentation method is the most important stage in the verification system; it includes determining the limbic boundaries of the iris and pupil, handling the effects of eyelids and shadows, and avoiding over-centralization that reduces the effectiveness of the iris recognition system. Many techniques exist for extracting the iris from the captured image. This paper presents the architecture of biometric systems that use the iris to distinguish people, surveys iris segmentation methods used in recent research, discusses the methods and algorithms applied for this purpose, presents the datasets and the accuracy of each method, and compares the performance of the methods used in previous studies.


Introduction
With the increasing use of verification systems nowadays, biometric verification systems have become more common than ever [1]. At present, biological characteristics can be divided into physiological characteristics (such as the face, fingerprint, iris, retina, palm shape, etc.) and behavioural patterns (such as voice, signature, etc.) [2,3]. In terms of uniqueness and safety, physiological characteristics are clearly superior to behavioural ones. From a security perspective, biometric identification technology offers high security and reliability [4], because it is more robust than ordinary passwords and smart security devices. It is most commonly used in places that require high-security protection, such as government secret departments, legislative departments, banks and financial centres, laboratories, and private residences [5]. The scope of use of such technologies is increasing day by day, with clear signs of adoption for civilian purposes. Applications can be divided into access control, computer login identification, and identity verification with receipt functions [6]. Among all biometrics, the iris in particular receives attention because of its unique advantages: the iris pattern is unique and does not change over time, which provides the accuracy and stability required in verification systems, and it is practically impossible to modify without risk [7][8][9]. Many methods and improvements have been introduced in iris recognition [10,11]. Iris recognition converts the scanned iris image into a digital code, uses an algorithm to extract a discriminative feature vector, and stores it in a computer database [12]. At identification time, the system only needs to compare the iris feature data of the person under test to identify the individual [13].
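The encoding pipeline described above (image, then binary feature vector, then database) is commonly implemented by quantizing the phase of Gabor filter responses, in the spirit of Daugman's approach. The sketch below is illustrative only and is not the method of any cited paper; the 1-D per-row convolution and the filter parameters are simplifying assumptions:

```python
import numpy as np

def gabor_iris_code(strip, wavelength=8.0, sigma=4.0):
    """Encode a normalized iris strip (2-D float array) into a binary code
    by quantizing the phase of a 1-D Gabor response along each row.
    Each complex response yields 2 bits: the signs of its real and
    imaginary parts."""
    x = np.arange(strip.shape[1])
    carrier = np.exp(2j * np.pi * x / wavelength)      # complex sinusoid
    envelope = np.exp(-((x - x.mean()) ** 2) / (2 * sigma ** 2))
    kernel = carrier * envelope                        # 1-D Gabor kernel
    bits = []
    for row in strip:
        response = np.convolve(row - row.mean(), kernel, mode="same")
        bits.append(np.real(response) > 0)
        bits.append(np.imag(response) > 0)
    return np.concatenate(bits).astype(np.uint8)

# Example: encode a synthetic 4x32 strip (stand-in for a normalized iris).
rng = np.random.default_rng(0)
strip = rng.random((4, 32))
code = gabor_iris_code(strip)
print(code.shape)  # (256,) -> 2 bits per pixel
```

A real system would use 2-D Gabor wavelets at several scales and orientations; the key idea shown here is that only the phase sign is kept, making the code compact and comparable bit by bit.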
Determining the correct segmentation method for the iris is the most important stage in the verification system, including finding the limbic boundaries of the iris and the pupil, handling the effects of eyelids and shadows, and avoiding over-centralization that reduces the effectiveness of the iris recognition system [14]. Sensors capture eye images; the preprocessing unit locates the iris inside the image and extracts the iris portion for further processing. There are many ways to segment the iris. This paper presents a literature survey of the iris segmentation methods used in current research, the efficiency of each method, and a comparison of their performance.
Many studies dealt with the problem of iris segmentation. The following is a survey of the previous literature.
The paper [15] presents a technique to improve the efficiency and accuracy of human identification and recognition via feature extraction. Compared to other biometric systems, the iris-based biometric system is among the most stable and trustworthy. Image segmentation, image normalization, feature extraction, and matching are the approaches used in this work to create an iris biometric system; each significantly impacts the system's performance, precision, and dependability.
In the paper [16], a Gaussian triangular fuzzy filter and a smooth triangular mean filter were developed for image processing, blurring the regions outside the border to enhance the signal-to-noise ratio. Using the modified image in the fuzzy process of the deep learning method's training phase accelerates convergence and improves recognition accuracy. The authors found that filtering noisy pictures to improve the signal-to-noise ratio enhanced accuracy and boosted the convergence rate by a large margin.
In the paper [17], the author notes that building an iris identification system requires a fast and accurate iris segmentation technique for less constrained iris pictures. The integrodifferential operator (IDO) of Daugman's algorithm is a strong iris segmentation technique; however, it takes a long time to locate the iris centre and eyelid margins. The authors propose a new, rapid iris segmentation technique to overcome this challenge. A circular Gabor filter is used to determine the approximate location of the pupil centre. The IDO then locates the iris and pupil circles, exploiting the fact that the true centres of the iris and pupil lie within a narrow radius around the approximate pupil centre. The live-wire approach is employed to extract the upper and lower eyelid borders. Experiments show that the suggested iris segmentation method greatly reduces the time necessary to segment the iris.
In the paper [18], the researchers presented a new strategy that combines hardware and software to determine the distance at which the eye's iris can be recognized. To compensate for image blurring, they used improved coding and resampled the blurred image, which increases the frame recognition rate, but at high cost.
The paper [19] suggested a code-level scheme for heterogeneous iris recognition. A modified Markov network was used to model the non-linear link between the binary codes of heterogeneous iris pictures. The model can also provide a weight map on the reliability of each bit in the iris template. Compared to existing pixel-level, feature-level, and score-level solutions, extensive experimental results on cross-sensor, high-resolution versus low-resolution, and clear versus occluded iris matching revealed that the code-level technique achieves the highest accuracy.
The paper [20] developed an improved iris recognition system based on Daugman's algorithm that integrates iris localization with iris encoding and matching. In Phase 1, the iris picture was used to determine the pupil's location and shape. In Phase 2, potential noise from residual eyelashes was removed by using a «pure» iris component as a reference and making pixel-by-pixel acceptance decisions. The proposed method offers a demonstrable benefit in increasing speed while lowering the rejection rate.
In the paper [21], the authors proposed using YOLOv4 to obtain the iris area, increasing localization accuracy and reducing reflected noise, thereby avoiding the difficulty of defining the inner iris borders in the presence of noise interference. Finally, an improved radial difference modulation approach is provided to increase the accuracy of localizing the outer iris borders. However, the training phase of this approach is lengthy and requires a large amount of data.
Previous studies dealt with iris recognition and segmentation methods, but they did not address finding and comparing iris segmentation methods and their impact on the accuracy of the verification system.
This work aims to study and compare methods of iris segmentation in recognition systems. To achieve the general goal, the following sub-objectives were identified:
- study recognition systems that use the iris of the eye and determine the importance of the iris segmentation step on the acquired image;
- survey the literature on iris recognition systems and identify the segmentation methods and datasets used;
- compare the most used methods and the accuracy of each method according to the dataset used.

Materials and methods
This paper summarizes iris recognition systems, their segmentation methods, the datasets used, and each method's accuracy with respect to the dataset used.

1. Eye composition
According to medical anatomy, the eyeball wall is divided into outer, middle, and inner layers. The middle layer is rich in blood vessels and pigment cells. The iris is located in the foremost part of this middle layer, between the cornea and the lens, and consists of several layers. Its colour varies from race to race. There is a round hole called the pupil in the centre, and the outside is adjacent to the white of the eye (Fig. 1). The innermost layer is the pigmented epithelial layer; above it is the muscle layer that controls the pupil, and on the muscle layer is the stromal layer. This layer is formed of gelatinous tissue connected into an arc shape with radiating spiral blood vessels, so it is also called the vascular layer. The uppermost layer is the anterior border layer, distributed with chromatophores. The appearance of the iris is the combined effect of these four layers [22][23][24].

Fig. 1. Biological Characteristics of the Iris
The muscle layer of the iris contains two kinds of smooth muscle arranged in different directions: the pupillary sphincter, arranged in a circular shape around the pupil, whose contraction causes the pupil to shrink; and the pupillary dilator, arranged radially from the pupil outward, whose contraction makes the pupil dilate. As a result, the surface of the iris has unevenly, radially arranged and interlaced wrinkled walls, forming a large number of recesses. These high and low variations in the structure form a unique texture [25]. The iris is an internal tissue of the protected eye, lying behind the cornea and the aqueous humour. The visible part of the eye is composed of three regions: the sclera, iris, and pupil. The black part in the centre is the pupil area; the pupil shrinks or dilates as the intensity of the incident light changes, which affects the iris. The lighter part on both sides is the sclera area, and the iris lies between the pupil and the sclera. The iris, sclera, and pupil borders are all approximately circular, which is important geometric information used in image matching. The iris is a fabric-like ring of various colours surrounding the eye's pupil [26]. Each iris contains unique features based on crowns, lenses, filaments, spots, structures, pits, rays, wrinkles, and stripes. Genes determine the formation of the iris: human gene expression determines its shape, physiology, colour, and overall appearance. Except for rare abnormal conditions and physical or mental traumas that cause changes in its appearance, the iris can remain unchanged for decades [27].
On the other hand, the iris is visible from the outside, but at the same time, it is an internal tissue located behind the cornea. To change the appearance of the iris, very delicate surgery is required, and there is a risk of visual damage.

2. Iris recognition system
Usually, a complete iris recognition process includes iris image acquisition, iris preprocessing, iris feature extraction, and iris comparison. Iris segmentation belongs to the preprocessing stage, and its result directly affects the accuracy of iris recognition. Iris segmentation can be understood in a narrow and a broad sense [28]. In the narrow sense, iris segmentation extracts the effective iris texture areas that are not disturbed by noise [29,30]. The extraction result is a binary mask, where 1 represents effective iris pixels and 0 represents other areas; iris segmentation can therefore also be regarded as an application of two-class semantic segmentation to iris images. In the broad sense, iris segmentation additionally includes locating the inner and outer boundaries of the iris [31,32], which are needed for the subsequent normalization operation. Since the iris lies in the circular area between the black pupil of the human eye and the white sclera, it is susceptible to interference from eyelashes, shadows, light spots, and frames. Iris recognition systems for identifying persons include several stages [33,34]; these stages are shown in Fig. 2.
Image acquisition: at this stage, images are taken by a camera placed at a relatively close distance. The captured image should have good brightness and resolution.
Image segmentation: this stage is part of image preprocessing. The iris region is isolated from the rest of the digital image; this region is enclosed between two circles, where the inner circle separates the iris from the pupil and the outer circle separates the iris from the sclera.
Normalizing: after iris segmentation, the next stage is normalizing the iris, mapping it to a fixed-size representation. The purpose of this process is to enable generation of the iris code for comparison.
Feature extraction: in this stage, important features are extracted for matching. The iris image usually contains both important and unimportant features; this stage extracts the important features that the system needs for matching.
Matching: at this stage, the features stored in the database are compared with those currently extracted in order to identify the person.
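The normalization stage above is often implemented with Daugman's rubber-sheet model, which remaps the annular iris region to a fixed-size rectangle. A minimal numpy sketch follows, assuming concentric pupil and iris circles and nearest-neighbour sampling (real systems handle non-concentric boundaries and use interpolation):

```python
import numpy as np

def rubber_sheet(image, cx, cy, r_pupil, r_iris, n_radial=32, n_angular=128):
    """Unwrap the annular iris region (between the pupil and iris circles,
    assumed concentric here) into an n_radial x n_angular rectangular strip
    using the rubber-sheet model with nearest-neighbour sampling."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(0, 1, n_radial)  # 0 = pupil boundary, 1 = iris boundary
    strip = np.zeros((n_radial, n_angular), dtype=image.dtype)
    for i, rfrac in enumerate(radii):
        r = r_pupil + rfrac * (r_iris - r_pupil)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
        strip[i] = image[ys, xs]
    return strip

# Example: unwrap a synthetic 200x200 image around centre (100, 100).
img = np.fromfunction(lambda y, x: (x + y) % 256, (200, 200))
strip = rubber_sheet(img, cx=100, cy=100, r_pupil=20, r_iris=60)
print(strip.shape)  # (32, 128)
```

The fixed output size is what makes iris codes from different eyes and capture distances directly comparable, regardless of pupil dilation.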

Fig. 2. Iris Recognition System
As shown in Fig. 2, the enrollment and verification processes follow the same steps, and matching is the final stage of any recognition system.

3. Iris Segmentation Technique
Iris segmentation techniques can be roughly divided into two categories: boundary-based methods and pixel-based methods. The former obtain the isolated iris area by locating the inner and outer boundaries of the iris and the upper and lower eyelids, and by removing eyelash occlusion, specular reflection, etc. Daugman's integrodifferential operator [35,36] and the circular Hough transform [37] are the most typical works. These two methods assume that the inner and outer boundaries of the iris are circular and that the grey pixel values change greatly at the iris boundary. The integrodifferential operator calculates the sum of the radial grey-level changes along circles around a candidate centre and finds the maximum value to determine the parameters of the inner and outer iris circles. Wildes first uses a gradient-based edge detection algorithm, such as Canny edge detection, to detect iris edge points, and then performs the Hough transform on the obtained edge points to obtain the inner and outer circle parameters of the iris. Both types of methods obtain good segmentation results on ideal, clear iris images; however, they perform poorly on iris images captured at long distances and under visible light, and they do not detect noise such as the upper and lower eyelids and specular reflections. Based on these two types of algorithms, much work has appeared to improve segmentation performance. The authors of [38] suggest an effective way to address this problem: first, the normal iris image is divided into sub-regions according to the characteristics of the iris tissue; then local binary patterns (LBP) are adopted to represent the texture of each sub-region; finally, AdaBoost learning is applied to identify the most discriminative LBP features for fake-iris detection. In particular, a kernel density estimation scheme is proposed to compensate for the insufficient number of fake iris images during AdaBoost training.
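The circular Hough transform described above can be sketched directly: edge pixels (in practice produced by a Canny detector, as in Wildes' approach) vote for candidate circle centres at each radius, and the accumulator maximum gives the boundary circle. This is an illustrative brute-force implementation on a synthetic edge map, not the optimized variants used in the cited work:

```python
import numpy as np

def hough_circle(edges, r_min, r_max):
    """Brute-force circular Hough transform: every edge pixel votes for all
    circle centres at each candidate radius; returns the best (cx, cy, r)."""
    ys, xs = np.nonzero(edges)
    h, w = edges.shape
    best, best_votes = None, -1
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for r in range(r_min, r_max + 1):
        acc = np.zeros((h, w), dtype=np.int32)
        # Each edge point votes for centres lying on a circle of radius r around it.
        cx = np.round(xs[:, None] - r * np.cos(thetas)).astype(int)
        cy = np.round(ys[:, None] - r * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
        if acc.max() > best_votes:
            best_votes = acc.max()
            y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
            best = (x0, y0, r)
    return best

# Example: detect a synthetic circle of radius 15 centred at (40, 30).
edges = np.zeros((64, 64), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(30 + 15 * np.sin(t)).astype(int),
      np.round(40 + 15 * np.cos(t)).astype(int)] = 1
best_circle = hough_circle(edges, 10, 20)
print(best_circle)  # expected near (40, 30, 15)
```

For iris images the same voting is run twice, with different radius ranges for the pupil and the limbic boundary; the circularity assumption is exactly the limitation noted later in this paper.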
In [39], Hough transform and integrodifferential operator strategies were used for iris segmentation. The iris recognition system captures a picture of the individual's eye; the captured photo is then segmented and normalized for the encoding procedure. Hamming distance is used as the matching technique to compare the iris code against the database, matching it with the newly enrolled code at the verification stage. In [40], a hybridization of Daugman's integrodifferential operator (IDO) with edge-based methods was realized, taking advantage of the best characteristics of each strategy to improve precision and reduce the required time. In [41], the Viterbi algorithm was used for iris segmentation on iris gradient images at two resolutions: high-resolution images are used to locate the fine inner and outer contours of the iris to obtain the segmentation mask, while low-resolution images are used to locate rough contours and obtain the inner and outer circles. In [42], a novel total variation model is proposed that uses l1-norm regularization to suppress noise robustly and generate images with clear boundaries, and an improved circular Hough transform is applied to the generated images for inner and outer circle positioning, obtaining accurate detection results. In addition, a series of novel post-processing operations is used to obtain an accurate binary mask of the iris. An unsupervised segmentation method was designed specifically for noisy images and obtained the best segmentation results in the mobile-terminal iris challenge MICHE-I. This method determines the approximate position of the iris through a series of preprocessing operations such as reflection correction, inner and outer circle detection using modified integrodifferential operators, normalization, and upper and lower eyelid positioning.
The iris texture is then modelled with a spectral spatial probability model, and an adaptive threshold method is used to detect noise pixels and indirectly obtain the effective iris area. The authors also tested on the NICE.I dataset, where the method ranked first with an E1 error rate of 0.0124, exceeding the winning algorithm of the NICE.I competition and fully proving the robustness and accuracy of the algorithm [43]. Besides modelling the inner and outer boundaries as circles, some work divides segmentation into four stages: the first stage enhances the image contrast; the second stage uses the HOG descriptor and an SVM to locate the iris position; the third stage uses the GrowCut algorithm to extract iris pixels in the located region; the fourth stage uses post-processing to remove holes, shadows, and reflections [44]. Such methods usually require manual feature design, and feature extraction and segmentation training are separated, so iris segmentation in complex scenes faces great challenges. In general, traditional iris segmentation algorithms involve a great deal of preprocessing and manual operations, so their accuracy is easily affected by these intermediate steps, which in turn affects the robustness of the algorithm.
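The Hamming-distance matching mentioned above (used in [39]) compares binary iris codes bit by bit, counting only bits that are valid in both codes. The mask handling below follows Daugman's fractional formulation; the toy codes are illustrative:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits that are valid (mask == 1) in both codes."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0  # no comparable bits: treat as maximally distant
    return float(((code_a ^ code_b) & valid).sum() / valid.sum())

# Example: codes differing in 2 of 8 bits, one of which is masked out.
a = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
b = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=np.uint8)
m = np.array([1, 1, 1, 1, 1, 1, 0, 1], dtype=np.uint8)  # bit 6 is occluded
print(hamming_distance(a, b, m, m))  # 1 differing bit / 7 valid bits ≈ 0.143
```

A distance below a decision threshold (commonly around 0.32 in Daugman-style systems) is declared a match; the masks are exactly the binary segmentation output discussed in this section, which is why segmentation quality feeds directly into matching accuracy.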

4. Comparison of iris segmentation methods
Finding the iris in captured images is important in recognition systems that rely on the iris as the basis of their work. Table 1 summarizes studies that used iris segmentation to recognize people in terms of the methods used, the datasets, and the iris segmentation methods. In paper [54], feature extraction is done using an efficient multi-resolution 2D Log-Gabor wavelet transform, facial features are computed using singular spectrum analysis (SSA), and hybrid-level fusion is used. In [46], a novel normalization method is proposed for classification on iris images segmented by the circular Hough transform. In [47], an algorithm for iris detection is provided that is stable against noise and whose computational complexity is low enough for real applications; the algorithm computes a special iris signature using the contourlet transform and Shannon entropy, and the created descriptor can be checked against a database of iris codes to determine whether the iris exists in the database. In [48], a new feature extraction and classification system is suggested that focuses on grey-level discrepancy and hybridization; the CASIA-Iris V3 dataset is used for the experimental results. In [49], the Hough and Daugman methods are used for segmentation, with the results showing that the Hough transform is more accurate for iris segmentation. In [50], PCA was used to reduce the data, and the minimum distance is used to verify the similarity between the features and the training image; three similarity techniques are applied between the iris features and the samples saved in the database, and the cosine approach provides stronger performance than the other approaches without dimensionality reduction. In paper [43], the researchers studied the effect of capturing the eye image with a regular camera, which reduces classification performance based on thresholding segmentation. In paper [51], eye images captured at visible wavelengths are used by the system.
The built-in camera of a smartphone acquires these images. The development of the system proceeds through four key stages: segmentation, translation, normalization, and matching; the system uses seven different matching strategies to align iris representations with faces [46]. Fusion of face, iris, and fingerprint is performed at the score level: min-max normalization is applied first, and weighted sum rules are used for fusion, with results showing that multi-biometric systems outperform uni-biometric systems. In [52], an iris and fingerprint recognition system was designed, and biometric features were matched at the score level. In [53], a biometric identification system using an iris and a fingerprint was designed with score-level matching, and the Hough transform was used for iris segmentation and feature extraction. In [54], the iris segmentation process suffers from a lighting problem when using Daugman's algorithm, and the study suggested a method to improve the operation of this algorithm. Iris segmentation affects the accuracy of identification systems using the iris; to determine the most used methods and their impact on system accuracy across datasets, the research dealing with this concept was compared, and the results of the comparison follow.

Results and discussion
1. Accuracy of iris recognition systems
Identification using the iris is considered one of the systems with high reliability and accuracy; Fig. 3 shows the accuracy of each system with respect to the segmentation method.

Fig. 3. Highest accuracy for each segmentation method
The Hough transform, followed by direction intensity, achieved the highest accuracy, while the thresholding method was less accurate than the rest. Table 1 summarizes all the methods used for iris segmentation, giving the performance related to the datasets used. Various types of iris segmentation methods have been examined, such as thresholding, the circular Hough transform, the Hough transform, etc. Each technique has its advantages, and high recognition rates are achieved using the Hough transform, the predominant method for iris segmentation. Fig. 4 shows the usage of each method and the research trend. The circular Hough transform has been used in many studies and is the most common method for iris segmentation, followed by the Hough transform and then the rest of the methods.

2. Datasets used in iris recognition systems
The dataset used to train and evaluate a system affects its recognition rate. Both standard and local datasets were used to build recognition systems. Fig. 5 shows the number of times each dataset was used in the surveyed research.

Fig. 5. Datasets used in iris recognition systems
The CASIA dataset is the standard data for iris recognition systems. The data set has several versions, and the first version is the most widely used.
Most of the studies have used the Hough Transform method, and it has given high accuracy in the iris recognition systems for the different data sets used. This method is the most popular because it is less affected by noise and, therefore, suitable for different data sets.
This study was limited to the segmentation stage with respect to recognition accuracy on the datasets. Other factors can also affect a system's accuracy, such as the machine learning algorithms used and the comparison of systems as a whole.
One of the shortcomings of studying a single stage (iris segmentation) of a system consisting of several stages is that it does not give a comprehensive view of the compatibility of segmentation methods with the rest of the stages; nevertheless, the segmentation method and the dataset used were taken into consideration.
In this study, the effect of iris segmentation methods on recognition systems was presented. One of the difficulties facing this type of study is that the methods for measuring system performance differ, and local datasets can make comparisons ambiguous because the laboratory environment is not fixed. Several significant points may be drawn from all of the articles listed in this study:
- the majority of the research datasets are collected in standard settings, without severe noise or occlusion;
- when the shape of the pupil's border cannot be approximated as a circle, no approach considers how exactly the iris and pupil portions are split;
- no approach has been developed to handle all four noise sources in a single algorithm: eyelashes, eyelids, reflections, and the pupil; the inner and outer borders, eyelashes, and eyelids are processed in separate stages, resulting in a large increase in the system's processing time;
- typically, circle-fitting techniques are used to locate the inner and outer boundaries; because the edges of the iris are not perfectly circular, this introduces error;
- the outer border of the iris does not have distinct edges under noisy conditions.

Conclusions
Iris recognition systems are characterized by high accuracy and low error rates. The system consists of several stages, and the iris segmentation stage impacts the accuracy of the system. The study found that iris segmentation methods affect recognition accuracy.
The Hough transform is the most common method for iris segmentation. It gives high accuracy, as it is less affected by noise than other methods, across different datasets and systems.
The most widely used data set is CASIA. The system's accuracy for the data set reached 99.8 % using the Hough transform in the segmentation stage.