The procedure is called enrollment and involves creating an enrollment data record for the biometric data subject (the person to be enrolled) and storing it in a biometric enrollment database. For recognition, the biometric data subject (the person to be recognized) presents his or her biometric identifier to the biometric capture device, which generates a recognition biometric sample from it. From this recognition sample, biometric feature extraction derives biometric features that are compared against one or more biometric templates from the biometric enrollment database.

## INTRODUCTION

The system has been tested using the bin-miss rate and performs better than the traditional k-means approach. It uses the Discrete Wavelet Transform (DWT) of biometric images and randomized processing strategies for hashing. In this scheme, the input image is decomposed into approximation, vertical, horizontal, and diagonal coefficients using the discrete wavelet transform.
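The four-subband decomposition described above can be sketched with a single-level 2D Haar DWT. This is a minimal hand-rolled version for illustration (it assumes the Haar wavelet and an even-sized image; naming conventions for the horizontal/vertical detail bands vary, and in practice a library such as PyWavelets would be used):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT.

    Returns the approximation (cA), horizontal (cH), vertical (cV),
    and diagonal (cD) coefficient subbands, each half-size per axis.
    Assumes the image has even height and width.
    """
    img = np.asarray(img, dtype=float)
    # Low/high-pass along columns (pairwise average/difference of rows)
    lo = (img[0::2, :] + img[1::2, :]) / 2.0
    hi = (img[0::2, :] - img[1::2, :]) / 2.0
    # Low/high-pass along rows
    cA = (lo[:, 0::2] + lo[:, 1::2]) / 2.0  # approximation
    cH = (lo[:, 0::2] - lo[:, 1::2]) / 2.0  # horizontal detail
    cV = (hi[:, 0::2] + hi[:, 1::2]) / 2.0  # vertical detail
    cD = (hi[:, 0::2] - hi[:, 1::2]) / 2.0  # diagonal detail
    return cA, cH, cV, cD
```

A smooth image concentrates its energy in cA, with the three detail subbands near zero; repeating the step on cA gives the multi-level decomposition used later in the text.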

## BIOMETRICS IDENTIFICATION SYSTEMS

## CHARACTERISTICS OF BIOMETRIC SYSTEMS

## BIOMETRIC MODALITIES

- *VOICE*
- *INFRARED FACIAL AND HAND VEIN THERMOGRAMS*
- *FINGERPRINTS*
- *FACE*
- *IRIS*
- *GAIT*
- *KEYSTROKE DYNAMICS*
- *SIGNATURE AND ACOUSTIC EMISSIONS*
- *ODOR*
- *RETINAL SCAN*
- *HAND AND FINGER GEOMETRY*

Larger representations of the finger are based on the whole image, finger ridges or salient features derived from the ridges (minutiae). In static signature verification, only geometric (shape) features of the signature are used to authenticate an identity. A component of the odor emitted by a human body (or any animal) is characteristic of a particular individual.

## CHALLENGES IN DESIGN OF SYSTEM

Nevertheless, hand geometry has become acceptable in a number of installations for identity authentication applications in recent years. The system may be able to determine the subject's identity or decide that the subject is not represented in the database. Some research issues in the design of biometrics-based identification systems are described below.

Obtaining relevant data for biometrics is a critical process that has not received adequate attention. Segmentation refers to separating the input data into foreground (the object of interest) and background (irrelevant information). There are many opportunities to exploit the data-capture context, which can further improve system performance and help avoid undesirable measurements (and the subsequent recapture of desired measurements).
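As a toy illustration of the foreground/background split just defined, the simplest possible segmenter is a global intensity threshold (an assumed baseline for illustration only; real biometric segmenters exploit far richer context, as the text goes on to argue):

```python
import numpy as np

def segment_foreground(image, threshold=None):
    """Minimal foreground/background segmentation by global thresholding.

    Pixels strictly above the threshold (default: the image mean) are
    marked as foreground (True); the rest are background (False).
    """
    img = np.asarray(image, dtype=float)
    if threshold is None:
        threshold = img.mean()  # crude data-driven default
    return img > threshold
```

The returned boolean mask can then be used to restrict feature extraction to the object of interest.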

With cheap desktop computing and large input bandwidth, the context of the data capture can typically be made richer to improve performance. Also needed are rigorous and realistic models of the input measurements, together with metrics for assessing the quality of a measurement. When the option to reject a poor-quality input measurement is not available (e.g., in legacy databases), the system may have to extract a useful signal from noisy input measurements.

In addition, robust and realistic models of the object of interest often facilitate cleaner and better design of segmentation algorithms.

## Representation

Although a number of existing identification systems routinely assign a quality index to the input measurement indicating its desirability for matching, the approach to such a quality assessment metric is subjective, debatable, and typically inconsistent. How to improve the input measurements without introducing any artifacts is an active research topic. Similarly, conventional foreground/background separation typically relies on ad hoc processing of input measurements, and improving the information bandwidth of the input channel (e.g., using more sensory channels) often provides very efficient routes to segmentation.

Representation: Deriving machine-readable representations that completely capture the invariant and discriminatory information in the input measurements is the most challenging problem. The question of which features of an information source are salient should also be explored. However, the usability of systems based on such representation schemes may be limited by factors such as brightness variations, image-quality variations, scars, and large global distortions in the fingerprint image, since these systems essentially resort to template-matching strategies for verification.

Some system designers try to circumvent this problem by constraining the representation to be derived from a small (but consistent) part of the finger. However, if the same representation is also used for identification applications, the resulting systems may run the risk of limiting the number of unique identities that can be handled, simply due to the fact that the number of distinguishable templates is limited. On the other hand, an image-based representation makes fewer assumptions about the application domain (fingerprints) and therefore has the potential to be robust to wider varieties of fingerprint images.

Feature Extraction: Given the raw input measurements, automatically extracting the chosen representation is an extremely difficult problem, especially when the input measurements are noisy.

However, incorporating these features into a fully automatic fingerprint system is not feasible, because it is not easy to detect them reliably using state-of-the-art image-processing techniques. Rigorous models of the feature representations help in reliably extracting the features from the input measurements, especially in noisy situations. Matching: At the heart of matching is a similarity function that quantifies the intuition of similarity between two representations of the biometric measurements.
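A minimal concrete instance of such a similarity function, assuming fixed-length feature vectors, is cosine similarity (an illustrative choice; real biometric matchers are far more elaborate and representation-specific):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two feature vectors.

    Returns 1.0 for vectors pointing in the same direction and 0.0
    for orthogonal (maximally dissimilar) directions.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Comparing the score against a decision threshold then yields the accept/reject decision discussed in the evaluation chapter.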

A representation scheme and a similarity metric determine the accuracy performance of the system for a given population of identities; the choice of an appropriate representation and similarity scheme is therefore critical. Where to strike the trade-off between matcher complexity and control of the environment is an open problem. The three-dimensional shape of the finger is imaged onto the two-dimensional surface of the glass plate.

Typically, this (non-homogeneous) mapping function is determined by the finger's pressure and contact on the glass plate. Non-uniform contact: the ridge structure of a finger would be captured completely only if the ridges of the imaged part of the finger were in full optical contact with the glass plate. Different image-processing operations can introduce inconsistent biases that confound the location and orientation estimates of minutiae derived from their grayscale counterparts.

A typical imaging system distorts the image of the object being observed due to imperfect imaging conditions.

## Search, organization, and scalability

Systems dealing with a large number of identities should be able to operate effectively as the number of users enrolled in the system increases.

## EVALUATION OF A BIOMETRIC SYSTEM

- *FRR IN DETAIL*
- *FAR IN DETAIL*
- *HOW DO THE FAR/FRR PAIRED GRAPHS AFFECT A BIOMETRIC SYSTEM*
- *RECEIVER OPERATING CHARACTERISTIC (ROC) OF A BIOMETRIC SYSTEM*

The false identification rate is the probability that, in an identification, the biometric features are wrongly assigned to a reference. Due to the statistical nature of the false rejection rate, a large number of verification trials must be performed to obtain statistically reliable results. The total FRR for N participants is defined as the average of the individual rates FRR(n):

$$\mathrm{FRR} = \frac{1}{N}\sum_{n=1}^{N}\mathrm{FRR}(n) \qquad (1.1)$$

The values become more accurate with a higher number of participants (N).
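The averaging over participants in (1.1) amounts to:

```python
def total_frr(frr_per_participant):
    """Total FRR over N participants: the mean of the individual
    FRR(n) values, as in Eq. (1.1)."""
    return sum(frr_per_participant) / len(frr_per_participant)
```

For example, per-participant rates of 10%, 20%, and 30% average to a total FRR of 20%.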

However, in many systems, rejections due to poor quality are generally independent of the threshold value. Due to the statistical nature of the false acceptance rate, a large number of fraud attempts must be made to obtain statistically reliable results. With the number of all independent fraud attempts against a person (or characteristic) n as the denominator, the per-person rate is

$$\mathrm{FAR}(n) = \frac{\text{number of accepted fraud attempts against } n}{\text{number of all independent fraud attempts against } n} \qquad (1.3)$$

These values become more reliable with more independent attempts per person/characteristic.

The key figure for determining statistical significance is the number of independent trials. The approximation (≈) indicates that only the expected values of the measured FMR and FNMR error rates equal the underlying error probabilities. In contrast, the false rejection rate (FRR) is the number of authorized-user similarity scores that fall below this same threshold, divided by the total number of queries.
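The threshold-based definitions above can be sketched as an empirical computation over recorded similarity scores (assuming scores where higher means more similar):

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Empirical error rates at a similarity threshold.

    FAR: fraction of impostor scores at or above the threshold
         (falsely accepted).
    FRR: fraction of genuine (authorized-user) scores below the
         threshold (falsely rejected).
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Sweeping the threshold and plotting the resulting (FAR, FRR) pairs produces the ROC curve discussed below.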

By integration (in practice, successive summation) of the probability distribution curves, the FAR and FRR graphs are obtained, which depend on the decision threshold.

## FEATURE LEVEL CLUSTERING OF LARGE BIOMETRIC DATABASES

- *INTRODUCTION*
- *FUZZY C MEANS*
- *Initialization of the partition matrix*
- *Calculation of fuzzy centers*
- *Updating membership and cluster centers*
- *K MEANS ALGORITHM*
- *SIGNATURE BIOMETRICS AS A CASE STUDY*
- *Feature extraction and training*
- *Identification strategy*
- *CONCLUSION*
- *DWT BASED HASH*

As the size of the biometric database grows, reliability and scalability become the bottleneck for achieving low response time and high search-and-retrieval efficiency, in addition to accuracy. The sum of the membership degrees of a particular data point across all clusters is always one. From the available biometric features, it was deduced that each feature set is associated with more than one group and may differ from other data of the same group.

Looking at the image, we can identify two clusters near the two concentrations of data. In the FCM approach, by contrast, a given datum does not belong exclusively to a single well-defined cluster but can sit midway between clusters [9]. In this case, the membership function follows a smoother curve, indicating that each datum can belong to several clusters with different values of the membership coefficient. At this point, we need to recompute the k new centroids as barycenters of the clusters resulting from the previous step.
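The membership coefficients just described can be sketched with the standard fuzzy c-means membership update (a generic FCM formula with fuzzifier m > 1, not the text's specific configuration); each row of the result sums to one, matching the constraint stated earlier:

```python
import numpy as np

def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership matrix, shape (n_points, n_clusters).

    u[k, i] is the degree to which point k belongs to cluster i;
    each row sums to one. m > 1 is the fuzzifier.
    """
    # Pairwise distances between points and cluster centers
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)  # guard against a point sitting exactly on a center
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```

Alternating this membership update with the recomputation of the fuzzy centers as membership-weighted barycenters gives the full FCM iteration outlined in the chapter.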

This produces a separation of the objects into groups, from which the metric to be minimized can be calculated. The features of signature images can be classified into two categories: global and local [9]. Local features, unlike global features, are sensitive to small distortions such as dirt, but are unaffected by other regions of the signature.

Many global features, such as the global baseline, centroid, and black pixel distribution, also have their local counterparts.

## DWT BASED HASH CODED EAR BIOMETRIC SYSTEM

- *INTRODUCTION*
- *EAR BIOMETRICS AS A CASE STUDY*
- *DISCRETE WAVELET TRANSFORM*
- *Examples*
- *IMAGE HASHING*
- *PROPOSED ALGORITHM*
- *Image Decomposition*
- *Generation of Hash Code*
- *Matching*
- *CONCLUSION*
- *FEATURE LEVEL CLUSTERING OF LARGE BIOMETRIC DATA*
- *DWT BASED HASH CODED BIOMETRIC SYSTEM*
- *CONCLUSION*

The locations shown are measured from specially aligned and normalized photographs of the right ear. Assuming a population standard deviation of four units (i.e., 12 mm), the 12 measurements yield a space of fewer than 17 million distinct points. Although simple remedies to increase the size of the space are obvious (e.g., adding more measurements or using a smaller measurement unit), the method is also unsuitable for machine vision because of the difficulty of locating the anatomical point that serves as the origin of the measurement system. The discrete wavelet transform (DWT) is an implementation of the wavelet transform using a discrete set of wavelet scales and translations that obey certain defined rules.

From the figures above (Figure 3.2 and Figure 3.3), one can see that the DWT spectrum obtained using Daubechies-20 wavelets has the smallest number of non-zero terms (or terms significantly above zero). Dyadic grid: the data obtained through the DWT can also be plotted in a 2+1D graph, similar to the result of the continuous wavelet transform. This plot is very simple and reflects the main uncertainties of the data obtained by the wavelet transform.

The image size is reduced by a factor of 8 in each dimension (for a level-3 decomposition). The dimensions of the rectangle are chosen at random anew in each iteration, ensuring that the rectangle sizes vary from one iteration to the next. The transformation matrix is then multiplied by the approximation matrix obtained by decomposing the image in the step above.
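The decompose-then-randomize pipeline can be sketched as follows. This is a simplified random-projection stand-in for the rectangle-based scheme: the seed, code length, and sign-based binarization are assumptions for illustration, not the text's exact construction:

```python
import numpy as np

def dwt_hash(approx, code_len=32, seed=0):
    """Binary hash code from DWT approximation coefficients.

    A seeded random matrix multiplies the flattened approximation band
    (cf. the transformation-matrix step above), and the signs of the
    projections give the hash bits. Deterministic for a fixed seed.
    """
    rng = np.random.default_rng(seed)
    flat = np.asarray(approx, dtype=float).ravel()
    proj = rng.standard_normal((code_len, flat.size)) @ flat
    return (proj >= 0).astype(int)
```

Because the projection matrix is seeded, enrollment and recognition produce comparable codes for the same image, which is what makes hash-based binning of the database possible.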

To measure the performance of the system, the bin-miss rate is obtained by varying the number of bins, as shown in Figure 4.1. At that threshold, the accuracy is about 96.37%, the FAR is 0.17%, and the FRR is 7.07%. The trade-off between FAR and FRR is shown in the ROC curve plot in Figure 4.3. To further strengthen the robustness of the system, it can be tested in multiple modes.
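Matching two such binary codes can be sketched as a Hamming-distance test (the acceptance threshold here is an assumption for illustration; the text does not specify the matcher's exact form):

```python
def hamming_match(code_a, code_b, max_dist=4):
    """Accept a probe if the number of differing bits between the two
    binary hash codes is at most max_dist."""
    dist = sum(a != b for a, b in zip(code_a, code_b))
    return dist <= max_dist
```

Raising max_dist lowers the FRR at the cost of a higher FAR, tracing out the ROC behaviour described above.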