Chapter 7

Multimodal Biometric Data Indexing

Multi-biometric person identification is gaining importance, and the present trend is to process large amounts of biometric data, often in the order of millions of records. The problem in such situations is to deal with high dimensional features from two or more biometric traits, which demands high computation time to identify a query template. In this work, an approach to index a large pool of multi-biometric data is proposed so that the matching process can be accomplished in real time without compromising the accuracy of person identification. The proposed indexing technique is based on relative scores. First, a small set of reference subjects is selected. Then, the subjects are enrolled into (retrieved from) the database using the proposed indexing approach. At the time of enrollment (retrieval), the relative scores against the set of reference subjects are calculated for each trait and combined using an SVM-based score level fusion technique. These scores are used to generate the index keys of a subject. Index spaces are created in the database and the subject identities are stored into them based on the index key values. At the time of querying, a candidate set is created from the query index keys for each biometric trait, and a new rank level fusion technique based on SVM ranking is applied to the retrieved candidate sets. The different steps of the proposed approach are shown in Fig. 7.1: Fig. 7.1(a) gives an overview of the reference subject selection and Fig. 7.1(b) shows the steps of enrolling a subject into the database and retrieving a set of subjects from it.

The rest of the chapter is organized as follows. The feature extraction and score calculation methodologies for the multi-biometric traits are discussed in Sections 7.1 and 7.2, respectively. In Section 7.3, the technique of reference subject selection is described. Section 7.4 briefly describes the score calculation against the selected reference subjects. The score fusion technique is described in Section 7.5. Section 7.6 explains the proposed index key generation method for the multimodal biometric system. The storing and retrieving techniques are discussed in Sections 7.7 and 7.8, respectively. Section 7.9 introduces the proposed rank level fusion technique. The performance of the proposed indexing method is presented in Section 7.10, and Section 7.11 compares it with existing work. Section 7.12 summarizes the chapter.


Figure 7.1: Overview of the proposed approach.

7.1 Feature Extraction

In this work, each subject is assumed to have multiple samples. A sample is characterized by the images of multiple biometric traits; each sample of a subject has a unique identity and all samples of a subject are grouped together. Assume that there are P subjects, each having Q samples, and that three biometric traits are considered for each sample. Figure 7.2 shows the representation of the samples of a subject with multiple biometric traits. In Fig. 7.2, B1, B2 and B3 represent the three biometric traits and B1^{p,q} denotes the qth sample of the pth subject for the biometric trait B1.

The extracted feature set of a subject for all samples and all biometric traits is denoted by Eq. (7.1). F(B1^{p,q}) represents the feature set of the qth sample of the pth subject for the biometric trait B1. The number of feature vectors for a biometric trait depends on the feature extraction method: F(B^{p,q}) may contain a single feature vector or a set of feature vectors. Eq. (7.2) shows the feature vectors of F(B^{p,q}), where B denotes a biometric trait. In Eq. (7.2), KB and LB represent the number of feature vectors extracted from a sample and the number of features in a single feature vector for the biometric trait B, respectively.


Figure 7.2: Representation of multimodal biometric samples.

In this approach, three biometric traits, namely iris, fingerprint and face, are considered (in this case, B1, B2 and B3 are the iris, fingerprint and face traits, respectively). These traits are commonly used in different applications [17, 18] due to their uniqueness, stability, reliability and accessibility. State-of-the-art techniques are used to extract features from them: Daugman's IrisCode-based method [6, 7], Jain's filter bank-based method [16] and Du's SURF-based method [8] are applied to the iris, fingerprint and face traits, respectively. A summary of the extracted features is given in Table 7.1.

7.2 Score Calculation

The similarity between two samples of a biometric trait is measured and represented as a score value. Let F(B^{p,q}) and F(B^{m,n}) be the feature vectors of the qth sample of the pth subject and the nth sample of the mth subject, respectively, for a biometric trait B. The score for the biometric trait B is represented as S^{(p,q),(m,n)}(B) and calculated using Eq. (7.3).

Thus, using Eq. (7.3), the similarity score between two samples belonging to a particular biometric trait can be measured. It may be noted that the SCOREB() function is different for different biometric traits. Existing methods, listed in Table 7.1, are used to calculate the scores: Daugman's Hamming distance method [7], Jain's Euclidean distance method [16] and Du's Euclidean distance method [8] for iris, fingerprint and face, respectively.
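A minimal sketch of how the per-trait SCOREB() functions could look is given below, assuming binary IrisCodes compared by Hamming distance [7] and real-valued fingerprint and face feature vectors compared by Euclidean distance [16, 8]; mapping distances to similarities in (0, 1] is an assumption of this sketch, not a detail taken from the cited methods.

```python
import numpy as np

def score_iris(code_a, code_b):
    """Fractional Hamming similarity of two equal-length binary arrays."""
    return 1.0 - np.count_nonzero(code_a != code_b) / code_a.size

def score_euclidean(feat_a, feat_b):
    """Euclidean-distance similarity, mapped into (0, 1]."""
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b)))

# One SCORE_B function per trait: B1 = iris, B2 = fingerprint, B3 = face.
SCORE = {"B1": score_iris, "B2": score_euclidean, "B3": score_euclidean}
```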

Table 7.1: Feature summary of different biometric traits


7.3 Reference Subject Selection

The proposed multimodal biometric indexing approach is based on relative scores, and to calculate relative scores, the reference subjects need to be decided first. All individuals' data are inserted into the database based on their relative scores with respect to the reference subjects. The reference subjects, each with a single sample, are chosen in such a way that the scores of individuals with respect to them give a high degree of distinctiveness. Let < B^{p,1}, B^{p,2}, . . . , B^{p,Q} > be the Q samples of the pth subject (p = 1, 2, . . . , P), where P is the total number of subjects with the biometric trait B. M subjects, each represented by a single sample, are to be selected as reference subjects from the samples of all P subjects for the biometric trait B, and B^{m,r_m} denotes the selected sample of the mth reference subject. The selection of reference subjects for a biometric trait is done in two steps, described in the following.

7.3.1 Sample Selection

In this step, a distinct sample is chosen for each subject for a biometric trait, namely the sample that gives the highest variance in scores compared to the other samples of that subject for that trait. Let B^{p,r}, the rth sample of the pth subject for a biometric trait B, give the maximum variance among all samples of the pth subject for B. To select B^{p,r}, first, the scores of the ith sample B^{p,i} of the pth subject against all other samples B^{p,j} (j = 1, 2, ..., Q; j ≠ i) of the pth subject are calculated for the biometric trait B, and the variance vp,i(B) of these scores is computed for the ith sample. The scores of the ith sample against all other samples of the pth subject are represented in Eq. (7.4) and the score variance of the ith sample of the pth subject is calculated in Eq. (7.5).

In Eq. (7.4), Sp,i(B) denotes the score vector of length Q − 1 and s^{p}_{i,j}(B) denotes the score between the ith and jth samples of the pth subject for B. In Eq. (7.5), µp,i(B) represents the mean of all scores of the ith sample of the pth subject for the biometric trait B and is calculated using Eq. (7.6).

The variances of all samples B^{p,i} (i = 1, 2, ..., Q) of the pth subject are calculated using Eqs. (7.4), (7.5) and (7.6), and a variance vector vp(B) is created for the pth subject (see Eq. (7.7)).

The maximum variance in vp(B) is determined and the corresponding sample is selected as the distinct sample of the pth subject for the biometric trait B. Let vp,r(B) be the maximum variance in vp(B); the rth sample is then selected as the distinct sample of the pth subject for B and is denoted B^{p,r_p}. In this way, the distinct samples B^{1,r_1}, B^{2,r_2}, ..., B^{P,r_P} are selected for all P subjects of the biometric trait B.
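The sample-selection step can be sketched as follows; this is a minimal illustration of Eqs. (7.4)-(7.7), where `score` stands for any SCOREB() function and the helper name is hypothetical.

```python
import numpy as np

def select_distinct_sample(samples, score):
    """Return the index r of the sample with maximum score variance.

    `samples` holds the Q feature sets of one subject for one trait.
    """
    variances = []
    for i, s_i in enumerate(samples):
        # Scores of sample i against every other sample j (Eq. (7.4))
        scores = [score(s_i, s_j) for j, s_j in enumerate(samples) if j != i]
        variances.append(np.var(scores))   # v_{p,i}(B) of Eq. (7.5)
    return int(np.argmax(variances))       # the distinct sample B^{p,r}
```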

7.3.2 Subject Selection

Now, every subject has one distinct sample B^{p,r_p} for a biometric trait. In this step, M of these subjects are selected as reference subjects, namely those which yield the highest diversity in scores with respect to the other subjects. In other words, the subjects whose distinct samples give the top M score variances among all P subjects are selected as reference subjects. It may be noted that a reference subject contributes a single biometric template for each trait. To choose the reference subjects, the same maximum-variance strategy is followed as in the sample selection; the main difference is that the variation is computed among all subjects rather than among the samples of one subject. First, a score vector Sp,r(B) is calculated for the pth subject against all other subjects for a biometric trait B (see Eq. (7.8)) and the score variance of the pth subject is computed in Eq. (7.9).

In Eq. (7.8), r_p and r_q are the selected samples of the pth and qth subjects, respectively, and S^{(p,r_p),(q,r_q)}(B) is the score between the selected samples of the pth and qth subjects for the biometric trait B. The length of the score vector Sp,r(B) is P − 1. In Eq. (7.9), µp(B) is the mean of all scores between the pth subject and all other subjects for the biometric trait B; its value is calculated using Eq. (7.10).

A variance vector over all subjects is calculated for the biometric trait B using Eq. (7.9) and is denoted v(B) (see Eq. (7.11)).

The top M variances are found in v(B) and the M subjects corresponding to them are selected as the reference subjects for the biometric trait. The selected subjects are represented as R1(B), R2(B), . . . , RM(B), where Rm(B) corresponds to the mth largest variance (m = 1, 2, . . . , M). Each reference subject contains a single template for each biometric trait. In this way, reference subjects are selected for all other biometric traits. The extracted features of the three biometric traits (B1, B2 and B3) of the reference subjects are defined in Eq. (7.12), and the feature vector of the mth (m = 1, 2, . . . , M) reference subject for a biometric trait B is represented in Eq. (7.13). In Eq. (7.13), KB and LB denote the number of feature vectors extracted from the biometric trait B and the length of a feature vector, respectively.
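Reference-subject selection follows the same pattern at the subject level; a sketch of Eqs. (7.8)-(7.11) under the same assumptions as above:

```python
import numpy as np

def select_reference_subjects(distinct_samples, score, M):
    """Return the indices of the M subjects with the largest variances.

    `distinct_samples[p]` is the distinct sample of subject p for one trait.
    """
    P = len(distinct_samples)
    v = []
    for p in range(P):
        scores = [score(distinct_samples[p], distinct_samples[q])
                  for q in range(P) if q != p]      # Eq. (7.8)
        v.append(np.var(scores))                     # Eq. (7.9)
    return list(np.argsort(v)[::-1][:M])             # top-M variances
```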

7.4 Reference Score Calculation

Scores of all samples against all reference subjects are calculated using Eq. (7.3). The match score of the qth sample (q = 1, 2, ..., Q) of the pth subject (p = 1, 2, ..., P) against the mth reference subject (m = 1, 2, . . . , M) for a biometric trait B is denoted S_m^{p,q}(B) (see Eq. (7.14)).

The scores of the qth sample of the pth subject with respect to all reference subjects are given in Eq. (7.15). In Eq. (7.15), S_m^{p,q} denotes a tuple containing the scores of the qth sample of the pth subject against the mth reference subject for all biometric traits. Scores are calculated for all p (p = 1, 2, . . . , P) and all q (q = 1, 2, . . . , Q), where P and Q are the number of subjects and the number of samples per subject, respectively.

7.5 Score Level Fusion

After the scores are calculated for all samples of each biometric trait, the next task is to combine the scores of all biometric traits for a sample. Note that the scores of different biometric traits are on different scales; hence, the scores of each biometric trait are normalized before they are combined. The scores of the three biometric traits are combined using an SVM classifier [19, 22]. In the following, the score normalization technique and then the score fusion (combining) technique are discussed.

7.5.1 Score Normalization

Several score normalization techniques (min-max, z-score, tanh, sigmoid, reduction of high-scores effect (RHE), etc.) exist in the literature [15, 33, 12, 31]. Table 7.2 shows the effectiveness of the different score normalization methods with respect to robustness, scalability, efficiency and ease of use. From Table 7.2, it can be seen that the RHE method [12] is more robust and efficient than the other normalization techniques; hence, RHE normalization is applied in this work. All scores of each biometric trait are normalized using

Table 7.2: Characteristics of the different score normalization methods


Eq. (7.16).

In Eq. (7.16), Ŝ_m^{p,q}(B) is the normalized score of S_m^{p,q}(B) for the biometric trait B, S(B) denotes the distribution of all scores, and S_g(B) denotes the genuine score distribution [12] for the biometric trait B.

7.5.2 Score Fusion

The normalized scores are used to calculate the multimodal score of a sample. Several score fusion methods exist (maximum rule, minimum rule, sum rule, product rule, weighted sum rule, likelihood ratio, SVM, etc.) [15, 12, 24, 30, 31, 29]. The characteristics of the different fusion methods are shown in Table 7.3, from which it can be seen that SVM-based score fusion is the most efficient with respect to parameters like accuracy, robustness and scalability. Hence, an SVM classifier is preferable for classifying genuine and imposter subjects from the normalized scores [25, 13, 9, 26, 12, 32]. In this work, an SVM classifier [19, 22] is used to calculate the multimodal score because SVM-based classification performs well for biometric data classification [9, 12, 32].

Let x_m^{p,q} = ⟨Ŝ_m^{p,q}(B1), Ŝ_m^{p,q}(B2), Ŝ_m^{p,q}(B3)⟩ be the normalized score vector of the qth sample of the pth subject with respect to the mth reference subject. Then, the SVM classification function for x_m^{p,q} is represented using the kernel trick [19, 34] (see Eq. (7.17)).

In Eq. (7.17), Ns is the number of support vectors, Si denotes the ith support vector, yi is the class label associated with Si, K(·, ·) represents a non-linear kernel function and αi represents the Lagrange multiplier associated with Si. The sign of f(x_m^{p,q}) represents the class of x_m^{p,q} and the absolute value of f(x_m^{p,q}) indicates the confidence of x_m^{p,q} being in that class. In this approach, the confidence value |f(x_m^{p,q})| is taken as the combined score. The combination of all biometric traits is considered a multimodal biometric trait, denoted B4; the multimodal score of the qth sample of the pth subject with respect to the mth reference subject is thus S_m^{p,q}(B4) = |f(x_m^{p,q})|.
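A minimal sketch of this fusion step is given below, using scikit-learn's SVC as a stand-in for the SVMlight tool used later in the chapter; the training pairs, kernel choice and score values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Each row is a normalized score vector <S(B1), S(B2), S(B3)> for one
# (sample, reference-subject) pair; label 1 = genuine, -1 = imposter.
X_train = np.array([[0.91, 0.87, 0.80],
                    [0.15, 0.22, 0.30],
                    [0.88, 0.79, 0.85],
                    [0.10, 0.35, 0.25]])
y_train = np.array([1, -1, 1, -1])

svm = SVC(kernel="rbf").fit(X_train, y_train)

def multimodal_score(score_vector):
    """Combined score S(B4): absolute SVM decision value.

    The decision value is the signed distance from the separating
    hyperplane; its magnitude serves as the fused confidence.
    """
    return abs(svm.decision_function([score_vector])[0])

print(multimodal_score([0.85, 0.80, 0.78]))
```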

Table 7.3: Characteristics of the different score fusion methods


To determine the support vectors (Si), the class (yi) of each support vector and the Lagrange multipliers (αi) associated with them, the SVM classifier is trained with a set of training samples. A total of 280,134 training data, created from 100 users of the Gallery set, are used to train the SVM classifier. A detailed description of the training procedure is given in Section 7.10.3.

Note that the multimodal scores S_m^{p,q}(B4) of all samples (q = 1, 2, . . . , Q) of all subjects (p = 1, 2, . . . , P) against all reference subjects (m = 1, 2, . . . , M), calculated as mentioned above (see Eq. (7.17)), are on a different scale than the unimodal scores. Hence, the multimodal scores are also normalized using the RHE technique (see Eq. (7.16)); the normalized multimodal score of the qth sample of the pth subject against the mth reference subject is denoted Ŝ_m^{p,q}(B4).

Now, there are four normalized score values for a sample with respect to a reference subject. The four normalized scores of the qth sample of the pth subject against the mth reference subject are denoted by the vector Ŝ_m^{p,q} = ⟨Ŝ_m^{p,q}(B1), Ŝ_m^{p,q}(B2), Ŝ_m^{p,q}(B3), Ŝ_m^{p,q}(B4)⟩. The normalized score vectors of all samples of all subjects against all reference subjects are generated and represented in Eq. (7.18), where p = 1, 2, . . . , P and q = 1, 2, . . . , Q. These score vectors are used to generate the index keys of a sample; the index key generation technique is discussed in the next section.

7.6 Index Key Generation

To generate the index keys of a sample of a subject, the normalized score values obtained above are used. Note that there are four normalized scores with respect to each reference subject and there are M reference subjects; hence, M index keys are created, each of four dimensions corresponding to the four normalized scores. Each dimension is called a key feature of an index key. To compute an index key feature, the distribution of the scores of a biometric trait with respect to a reference subject over all samples of all subjects is used. Let sm(B) denote the vector containing the normalized scores of all samples of all subjects against the mth reference subject for the biometric trait B; sm(B) is represented in Eq. (7.19).

The distribution of the score vector sm(B) may not be uniform, but the target is to generate uniformly distributed index keys, which helps to store the identities of subjects into the database in a well-distributed manner. Therefore, a uniformly distributed score vector is constructed prior to index key generation by applying the histogram equalization technique [10] to the score values of sm(B). The equalized score f_m^{p,q}(B) for a normalized score Ŝ_m^{p,q}(B) of the qth sample of the pth subject against the mth reference subject for the biometric trait B is calculated in Eq. (7.20). In Eq. (7.20), T(·) is a transfer function from the input normalized score to the equalized score, p_{sm(B)} denotes the probability density function (PDF) of the scores against the mth reference subject for the biometric trait B, min sm(B) represents the minimum value in the vector sm(B) and r is the dummy variable of integration.

The equalized scores of Eq. (7.20) are used as the key features of an index key. The index key of the qth sample of the pth subject for the mth reference subject is represented as indx_m^{p,q} = ⟨f_m^{p,q}(B1), f_m^{p,q}(B2), f_m^{p,q}(B3), f_m^{p,q}(B4)⟩. M index keys are generated for a sample of a subject; all M index keys of the qth sample of the pth subject are shown in Eq. (7.21).
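In discrete form, the transfer function T(·) of Eq. (7.20) amounts to mapping each score through the empirical cumulative distribution of the stored scores; the sketch below assumes this rank-based approximation of histogram equalization.

```python
import numpy as np

def equalize(scores):
    """Map each score in s_m(B) to its empirical-CDF value in (0, 1]."""
    ranks = np.argsort(np.argsort(scores))   # 0..len-1, ties broken by order
    return (ranks + 1) / len(scores)

s_m = np.array([0.12, 0.80, 0.45, 0.44, 0.91, 0.30])
print(equalize(s_m))   # roughly uniformly spread key feature values
```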

7.7 Storing

The index keys of all samples of all subjects are generated according to the proposed index key generation method. To store the identities of subjects based on the index keys, first index spaces are created in the database and then the identities are stored into them. The index space creation and data storing methods are discussed in the following.

7.7.1 Index Space Creation

An index space is created in the database corresponding to each reference subject; if there are M reference subjects, M index spaces are created. Figure 7.3 shows an overview of the mth index space, corresponding to the mth reference subject. Each index space contains one table for each biometric trait. There are four biometric traits B1, B2, B3 and B4, where B4 represents the multimodal trait in this approach; hence, four tables are created in each index space. In Fig. 7.3, the table for a biometric trait B in the mth index space is denoted T_m(B) and the length of the table is denoted LB. The length of each table depends on the number of enrollments into the database and is decided experimentally (see Section 7.10.5.2). The cells of the table T_m(B) are indexed 1, 2, . . . , LB. Each cell of the table contains a list, called IDL, which stores a set of subject identities (see Fig. 7.3).


Figure 7.3: Index space for the mth reference subject.
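The layout of the index spaces can be sketched with nested containers; the sizes below are illustrative (a table length of 20% of the enrollments is chosen in Section 7.10.5.2).

```python
M = 10                        # number of reference subjects
TRAITS = ["B1", "B2", "B3", "B4"]
L_B = 400                     # table length L_B (illustrative)

# index_spaces[m][B][t] is the identity list IDL in cell t+1 of table T_m(B)
index_spaces = [
    {B: [[] for _ in range(L_B)] for B in TRAITS}
    for _ in range(M)
]
```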

7.7.2 Storing Multimodal Biometric Data

The identity of each sample is stored into the database based on the generated index keys, as follows. For the key feature value f_m^{p,q}(B) of the index key indx_m^{p,q}, the mth index space and the table T_m(B) are selected. The unique identity of the qth sample of the pth subject is represented as Id^p_q, where p and q are the subject number and the sample number, respectively. The identity Id^p_q is stored into the identity list IDL of a cell of the table T_m(B). The cell position in the table is calculated from the index key value: the cell position tB in T_m(B) for the key value f_m^{p,q}(B) is found using Eq. (7.22) and Eq. (7.23). In Eq. (7.22) and Eq. (7.23), LB denotes the length of the table, and min f_m(B) and max f_m(B) represent the minimum and maximum of the key values of all samples of all subjects against the mth reference subject for the biometric trait B, respectively; they are calculated using Eq. (7.24).


Note that if there are P subjects with Q samples each, M reference subjects and four biometric traits (three unimodal plus the multimodal trait B4), then there are M index spaces and each index space has four tables; N = P × Q identities are stored in each table.
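A sketch of the cell-position computation is given below; Eqs. (7.22)-(7.24) are not reproduced in this text, so a linear mapping of the key value onto the LB cells is assumed, consistent with the worked example in Fig. 7.4.

```python
import math

def cell_position(f, f_min, f_max, L_B):
    """Map a key feature f in [f_min, f_max] to a cell index in 1..L_B."""
    if f >= f_max:
        return L_B
    return math.floor((f - f_min) / (f_max - f_min) * L_B) + 1

# Illustrative key value in the Fig. 7.4 setting (range 0..0.8, L_B = 5)
print(cell_position(0.55, 0.0, 0.8, 5))   # -> 4
```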

Illustration: The storing technique is illustrated with an example. Let there be five subjects (P = 5), each with four samples (Q = 4), to be enrolled into the database, and assume there are three reference subjects (R1, R2 and R3). The four-dimensional index keys generated for the third sample (q = 3) of the fifth subject (p = 5) with respect to the three reference subjects are shown in Fig. 7.4(a). Three index spaces are created in the database, one per reference subject, and each index space contains four tables. Figure 7.4(b) shows all tables of the first index space. Assume that the length of each table is 5, and that the ranges of key values for B1, B2, B3 and B4 with respect to the first reference subject are 0 to 0.8, 0 to 0.7, 0 to 0.5 and 0 to 1, respectively. It is now shown how the identity of a subject is stored for the first index key (highlighted row in Fig. 7.4(a)). The first index space is selected to store the identity Id^5_3 corresponding to the first index key indx_1^{5,3}. The identity is stored into all tables T_1(B1), T_1(B2), T_1(B3) and T_1(B4) of the first index space. The cell positions tB1, tB2, tB3 and tB4 in these tables for the key features of indx_1^{5,3} are calculated using Eq. (7.22); the calculated positions for the four tables are 4, 3, 2 and 4, respectively. The sample identity Id^5_3 is therefore stored into the identity list (IDL) at the 4th, 3rd, 2nd and 4th cell positions of T_1(B1), T_1(B2), T_1(B3) and T_1(B4), respectively. The highlighted cells in Fig. 7.4(b) show the positions for storing the identity Id^5_3.


Figure 7.4: Example of storing the 3rd sample of the 5th subject into the 1st index space of the database.

7.8 Retrieving

Once all samples of all individuals are enrolled into the database, the database can be searched for matches. In retrieving, the samples most similar to a query sample are found in the database. The index keys of the query sample are generated as discussed in Section 7.6 and are represented in Eq. (7.25), where indx_m denotes the query index key corresponding to the mth reference subject, f_m(B) denotes the key feature value corresponding to the biometric trait B and M represents the number of reference subjects.

Now, a set of subject identities is retrieved for each query index key. For this purpose, a candidate set is maintained for each biometric trait; note that there are four biometric traits and each trait is related to one dimension of an index key. Let the candidate sets be CSET(B1), CSET(B2), CSET(B3) and CSET(B4) for the biometric traits B1, B2, B3 and B4, respectively. Each candidate set consists of two fields: id and vote. The retrieved subject identities are stored into the id field and the numbers of occurrences of the subject identities are stored into the vote field. To retrieve the subject identities for the mth query index key indx_m, the mth index space is searched and the list of subject identities IDL is retrieved from a particular cell of each table of the mth index space. For the key feature value f_m(B) of indx_m, the IDL of the tB-th cell of T_m(B) is retrieved, where the cell position tB is calculated by Eq. (7.22) and Eq. (7.23). The retrieved identities for f_m(B) are stored into the id field of the candidate set CSET(B), and the number of occurrences of each subject identity is counted and stored in its vote field. Note that a subject may have more than one sample in the database; the superscript and subscript of an identity refer to the subject and sample identifiers, respectively, and the count is done on the subject identifier. For example, if the identities Id^5_1, Id^5_2 and Id^3_4 (two samples of subject 5 and one sample of subject 3) are retrieved for a biometric trait B, then Id5 and Id3 are stored into the id field of CSET(B) with votes 2 and 1, respectively.

The key values calculated for a query against a reference subject may differ from the key values of the matching enrolled subjects, which may decrease the accuracy of the proposed system. To address this limitation, the ±δ neighboring cell positions of the cell tB are also considered in a table: the identity lists (IDL) are retrieved from the cell positions tB − δ to tB + δ. The value of δ is decided experimentally (see Section 7.10.5.3).

 

In this way, all candidate sets CSET(B1), CSET(B2), CSET(B3) and CSET(B4) are generated for all biometric traits B1, B2, B3 and B4. These candidate sets are used for rank level fusion to rank the retrieved identities.
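The retrieval of one candidate set can be sketched as follows, building on the `index_spaces` and `cell_position` sketches above; it assumes each IDL entry is a (subject, sample) identifier pair and, for brevity, a single key range per trait. Votes are counted per subject identifier, as described.

```python
from collections import Counter

def retrieve_candidates(index_spaces, query_keys, B, key_range, delta, L_B):
    """Build CSET(B) as a Counter mapping subject id -> vote.

    `query_keys[m]` is the key feature f_m(B) of the m-th query index key.
    """
    f_min, f_max = key_range
    cset = Counter()
    for m, f in enumerate(query_keys):
        t = cell_position(f, f_min, f_max, L_B)
        # Also visit the +/- delta neighbouring cells of t
        for cell in range(max(1, t - delta), min(L_B, t + delta) + 1):
            for subject_id, sample_id in index_spaces[m][B][cell - 1]:
                cset[subject_id] += 1
    return cset
```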

 

Illustration: The retrieval of candidate sets from the database for a given query subject is illustrated with an example. In this example, 3 reference subjects are used to generate the index keys. Assume that the database stores the identities of 5 subjects, each with four samples; the database thus consists of three index spaces. Figure 7.5(a) shows the stored identities in the tables T_1(B1), T_2(B1) and T_3(B1) of the 1st, 2nd and 3rd index spaces, respectively, for the biometric trait B1. In each table, the identities of all subjects are stored. The index keys of a query subject are shown in Fig. 7.5(b). The candidate set CSET(B1) is generated for B1 as follows. The f(B1) key values of all index keys (see Fig. 7.5(b)) are considered and the respective cell positions are found in the three tables using Eq. (7.22). The positions in T_1(B1), T_2(B1) and T_3(B1) for the key values f(B1) = 0.417, 0.465 and 0.378 are 3, 3 and 2, respectively. The identities are retrieved from the 3rd, 3rd and 2nd cells of T_1(B1), T_2(B1) and T_3(B1), respectively, and the number of occurrences of each subject is counted. From Fig. 7.5(a), it can be seen that Id1, Id4 and Id5 occur 10, 4 and 7 times, respectively. Therefore, Id1, Id4 and Id5 in CSET(B1) receive 10, 4 and 7 votes, respectively (see Fig. 7.5(c)). Similarly, CSET(B2), CSET(B3) and CSET(B4) can be generated for the biometric traits B2, B3 and B4, respectively.


Figure 7.5: Example of retrieving candidate sets for a given query.

7.9 Rank Level Fusion

For a query subject, a candidate set is retrieved corresponding to each trait. A candidate set for a biometric trait contains a set of retrieved subjects and a vote for each of them; therefore, each retrieved subject has a different order of similarity based on the different biometric traits. To decide the order of similarity of the retrieved subjects with respect to the query, rank level fusion is performed over all candidate sets. This is done in two steps, described in the following.

7.9.1 Creating Feature Vector for Ranking

In Section 7.8, four candidate sets (CSET(B1), CSET(B2), CSET(B3) and CSET(B4)) have been retrieved. From these candidate sets, a set of feature vectors is created for ranking. Consider a total of N subject identities retrieved over all four candidate sets for a query. Each subject identity has its own vote in each candidate set; the votes of the retrieved subjects in each candidate set are represented as shown in Table 7.4, where Idi denotes the identity of the ith retrieved subject and vi(B) denotes the vote of that subject identity in the candidate set CSET(B). For each retrieved subject, a feature vector is created from the votes of that subject in the candidate sets. The feature vector of the subject identity Idi is represented in Eq. (7.26).

vi = ⟨ vi(B1), vi(B2), vi(B3), vi(B4) ⟩        (7.26)
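A sketch of this step: one four-dimensional vote vector per retrieved identity, with a vote of 0 when an identity is absent from a candidate set (`csets` maps a trait name to the Counter built by the retrieval sketch above).

```python
def ranking_features(csets, traits=("B1", "B2", "B3", "B4")):
    """Return a dict mapping each retrieved identity to its vote vector."""
    ids = sorted(set().union(*(csets[B].keys() for B in traits)))
    return {i: [csets[B].get(i, 0) for B in traits] for i in ids}
```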

7.9.2 SVM Ranking

In this step, the feature vectors are ranked using the SVM ranking method [20, 34]. Unlike the SVM classification function, which outputs a distinct class for a feature vector, the ranking function gives an ordering of feature vectors: it outputs a score for each feature vector, from which a global ordering is constructed [20, 34]. Let the feature vector vi be preferred to the feature vector vj, written vi ≻ vj. The objective is to find a global function F(·) which outputs scores such that F(vi) > F(vj) for any vi ≻ vj. The global ranking function F(·) on a feature vector vi can be computed using an SVM [20, 34] and is represented in Eq. (7.27). In Eq. (7.27), K(·,·) is a kernel function, Ns is the number of support vectors, (Vk − Vl) is the pairwise difference support vector and αk,l is the Lagrange multiplier associated with (Vk − Vl).

Table 7.4: Retrieved subjects and their votes in each candidate set

Identity   CSET(B1)   CSET(B2)   CSET(B3)   CSET(B4)
Id1        v1(B1)     v1(B2)     v1(B3)     v1(B4)
Id2        v2(B1)     v2(B2)     v2(B3)     v2(B4)
...        ...        ...        ...        ...
IdN        vN(B1)     vN(B2)     vN(B3)     vN(B4)

The pairwise difference support vectors (Vk − Vl) and the Lagrange multipliers αk,l of the global ranking function F(·) are computed from a set of labeled training data; the detailed method of training the SVM for ranking can be found in [20, 22]. To generate the training data, T queries are randomly selected from the Gallery set and a set of feature vectors is created for each query using the method discussed in Section 7.9.1. Let Rt be the training data generated from the tth query (t = 1, 2, . . . , T), represented in Eq. (7.28).

In Eq. (7.28), v_i^t denotes the ith feature vector for the tth query and r_i^t the rank of v_i^t. Note that the training data Rt follows a strict ordering [20], that is, the feature vectors of Rt are strictly ordered by rank (r_1^t < r_2^t < . . .). The ranks of the feature vectors in the training data may be assigned manually or by an automatic ranking method [20, 22]. In this approach, a rank is assigned to each feature vector of the training data by an automatic method, which is discussed in the following.

The ranking method for the training data is based on weighted feature vectors. Let w_i^t be the weighted value of the ith feature vector v_i^t of the tth training set. The weighted value w_i^t is computed by Eq. (7.29).

In Eq. (7.29), ω_i^t represents the weight of the ith feature vector v_i^t of the tth training set. The weight is assigned to each feature vector of a training set in such a way that if the identity related to the feature vector is present in all candidate sets, then it gets a higher preference than the other feature vectors. The weight ω_i^t is computed using Eq. (7.30).

The weighted values of all feature vectors of a training set are calculated using Eq. (7.29) and the feature vectors are ranked on these values: rank one is given to the feature vector with the highest weighted value, rank two to the feature vector with the second highest weighted value, and so on. In this way, a rank is assigned to each feature vector of all training sets.
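Since Eqs. (7.29) and (7.30) are not reproduced in this text, the sketch below assumes the weight is the number of candidate sets containing the identity and the weighted value is that weight times the total vote count; the real equations may differ.

```python
def training_ranks(features):
    """features: identity -> 4-dim vote vector. Returns identity -> rank."""
    def weighted_value(votes):
        weight = sum(1 for v in votes if v > 0)   # presence count (assumed)
        return weight * sum(votes)                # assumed form of Eq. (7.29)
    order = sorted(features, key=lambda i: weighted_value(features[i]),
                   reverse=True)
    return {ident: rank for rank, ident in enumerate(order, start=1)}
```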

 

Illustration: The SVM-based ranking of the retrieved subjects is illustrated with an example. Figure 7.6(a) shows the retrieved candidate sets for a query; there are 5 unique subjects across all candidate sets. A feature vector is generated for each subject identity. For the subject identity Id1, there are 10, 6, 3 and 5 votes in CSET(B1), CSET(B2), CSET(B3) and CSET(B4), respectively; hence, the feature vector v1 corresponding to Id1 is ⟨10, 6, 3, 5⟩. The feature vectors of all retrieved subjects are shown in Figure 7.6(b), and the value of the SVM ranking function computed for each feature vector is shown in Figure 7.6(c). In Figure 7.6(c), it can be seen that the feature vector v1 related to Id1 gives the maximum value of the SVM ranking function; rank 1 is therefore assigned to the subject identity Id1.


Figure 7.6: An example of SVM-based rank to each retrieved subject for a query.

7.10 Performance Evaluation

To study the efficacy of the proposed multimodal biometric data indexing approach, a number of experiments have been conducted on a virtual multimodal database and the performance is measured with the metrics defined in Section 4.7.1. This section starts with a description of the database; then the experiments carried out and their results are presented.

7.10.1 Database

There are few publicly available multimodal databases with iris, fingerprint and face biometric traits. Among them, WVU [5, 4] and BioSecure [1] are two multimodal databases which contain all three traits. However, to protect the privacy of its users, the authority of the WVU database does not release the face data, so the WVU face data are not available to the research community. The BioSecure database [1], on the other hand, contains a small number of users and is not freely available to the research community either. As a way out, a set of virtual users is created from three publicly available large unimodal databases and the experiments are performed on this virtual user dataset. The CASIAV3I [2], WVU [5, 4] and FRGC Still version 2.0 [27, 28, 3] unimodal databases are used for the iris, fingerprint and face biometric traits, respectively. These databases are described in the following.

 

Iris Database: The CASIAV3I database [2] contains 2639 eye images from 395 eyes of 249 persons; one to twenty-six images are captured from each eye. It may be noted that the iris patterns of the left and right eyes of a person are different; hence, each eye is treated as a unique subject. The summary of the CASIAV3I iris database is given in Table 7.5, and Fig. 7.7(a) shows two sample eye images from it. Subjects with at least two samples are considered, so that at least one sample can be used for enrollment and one for probing. There are 372 subjects with at least two samples each; these subjects are used to create the virtual users in the experiments.

Fingerprint Database: The WVU fingerprint database [5, 4] contains 7136 images of 270 persons, captured from 4 different fingers (left index, left thumb, right index, right thumb) with three to twenty images per finger. As each finger of a person is distinct, each finger is considered a unique subject; hence, there are 1080 unique subjects in the WVU fingerprint database. Features cannot be extracted from the fingerprint images of 320 subjects, so 750 subjects are used to create the virtual users in this experiment. The summary of the WVU fingerprint database is given in Table 7.5, and two sample fingerprint images selected randomly from the database are shown in Fig. 7.7(b).

Table 7.5: Summary of biometric databases and virtual users


Face Database: The FRGC Still version 2.0 database [27, 28, 3] contains 16028 frontal face images of 466 subjects. The images of each subject are captured in an indoor environment with two different protocols (FERET and Mugshot) and two different facial expressions (Neutral and Smiley) [27]. In this experiment, a subset of 4007 face images of 466 persons with neutral expression captured under the FERET protocol (FERET-Neutral) is used. These images were captured in the Fall 2003 and Spring 2004 semesters of the 2003-2004 academic year. A brief summary of the FRGC database is given in Table 7.5. All these images are used for virtual user creation. Figure 7.7(c) shows face images of two different subjects.

 

Virtual Users: A sample (instance) of a virtual user consists of three images, one from each biometric trait, namely iris, fingerprint and face. The virtual users are created from the above-mentioned databases. Only subjects with at least two samples are considered, because at least one sample is needed for enrollment and one for probing; there are 372, 1080 and 466 such subjects in the iris, fingerprint and face databases, respectively. From these three databases, 372 virtual users (subjects) with three biometric traits can be created. To create a virtual user, first, all samples of a randomly selected subject are taken from the database of each biometric trait; then, a sample image from each biometric trait of the subject is taken and used as a sample image of the virtual user. It may be noted that each subject has some uniqueness with respect to each biometric trait; hence, subjects or samples already assigned to a virtual user are not reused for another virtual user. This helps to create virtual users with unique multi-biometric traits. In this procedure, 372 virtual users are created and each virtual user has 2 to 20 samples, resulting in a total of 2625 samples. The summary of the virtual users is given in Table 7.5.


Figure 7.7: Sample images of CASIAV3I iris, WVU fingerprint and FRGC face databases.

7.10.2 Evaluation Setup

To evaluate the performance of the proposed approach, all samples of all virtual users are divided into two sets: Gallery and Probe. The samples in the Gallery set are enrolled into the index database and the samples in the Probe set are used as queries to search it. 80% of the samples of each subject are used to create the Gallery set and the remaining 20% the Probe set; the Gallery samples of each subject are selected randomly.

The experiments have been conducted on an Intel Core2Duo processor (2.00 GHz) with 2.0 GB of memory. The programs are compiled with GCC 4.3.

7.10.3 Training of SVM-based Score Fusion Module

The SVM-based score fusion technique combines the scores of the different traits, and the SVM needs to be trained before it is used for score fusion. The model [22, 21, 14] is trained with known training samples. Each training sample contains the scores of all traits and a class label: genuine if the scores are calculated from samples of the same subject and imposter if the samples are from different subjects. To create the training samples, first, 100 subjects with 2 to 10 samples per subject are selected from the Gallery set. From these samples, 2,498 genuine and 277,636 imposter pairs are created. Then, the scores between each pair of samples are calculated for each biometric trait using Eq. (7.3) given in Section 7.2 and normalized using the RHE score normalization method described in Section 7.5.1. Thus, a total of 280,134 training data are created, each labeled genuine or imposter according to its pair. The SVM is trained with these data and 5-fold cross validation [22, 14] is performed to measure the performance on the training data; a cross validation accuracy of 96.75% is observed, indicating that the SVM training achieves good classification accuracy. The SVMlight tool [22, 21] is used for training and testing the SVM model.

7.10.4 Training of SVM-based Ranking Module

To train the SVM for rank level fusion, an enrolled database, a set of query samples and the retrieved candidate sets for each query with their rankings are required. For this purpose, one sample of each of 100 subjects is randomly selected from the Gallery set as a query, and the remaining samples of the Gallery set are enrolled into the database. For each query sample, four candidate sets are retrieved corresponding to the four biometric traits using the proposed retrieving technique (see Section 7.8). To create the training data for SVM ranking, a rank has to be assigned to each candidate in the retrieved candidate sets. To do this, a feature vector is created for each retrieved candidate using the method described in Section 7.9.1, and an initial rank is given to each feature vector using Eq. (7.29) as described in Section 7.9.2. In this way, 100 training sets are created from the 100 query samples. 5-fold cross validation [14, 23] with these training data yields 92.46% cross validation accuracy. The SVMRank tool [23, 20] is used to train and test the SVM in the ranking module.

7.10.5 Validation of the Parameter Values

Three parameters are used in the proposed indexing approach. The first is the number of reference subjects (M), used in index key generation. The second is the size of a table (LB) in an index space, used when storing the biometric data. The third is the number of neighboring cells (δ) of a table, used during retrieval. The values of these three parameters are validated experimentally, as described in the following.

7.10.5.1 Number of Reference Subjects (M)

In this approach, the index keys are generated with respect to a set of reference subjects, and the number of index keys equals the number of reference subjects. If more index keys are generated from more reference subjects, a better HR can be achieved; however, the PR also increases. The number of reference subjects should therefore be chosen such that it gives a good HR at a low PR. To do this, experiments are performed with different numbers of reference subjects (M = 5, 10, 15, 20 and 25). To avoid bias toward the reference subjects, their samples are removed from the Gallery and Probe sets before the HR and PR are measured. The results are shown in Fig. 7.8. It is observed that the HR remains almost the same when the number of reference subjects exceeds 10, whereas the PR keeps increasing beyond 10 reference subjects. Hence, 10 reference subjects are used in this approach.

7.10.5.2 Size of Table (LB)

The HR and PR of the proposed indexing technique depend on the number of entries in a cell of an index table. If a cell holds more entries, the chance of finding a query subject within the cell increases, so the HR increases, and vice versa; at the same time, the PR also increases because more samples in the cell have to be retrieved at query time. The number of entries per cell depends on the number of cells of a table, referred to as the table size: the number of entries in a cell decreases as the table size increases, and vice versa. Hence, the HR and PR are measured for different numbers of enrolled samples and different table sizes. In this experiment, the table size is taken as a percentage of the total number of enrolled samples, from 10% to 100% in steps of 10%, and the experiments are performed with 500, 1000, 1500 and 2000 enrolled samples; the results are presented in Fig. 7.9. From Fig. 7.9(a), it is observed that the HR decreases rapidly when the table size exceeds 30% of the total enrolled samples, for all numbers of enrollments. From Fig. 7.9(b), it can be seen that the PR decreases rapidly while the table size is less than 20% of the total enrolled samples, for all numbers of enrollments. Hence, the table size is chosen as 20% of the total number of enrolled samples.


Figure 7.8: HR and PR with different number of reference subjects.

7.10.5.3 Number of Neighbor Cells (δ)

For an index key, a set of stored identities is retrieved from a cell and its neighboring cells of a table. If more cells are considered, the HR increases, but so does the PR; hence, the number of neighboring cells (δ) must be chosen carefully. For this purpose, an experiment is conducted with different values of δ, chosen from 1 to 10, and the HR and PR are measured at each value. The results are presented in Fig. 7.10, from which it can be seen that increasing δ from 1 to 5 raises the HR from 95.81% to 99.55% with a 2.57% increase in PR, whereas increasing δ from 5 to 10 raises the HR by only 0.30% with a 4.93% increase in PR. The value δ = 5, which gives 99.55% HR at 13.86% PR, is therefore selected in the experiments.


Figure 7.9: HR and PR with different number of enrolled samples and different table sizes.

7.10.6 Evaluation

The efficiency of the proposed indexing technique is judged with respect to accuracy, searching time and memory requirement. After selecting the 10 reference subjects, the Gallery set contains 1957 samples and the Probe set contains 668 samples from 372 subjects with iris, fingerprint and face biometric traits. The experiments are evaluated with these Gallery and Probe sets. The indexing is performed with iris, fingerprint and face separately, as well as with different combinations of the three traits (multimodal).

7.10.6.1 Accuracy

To analyze the accuracy of the proposed approach, the HR and PR are measured. The HR and PR achieved by indexing with iris, fingerprint, face and different combinations of the three traits are reported in Table 7.6. From Table 7.6, it can be seen that indexing with all three traits gives 99.55% HR, which is higher than indexing with a single trait or any combination of two traits; however, the PR is slightly higher than with the unimodal traits or two-trait combinations.

The result is also substantiated in terms of the CMS, which gives the probability that at least one correct identity is present within a top rank. Figure 7.11 shows how the CMS varies with rank, as CMC curves for indexing with each unimodal trait as well as for multimodal indexing. From Fig. 7.11, it is observed that CMSs of 91.62%, 92.96% and 86.98% are achieved within the top 30 ranks for indexing with iris, fingerprint and face, respectively, whereas indexing with the combination of the three traits gives 99.25% CMS at the 30th rank.


Figure 7.10: HR and PR for different values of δ with the proposed indexing technique.

Further, the performance of the proposed method is analyzed with respect to FPIR and FNIR. To do this, the FMR and FNMR are first calculated as follows. Each query template of the Probe set is

Table 7.6: HR and PR of the proposed indexing technique with unimodal and multimodal traits

Biometric trait HR (%) PR (%)
Iris 93.56 14.63
Fingerprint 95.96 12.98
Face 90.27 15.86
Iris+Fingerprint 97.75 17.62
Fingerprint+Face 97.21 16.44
Iris+Face 93.86 17.18
Iris+Fingerprint+Face 99.55 17.77

matched with each template in the Gallery set using SVM classification. 3738 genuine pairs and 1,303,538 imposter pairs are chosen from the Gallery and Probe sets, and a genuine or imposter score is calculated for each pair. Finally, the FNIR and FPIR are calculated for the identification system without indexing and with indexing using Eq. (4.20) and (4.21), respectively. The trade-off between FPIR and FNIR for the identification system without indexing is shown in Fig. 7.12, together with the trade-offs for indexing with iris, fingerprint, face and the combination of the three traits. From the experimental results, it may be interpreted that 5.22% FNIR is achieved at 1% FPIR without indexing, whereas 2.72%, 2.43%, 3.04% and 2.42% FNIR are achieved at 1% FPIR for iris, fingerprint, face and multimodal (iris, fingerprint and face) indexing, respectively. From Fig. 7.12, it can also be observed that the proposed indexing approach achieves a lower FPIR for a given FNIR.


Figure 7.11: CMC curves of the proposed indexing technique with different combination of biometric traits.


Figure 7.12: FPIR versus FNIR curves of the proposed identification system without any indexing and with iris, fingerprint, face and multimodal based indexing.

7.10.6.2 Searching Time

First, the run-time complexity of a gallery match for a query sample is analyzed in big-O notation. Let N be the total number of samples enrolled in the database from P individuals, each having L samples. In this approach, for a given query template, constant time is required to find a position in a table and to retrieve the list of identities (IDL) stored at that position for one key value of a query index key; adding this list to a set SB also takes constant time. There are 10 index keys, each with 4 key values. Hence, the time complexity of retrieving candidates from the tables for a given query is O(1). Each retrieved candidate is then processed to create a feature vector for rank level fusion. Let IL be the average number of candidates retrieved from the database, where IL << N. The feature generation for ranking is accomplished in O(IL) time, and the ranks of all IL candidates are computed using SVM ranking in O(IL) time. In the worst case, when all samples are stored in one position, IL equals N, but this is very unlikely to occur.

The search efficiency is also analyzed by measuring the average time taken to retrieve templates from the database for a given query sample. Let tp be the average time to perform a primitive operation such as addition, subtraction or assignment. The indexing approach requires six comparisons to retrieve the candidates for one key value of a query index key, and a candidate set of size IL for a single trait is retrieved using the 10 key values of all index keys, one per reference subject; there are four candidate sets in total. Therefore, the time taken to retrieve the candidate sets is (tp × 6) × 40. The feature generation for ranking processes IL candidates and requires IL × tp × 4 time. Let tsv be the time to calculate the SVM rank score of one feature vector; then IL × tsv time is required to rank the IL feature vectors. Let ts be the time to compare the query template with one stored template for matching. Hence, the search time of the proposed indexing approach is (tp × 6) × 40 + IL × tp × 4 + IL × tsv + IL × ts, where tp < tsv < ts. A linear search, on the other hand, requires N × ts time. Thus, the indexing approach takes less time than linear search because IL << N.

The average retrieval times for indexing with iris, fingerprint, face and the multimodal combination for different database sizes are given in Table 7.7, together with the average time taken to search a query without indexing. It can be observed that the average retrieval time remains almost the same as the database size increases, and that the proposed approach performs faster than searching without indexing.

7.10.6.3 Memory Requirement

Table 7.7: Average retrieving time for a query with different database sizes.


The memory required to store the identities in the database is calculated as follows. In this approach, there are four tables per index space, one index space per reference subject, and ten reference subjects; hence, there are 40 tables in the database. Each table stores the identities of all samples. Let 4 bytes be required to store the subject identifier of a sample and 1 byte to store the sample identifier, so that 2^32 subjects and 256 samples per subject can be represented with 5 bytes. Suppose each cell requires 4 bytes to store the reference to its identity list (IDL). The table size (LB) depends on the number of enrollments; in this approach, it is 20% of the number of enrollments. If N samples are to be enrolled, the memory requirement (MemoryN) can be calculated using Eq. (7.31).
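Under these assumptions, the estimate of Eq. (7.31) can be sketched as below (40 tables, 5 bytes per stored identity, 4 bytes per cell reference, table length 20% of the enrollments); the exact equation is not reproduced here.

```python
def memory_bytes(n_enrolled, n_tables=40, id_bytes=5,
                 cell_ref_bytes=4, table_frac=0.20):
    """Estimated bytes to store N enrolled identities in all tables."""
    cells_per_table = int(table_frac * n_enrolled)
    per_table = cells_per_table * cell_ref_bytes + n_enrolled * id_bytes
    return n_tables * per_table

print(memory_bytes(1957))   # e.g. the 1957-sample Gallery set
```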

The memory requirement is calculated for different numbers of enrollments and the results are shown in Fig. 7.13. It is observed that the memory requirement increases linearly with the database size.

7.11 Comparison with Existing Work

The objective of this work matches that of Gyaourova et al. [11], who propose multimodal indexing with two fusion techniques. In the first technique, they concatenate the index codes of face and fingerprint and retrieve data from the database using the concatenated index code. In the second, they first retrieve data from the database using the individual index codes of face and fingerprint and then take the union of the retrieved identities. For comparison, the proposed and existing approaches are evaluated on the same database of fingerprint and face traits; the scores used in both approaches are calculated with the proposed score calculation method. The results are compared with respect to HR, PR, dimensionality of the index code, search complexity and retrieval time, and are shown in Table 7.8. From Table 7.8, it can be seen that the proposed approach gives almost the same HR at a lower PR with a lower-dimensional index key than the approaches in [11]. Here, only two unimodal traits (fingerprint and face) and their combination as the multimodal trait are used; hence, the proposed approach uses index keys with a total of 30 dimensions (10 index keys of 3 dimensions each) for face- and fingerprint-based indexing, whereas the approach of Gyaourova et al. [11] uses 500 dimensions (2 index codes of 250 dimensions each). Further, the search complexity of the proposed approach is O(1), against O(N) for the approach in [11], and the proposed approach retrieves the candidate set from the database in less time.


Figure 7.13: Memory requirements for different number of enrolled samples in the proposed indexing technique.

Table 7.8: Comparison with existing multimodal indexing techniques [11]


7.12 Summary

This chapter proposes a new indexing mechanism to reduce the search space of a multi-biometric identification system. In this approach, only ten reference subjects are used to generate the index codes, and the reference subject-based index key generation yields highly discriminative index codes for different subjects. The proposed storing and indexing mechanism for multimodal biometric identification is novel, and both mechanisms support any number of biometric traits. The indexing mechanism retrieves a small set of identities from the database in O(1) time, and an SVM-based rank level fusion is proposed to combine the identities retrieved with the different traits. The proposed indexing technique is tested with iris, fingerprint and face biometric traits; only a four-dimensional index key is generated per reference subject. The approach is tested with a set of virtual users created from the popular CASIAV3I iris, WVU fingerprint and FRGC face databases. The experimental results show that 99.55% HR is achieved at 17.77% PR with the combination of iris, fingerprint and face. The results with unimodal traits and different combinations of traits also show that combining traits gives better performance than any single trait, and the comparison with existing work shows that the proposed approach outperforms the existing approaches. The approach takes on average 0.042 milliseconds to retrieve a small set of identities for a query sample with a database of 1957 samples; note that this time remains nearly constant as the database size increases. It may be concluded that the proposed approach is applicable to biometric identification systems with large amounts of multimodal biometric data and accomplishes identification in real time without compromising accuracy.

Bibliography

  • [1] BIOSECURE. The BioSecure Multimodal Database. URL http://biosecure.it-sudparis.eu/AB/index.php?option=com_content&view=article&id=11&Itemid=14. (Accessed on November, 2012).
  • [2] CASIA Iris Database. CASIA-IrisV3-Interval. URL http://www.cbsr.ia.ac.cn/IrisDatabase.htm. (Accessed on September, 2010).
  • [3] NIST FRGC. Face Recognition Grand Challenge (FRGC). URL http://www.nist.gov/itl/iad/ig/frgc.cfm. (Accessed on May, 2012).
  • [4] WVU Iris Database. Multimodal Biometric Dataset Collection, BIOMDATA, Release 1. URL http://citer.wvu.edu/multimodal_biometric_dataset_collection_biomdata_release1. (Accessed on September, 2010).
  • [5] S. Crihalmeanu, A. Ross, S. Schuckers, and L. Hornak. A Protocol for Multibiometric Data Acquisition, Storage and Dissemination. Technical report, West Virginia University, Lane Department of Computer Science and Electrical Engineering, Morgantown, WV, 2007.
  • [6] J. Daugman. The Importance of being Random: Statistical Principles of Iris Recognition. Pattern Recognition, 36(2):279–291, 2003.
  • [7] J. Daugman. How Iris Recognition Works. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21–30, 2004.
  • [8] G. Du, F. Su, and A. Cai. Face Recognition using SURF Features. In Proceedings of the SPIE Pattern Recognition and Computer Vision (MIPPR 2009), volume SPIE-7496, pages 749628–1–749628–7, Yichang, China, October-November 2009.
  • [9] J. Fierrez-Aguilar, J. Ortega-Garcia, D. Garcia-Romero, and J. Gonzalez-Rodriguez. A Comparative Evaluation of Fusion Strategies for Multimodal Biometric Verification. In Proceedings of the IAPR International Conference on Audio and Video-based Person Authentication, pages 830–837, Guildford, UK, June 2003.
  • [10] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Prentice Hall, 2nd edition, 2002.
  • [11] Aglika Gyaourova and Arun Ross. Index Codes for Multibiometric Pattern Retrieval. IEEE Transactions on Information Forensics and Security, 7(2):518–529, 2012.
  • [12] M. He, S.-J. Horng, P. Fan, R.-S. Run, R.-J. Chen, J.-L. Lai, M. K. Khan, and K. O. Sentosa. Performance Evaluation of Score Level Fusion in Multimodal Biometric Systems. Pattern Recognition, 43(5):1789–1800, 2010.
  • [13] S.-J. Horng, Y.-H. Chen, R.-S. Run, R.-J. Chen, J.-L. Lai, and K. O. Sentosa. An Improved Score Level Fusion in Multimodal Biometric Systems. In Proceedings of the International Conference on Parallel and Distributed Computing, Applications and Technologies, pages 239–246, Higashi Hiroshima, Japan, December 2009.
  • [14] C.-W. Hsu, C.-C. Chang, and C.-J. Lin. A Practical Guide to Support Vector Classification. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, 2003. URL http://www.ee.columbia.edu/~sfchang/course/spr/papers/svm-practical-guide.pdf.
  • [15] A. Jain, K. Nandakumar, and A. Ross. Score Normalization in Multimodal Biometric Systems. Pattern Recognition, 38(12):2270–2285, 2005.
  • [16] A. K. Jain, S. Prabhakar, L. Hong, and S. Pankanti. Filterbank-based Fingerprint Matching. IEEE Transactions on Image Processing, 9(5):846–859, 2000.
  • [17] A. K. Jain, A. Ross, and S. Prabhakar. An Introduction to Biometric Recognition. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):4–20, 2004.
  • [18] A. K. Jain, P. Flynn, and A. Ross, editors. Handbook of Biometrics. Springer-Verlag, New York, 1st edition, 2007.
  • [19] T. Joachims. Advances in Kernel Methods - Support Vector Learning, chapter Making Large-Scale SVM Learning Practical, pages 169–184. MIT Press, 1999.
  • [20] T. Joachims. Optimizing Search Engines using Clickthrough Data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining, pages 133–142, New York, USA, August 2002.
  • [21] T. Joachims. Training Linear SVMs in Linear Time. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining, pages 217–226, New York, USA, August 2006.
  • [22] T. Joachims. SVMlight: Support Vector Machine. Cornell University, 2008. URL http://svmlight.joachims.org/. (Accessed on May, 2012).
  • [23] T. Joachims. SVMrank: Support Vector Machine for Ranking. Cornell University, 2009. URL http://www.cs.cornell.edu/People/tj/svm_light/svm_rank.html . (Accessed on May, 2012).
  • [24] T. Joshi, S. Dey, and D. Samanta. Multimodal Biometrics: State of the Art in Fusion Techniques. International Journal of Biometrics (IJBM), 1(4):393–417, 2009.
  • [25] D. R. Kisku, P. Gupta, and J. K. Sing. Fusion of Multiple Matchers using SVM for Offline Signature Identification. In Proceedings of the International Conference Future Generation Information Technology-Security Technology, volume CCIS-58, pages 201–208, Jeju Island, Korea, December 2009. doi: http://dx.doi.org/10.1007/978-3-642-10847-1_25.
  • [26] A. Kumar and D. Zhang. Personal Recognition using Hand Shape and Texture. IEEE Transactions on Image Processing, 15(8):2454–2461, 2006.
  • [27] J. Phillips and P. J. Flynn. Overview of the Face Recognition Grand Challenge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 947–954, San Diego, USA, June 2005.
  • [28] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, and W. Worek. Preliminary Face Recognition Grand Challenge Results. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, pages 15–24, Southampton, UK, April 2006.
  • [29] A. Ross and A. Jain. Information Fusion in Biometrics. Pattern Recognition Letters, 24(13):2115–2125, 2003.
  • [30] A. Ross and A. K. Jain. Multimodal Biometrics: An Overview. In Proceedings of the 12th European Signal Processing Conference, pages 1221–1224, Vienna, Austria, September 2004.
  • [31] A. Ross, K. Nandakumar, and A. K. Jain. Handbook of Multibiometrics. Springer, 2006.
  • [32] R. Singh, M. Vatsa, and A. Noore. Intelligent Biometric Information Fusion using Support Vector Machine. In Proceedings of the Soft Computing in Image Processing: Recent Advances, pages 327–350, New Jersey, USA, December 2006.
  • [33] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. K. Jain. Large-Scale Evaluation of Multimodal Biometric Authentication using State-of-the-Art Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3):450–455, 2005.
  • [34] H. Yu and S. Kim. Handbook of Natural Computing, volume I, chapter SVM Tutorial: Classification, Regression, and Ranking. Springer, 2010.