


2.4.3 Classification of consistent and inconsistent LEs

Experiments were carried out to assess the effectiveness of the proposed method in discriminating consistent and inconsistent LEs. We have randomly sampled 10000 pairs of images with different LEs and 10000 pairs with the same LEs for each of the Yale B and Multi-PIE datasets, as in [8]. The pairs from different lighting conditions are considered inconsistent, and the pairs from the same lighting condition are considered consistent. For each pair of faces, we have calculated the distance between the LC vectors estimated from the two images using Equation (2.18). We have computed the receiver operating characteristic (ROC) curve to show the discrimination ability of the proposed LE estimation method. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at different thresholds, separating the consistent and the inconsistent pairs. We consider the inconsistent case as the positive class and the consistent case as the negative class.
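The evaluation protocol above can be summarized in a short sketch. This is an illustrative reconstruction, not the thesis code: lc_distance stands in for the distance of Equation (2.18) (a plain Euclidean distance is assumed here), and estimate_lc is a hypothetical handle to the proposed LC estimator.

import numpy as np
from sklearn.metrics import roc_curve, auc

def lc_distance(lc1, lc2):
    """Placeholder for Equation (2.18): distance between two LC vectors.
    A simple Euclidean distance is used here purely for illustration."""
    return np.linalg.norm(lc1 - lc2)

def evaluate_pairs(inconsistent_pairs, consistent_pairs, estimate_lc):
    """Score each face pair by the distance between its estimated LC vectors
    and compute the ROC curve (inconsistent = positive class)."""
    scores, labels = [], []
    for img_a, img_b in inconsistent_pairs:           # different lighting -> label 1
        scores.append(lc_distance(estimate_lc(img_a), estimate_lc(img_b)))
        labels.append(1)
    for img_a, img_b in consistent_pairs:             # same lighting -> label 0
        scores.append(lc_distance(estimate_lc(img_a), estimate_lc(img_b)))
        labels.append(0)
    fpr, tpr, thresholds = roc_curve(labels, scores)  # sweep over all thresholds
    return fpr, tpr, auc(fpr, tpr)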


[Figure 2.8: two ROC plots, panels (a) and (b); x-axis: False Alarm Rate (0–1), y-axis: Detection Rate (0–1); curves for Kee and Farid, Peng et al., and the proposed method.]

Figure 2.8: ROC curves for different methods showing the ability to discriminate the consistent and the inconsistent LEs on (a) Yale B dataset and (b) Multi-PIE dataset, when using the specific 3D model for each individual in Kee and Farid's and Peng et al.'s methods.

[Figure 2.9: two ROC plots, panels (a) and (b); x-axis: False Alarm Rate (0–1), y-axis: Detection Rate (0–1); curves for Kee and Farid, Peng et al., and the proposed method.]

Figure 2.9: ROC curves for different methods showing the ability to discriminate the consistent and the inconsistent LEs on (a) Yale B dataset and (b) Multi-PIE dataset, when using a generic 3D face model for all the individuals in Kee and Farid's and Peng et al.'s methods.


Table 2.2: Comparison of the discriminative power of the proposed method with the state-of-the-art methods on Yale B and Multi-PIE datasets, when the specific 3D face model is used for each individual in Kee and Farid's and Peng et al.'s methods.

Method           Yale B                       Multi-PIE
                 AUC (%)   DR(%)@10%FAR       AUC (%)   DR(%)@10%FAR
Kee and Farid    96.1      89.4               92.3      76.2
Peng et al.      97.7      93.9               97.4      93.3
Proposed         98.1      97.4               97.7      94.5

Table 2.3: Comparison of the discriminative power of the proposed method with the state-of-the-art methods on Yale B and Multi-PIE datasets, when a generic 3D face model is used for all the individuals in Kee and Farid's and Peng et al.'s methods.

Method           Yale B                       Multi-PIE
                 AUC (%)   DR(%)@10%FAR       AUC (%)   DR(%)@10%FAR
Kee and Farid    93.2      82.9               90.8      70.0
Peng et al.      96.4      90.9               95.8      90.5
Proposed         98.1      97.4               97.7      94.5


We have compared our method with two existing methods, namely Kee and Farid's [7] and Peng et al.'s [8] methods. These methods are also specifically designed to detect composite images containing human faces. The ROC curves of the three methods on the Yale B and Multi-PIE datasets are shown in Figure 2.8a and Figure 2.8b, respectively. In the case of Kee and Farid's and Peng et al.'s methods, specific 3D models are used for each of the 10 subjects. The area under the curve (AUC) values computed from the ROC curves and the detection rate at 10% false alarm rate (DR(%)@10%FAR) are shown in Table 2.2. On the Yale B dataset, Kee and Farid's method achieves an AUC of 96.1% and a DR of 89.4%, and Peng et al.'s method achieves an AUC of 97.7% and a DR of 93.9%, while the proposed method achieves an AUC of 98.1% and a DR of 97.4%. On the Multi-PIE dataset, Kee and Farid's method achieves an AUC of 92.3% and a DR of 76.2%, and Peng et al.'s method achieves an AUC of 97.4% and a DR of 93.3%, while the proposed method achieves an AUC of 97.7% and a DR of 94.5%. This implies that the proposed method discriminates consistent and inconsistent LEs well and performs better than the state-of-the-art methods.
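For reference, the two summary figures reported in Tables 2.2 and 2.3 can be derived from the ROC samples as follows. This is a generic sketch, not the exact procedure used in the thesis: AUC is computed by trapezoidal integration, and the detection rate at the 10% false-alarm point is linearly interpolated from the ROC samples.

import numpy as np

def summarize_roc(fpr, tpr, far=0.10):
    """Trapezoidal AUC and detection rate at a fixed false-alarm rate,
    both reported in percent."""
    fpr = np.asarray(fpr, dtype=float)
    tpr = np.asarray(tpr, dtype=float)
    order = np.argsort(fpr)                                  # ensure FPR is increasing
    fpr, tpr = fpr[order], tpr[order]
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)  # trapezoid rule
    dr = np.interp(far, fpr, tpr)                            # DR at FAR = 10%
    return 100.0 * auc, 100.0 * dr

With the FPR/TPR arrays from the earlier sketch, summarize_roc(fpr, tpr) yields numbers of the kind tabulated as AUC (%) and DR(%)@10%FAR.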



Another set of experiments was performed to see the effect of using a single 3D face model for all subjects on the discriminative power of the methods by Kee and Farid and Peng et al. This is important because, in real forensic scenarios, it may be difficult to create a specific 3D face model for each individual present in an image. We have applied these two methods on the Yale B and Multi-PIE datasets using a single generic 3D face model for all individuals. The ROC curves for the three methods on the Yale B and Multi-PIE datasets are shown in Figure 2.9a and Figure 2.9b, respectively. The AUC values and DRs at 10% FAR are listed in Table 2.3. As can be seen from Table 2.2 and Table 2.3, the AUC value achieved by Kee and Farid's method drops from 96.1% to 93.2%, and the DR drops from 89.4% to 82.9% on the Yale B dataset, when a generic 3D face model is used instead of a specific model for each individual. On the Multi-PIE dataset, the AUC value of Kee and Farid's method drops from 92.3% to 90.8%, and the DR drops from 76.2% to 70.0% when a single 3D model is used instead of specific 3D models. The AUC value achieved by Peng et al.'s method drops from 97.7% to 96.4%, and the DR drops from 93.9% to 90.9% on the Yale B dataset. On the Multi-PIE dataset, the AUC value achieved by Peng et al.'s method drops from 97.4% to 95.8%, and the DR drops from 93.3% to 90.5% when a generic 3D face model is used.

The drop in the accuracy of Kee and Farid's and Peng et al.'s methods in the case of a single generic 3D face model is expected, as the 3D surface normals computed from a generic model are less accurate. On the other hand, the proposed method's AUC and DR values are the same as in the earlier case, i.e., an AUC of 98.1% and a DR of 97.4% on the Yale B dataset, and an AUC of 97.7% and a DR of 94.5% on the Multi-PIE dataset. This is because the proposed method does not depend on any specific face model. Therefore, in real-life forensic scenarios, the proposed method is more reliable than the state-of-the-art methods.