
2. Estimation of Lighting Environment for Exposing Image Splicing Forgeries

than rth, and the distance between the authentic faces (i.e., ID 2 and ID 3) is very small. The maximum distance d among the lighting coefficient vectors estimated from the faces present in the forged image (Figure 2.13(a)) is 0.19, which is more than the threshold value rth. Therefore, the forged image is correctly classified as forged. On the other hand, all the pair-wise distances in the authentic image (Figure 2.13(b)) are below the threshold rth, as can be seen in Table 2.6.

The maximum distance d for the authentic image is found to be 0.10, which is less than rth. Hence, the authentic image is also correctly classified as authentic by the proposed method.
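The decision rule described above can be sketched as follows. This is a minimal illustration, assuming the Euclidean distance between lighting coefficient vectors; the function and variable names are illustrative, not those of the actual implementation:

```python
import numpy as np

def classify_image(lighting_vectors, r_th):
    """Decide authentic/forged from per-face lighting coefficient vectors.

    lighting_vectors: one 1-D numpy array per face in the image.
    r_th: distance threshold learned from authentic face pairs.
    Returns the decision and the maximum pair-wise distance d.
    """
    n = len(lighting_vectors)
    d_max = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # Euclidean distance is assumed here for illustration
            d = np.linalg.norm(lighting_vectors[i] - lighting_vectors[j])
            d_max = max(d_max, d)
    return ("forged" if d_max > r_th else "authentic"), d_max
```

With a threshold between the two reported values (0.10 for the authentic image, 0.19 for the forged one), the rule reproduces both decisions.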

2.6 Summary

The LEs are estimated by projecting the front-pose face images onto a low-dimensional lighting model, computed from a set of front-pose face images of a single individual through PCA.

While the state-of-the-art methods need to create a specific 3D face model to estimate the LE from a test face image accurately, the proposed method can estimate the LE from any test face using a single lighting model. The experimental results on the Yale Face Database B, Multi-PIE, and our own database show the efficacy of the proposed method with respect to the state-of-the-art. The limitation of the proposed method is that it can detect splicing forgeries only in images containing near front-pose human faces. When the faces deviate from the front pose, the LE estimation of the proposed method gives inaccurate results.

3

Exposing Splicing Forgeries in Digital Images through the Discrepancies in Dichromatic Plane Histograms

TH-2553_136102029

The last chapter proposed an LE-based forensics method for detecting spliced faces present in images. The method extracted the LEs from the face regions of the persons present in the image using a subspace-based LE estimation method. The method is applicable only to human portraits involving near front-pose faces. This chapter proposes a forensics method to detect spliced faces of any pose utilizing the source illumination colour as a cue.

The knowledge of source illumination colour is very useful in many computer vision tasks.

For instance, in human-computer interaction [74], the ability to remove the effect of illumination colour from the input images and videos, known as colour constancy, is desirable for better performance. This is because the colours of object surfaces change as the illumination colour changes, which affects the computer vision systems that rely on the object colour information. Most of the computational colour constancy methods [75], [36], [37] achieve colour constancy by first estimating the illumination colours from the input images and then normalizing the images using the estimated illumination colours to produce canonical images under a white light source [76]. In image forensics, the illumination colour has proven to be an effective cue for detecting splicing forgeries [29], [9]. The current and the next chapters will discuss more about the use of illumination colour for exposing splicing forgeries.
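The normalization step mentioned above is commonly realized as a diagonal (von Kries-style) correction: each colour channel is divided by the corresponding component of the estimated illuminant. The sketch below illustrates only this correction step; the cited methods differ in how the illuminant itself is estimated, and the function name is illustrative:

```python
import numpy as np

def correct_to_canonical(image, illuminant):
    """Von Kries-style diagonal correction.

    Divides each channel of an H x W x 3 image in [0, 1] by the
    estimated RGB illuminant (scaled so its largest component is 1),
    so the scene appears as if captured under a white light source.
    """
    illuminant = np.asarray(illuminant, dtype=float)
    illuminant = illuminant / illuminant.max()  # preserve overall brightness
    return np.clip(image / illuminant, 0.0, 1.0)
```

For example, a greyish surface rendered reddish by a red-biased illuminant is mapped back to neutral grey once divided by that illuminant.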

Similar to LE-based forensics methods, the illumination colour-based methods are considered to be effective since it is not easy to match the exact illumination colour in a composite image [9], [54]. As in the case of LE-based methods, there is no anti-forensics method available to counter the illumination colour-based forensics techniques. These observations motivate us to propose an illumination colour-based forensics method for detecting splicing forgeries in human group portraits.

The proposed method extracts a novel illumination-signature from the face region of each person present in an image. To be effective in forensics, this illumination-signature should be similar for the faces coming from the same illumination environment and different for the faces coming from different illumination environments. This chapter proposes to use the dichromatic plane histogram (DPH) [77] as the illumination-signature for detecting the face splicing forgeries. It is computed from the facial region of each person present in the image by applying the


3D Hough transform. The dichromatic reflection model (DRM) [78] is exploited for computing this histogram. Assuming the skin material of the facial region to be the same for all persons, the DPHs are expected to be similar for faces captured under the same illumination colour. On the other hand, for faces coming from different illumination environments, the DPHs will show inconsistency.
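As a rough illustration of the idea only (not the exact formulation of [77], which is detailed in Section 3.2), a DPH can be sketched as a Hough accumulator over candidate plane orientations in RGB space: under the DRM, the colours of a surface lie on a plane through the origin, so each pixel colour votes for every candidate plane that nearly contains it. The bin counts and tolerance below are illustrative choices:

```python
import numpy as np

def dichromatic_plane_histogram(pixels, n_theta=30, n_phi=30, tol=0.02):
    """Illustrative DPH: 2-D Hough accumulator over plane normals.

    pixels: N x 3 array of RGB colours from a facial region.
    Each candidate plane through the RGB origin is parameterized by
    the spherical angles (theta, phi) of its unit normal; a pixel
    votes for a bin when its (normalized) colour nearly lies on that
    plane, i.e. |n . c| < tol. Returns the normalized accumulator.
    """
    pixels = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
    H = np.zeros((n_theta, n_phi))
    thetas = np.linspace(0, np.pi, n_theta)
    phis = np.linspace(0, np.pi, n_phi)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            H[i, j] = np.sum(np.abs(pixels @ n) < tol)
    return H / max(H.sum(), 1)
```

Two such histograms can then be compared with any standard histogram distance; a large discrepancy between the DPHs of two faces suggests different illumination environments.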

The rest of the chapter is organized as follows. Section 3.1 provides an overview of illumination colour-based forensics methods. Section 3.2 gives a detailed background on the DRM and the DPH. Section 3.3 presents the proposed method, and Section 3.4 discusses the experimental results on splicing detection. Finally, Section 3.5 presents a summary of the chapter.

3.1 An Overview of Illumination Colour-based Image Forensics

In illumination colour-based image forensics, the source illumination colours extracted from different parts of an image are utilized for detecting splicing forgeries. The motivations for using the illumination colour as a cue for detecting splicing are as follows. In an authentic image, all the different parts are lit by the same illumination sources. A spliced image may include parts copied from images captured under different illumination sources. Therefore, comparing the illumination colours estimated from different parts of an image could reveal the splicing forgery. Here, the key assumption is that the spliced and the authentic parts of a forged image may look visually similar, but the illumination colour extracted from them will differ.

Gholap and Bora [29] introduced the use of illumination colour as a cue for detecting splicing forgeries. This method estimates the illumination colour from different parts of an image using the DRM [78]. This model is elaborated in Subsection 3.2.1. For estimating the illumination colours from the image, the method requires specular regions to be manually extracted from the image. If more than one illumination colour is present in the image, the method classifies the image as spliced. The limitations of this method are the following: 1) it fails on images captured under multiple illumination sources, as it assumes the presence of a single illumination source, and 2) it requires the presence of specular highlights in the images, which must be manually selected.



Francis et al. [40] proposed a method where the illuminant colour is estimated from the nose-tip of each person present in the image using the DRM. The illuminant colours extracted from the different persons present in an image are compared with each other to judge the authenticity of the image. This method has limitations similar to those of Gholap and Bora's method. More specifically, it also assumes the presence of a single illumination source and requires manual selection of the nose-tip from the faces. Wu and Fang [39] proposed another method where an image is divided into non-overlapping blocks. Then, the illuminant colour is estimated from each block using the generalized grey-edge (GGE) method [37]. Taking one block as the reference block, the angular error between the illuminant colour estimated from each block and that of the reference block is computed. If the angular error exceeds a pre-defined threshold, the image is classified as spliced. Since the method requires the manual selection of a reference block, the result changes when a different reference block is selected.
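The angular-error comparison used by Wu and Fang can be sketched as follows. This is a minimal illustration of the decision rule only; the per-block GGE estimation itself is not shown, and the function names are illustrative:

```python
import numpy as np

def angular_error(e1, e2):
    """Angular error (in degrees) between two RGB illuminant estimates."""
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    cos = np.clip(np.dot(e1, e2), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def detect_splicing(block_illuminants, ref_idx, threshold_deg):
    """Flag the image as spliced if any block's illuminant deviates from
    the reference block's illuminant by more than threshold_deg."""
    ref = block_illuminants[ref_idx]
    return any(angular_error(e, ref) > threshold_deg
               for e in block_illuminants)
```

The dependence on ref_idx makes the sensitivity to the reference-block choice explicit: a different reference can change the set of angular errors and hence the decision.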

Riess and Angelopoulou [79] proposed to create a new image, called the illuminant map (IM), using the illumination colours estimated from the input image. First, the input image is segmented into homogeneous regions, called superpixels, using the graph-based segmentation method [80]. Then, the illumination colour of each superpixel is estimated using a modification of the inverse-intensity chromaticity (IIC) method [38], and the superpixel is recoloured using the estimated illumination colour. The intuition behind this method is that the parts in the IM of an authentic image will have similar colour features, as all the parts of an authentic image are captured under the same illumination sources. The spliced regions in the IM of a spliced image will have colour features different from those of the authentic regions. Since the forgery is detected manually by inspecting the IM, this method is prone to human error.
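The IM construction reduces to recolouring each superpixel with its estimated illuminant. The sketch below assumes a precomputed superpixel label map and takes the per-region illuminant estimator as a callable, standing in for the modified IIC method of [79]; all names are illustrative:

```python
import numpy as np

def build_illuminant_map(image, labels, estimate_illuminant):
    """Build an illuminant map (IM) from a segmented image.

    image: H x W x 3 float array.
    labels: H x W integer superpixel label map (e.g. from graph-based
            segmentation [80]).
    estimate_illuminant: callable mapping an N x 3 array of superpixel
            pixels to an RGB illuminant estimate (stand-in for the
            modified IIC estimator).
    Returns an H x W x 3 image in which every superpixel is filled
    with its estimated illuminant colour.
    """
    im = np.zeros_like(image)
    for lab in np.unique(labels):
        mask = labels == lab
        im[mask] = estimate_illuminant(image[mask])
    return im
```

In an authentic image the resulting IM is roughly uniform in colour, while a spliced region estimated under a different illuminant stands out against its surroundings.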

Carvalho et al. [9] proposed a method for automatic detection of face splicing forgeries by classifying the face regions of the IM (face-IM) using a machine learning-based classifier. The authors created two IMs from each image using two different illumination estimation methods, namely the IIC [38] and the GGE [37] methods. The authors observed that the face-IMs computed from an authentic image have similar visual features. On the other hand, in a spliced image, the spliced faces have visual features different from those of the authentic faces. Based


on this observation, the authors proposed to extract texture [81] and gradient-based [82] descriptors from the face regions of the IM. The IM is converted to the YCbCr colour space, and the Y channel is utilized for computing both descriptors. Then, the features are classified in a pair-wise manner using a support vector machine (SVM) classifier. More specifically, for each face-IM pair computed using the same illumination estimation method, the same type of features extracted from the two face-IMs are concatenated and classified using the SVM. In their follow-up work, Carvalho et al. [41] proposed to extract three types of features from the face-IMs, namely texture, shape and colour features. More specifically, the authors proposed to compute three texture descriptors from [81], [83], [84], two shape descriptors from [85], [86], and four colour descriptors from [87], [88], [89], [90]. Also, in this work, Carvalho et al. proposed to convert the IM to three different colour spaces, namely the Lab, HSV, and normalized RGB colour spaces. Similar to [9], here also, the features are classified in a pair-wise manner by concatenating similar features of the two face-IMs computed from the same type of IM converted to the same colour space. The k-nearest neighbour classifier is utilized for classifying the features. Although these methods are very effective, their performances drop on low-resolution and highly compressed images. This is because, in the case of low-resolution and highly compressed images, the IM computation becomes less accurate and hence the features computed from it become less discriminative.
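The pair-wise feature construction shared by [9] and [41] can be sketched as follows. Only the pairing and concatenation step is shown; the descriptor extraction and the downstream classifier (SVM in [9], k-nearest neighbour in [41]) are outside the sketch, and the function name is illustrative:

```python
import numpy as np

def pairwise_feature_vectors(face_descriptors):
    """Build one concatenated feature vector per unordered face pair.

    face_descriptors: list of 1-D arrays, one same-type descriptor per
    face-IM in the image (all computed from the same type of IM and
    the same colour space). Each pair's two descriptors are joined
    into a single vector for binary classification, where the positive
    class corresponds to a pair mixing a spliced and an authentic face.
    """
    n = len(face_descriptors)
    pairs, feats = [], []
    for i in range(n):
        for j in range(i + 1, n):
            pairs.append((i, j))
            feats.append(np.concatenate([face_descriptors[i],
                                         face_descriptors[j]]))
    return pairs, np.array(feats)
```

An image with n faces thus yields n(n-1)/2 feature vectors per descriptor type, and the image is flagged when any pair is classified as inconsistent.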