

5.4 Experimental Results

5.4.1 Manipulation Detection Results

5.4.1.3 Generalization to Unseen Manipulations

a) An experiment is carried out to check the generalization ability of the proposed method in detecting manipulations not considered in the training phase. For this, we have trained the network on SP and DP pairs of patches, where the patches come from four classes, i.e., UA, GB, MF, and RS. There are 500,000 training pairs each of SP and DP images, sampled from the four classes in the same way as explained in the first experiment. Once the network is trained, we test it on a set of image patches produced by editing operations not present in the training stage, i.e., AWGN, GC, and JPEG. The test set includes 50,000 image patches from each of the three classes. To check the performance of the network in classifying these editing operations, we perform one-shot classification by applying Algorithm 5.1. For this, we have one image patch from each of these three manipulation classes as a reference. We also have one reference image from each of the four classes used in the training stage. Table 5.5 shows the accuracies of the proposed method in detecting patches coming from manipulations not seen in the training stage. It can be seen that the network classifies images from the AWGN, GC, and JPEG classes with accuracies of 96.61%, 95.24%, and 97.91%, respectively. This shows the generalization capability of the proposed method to unknown types of image editing operations.

This is an important advantage of the proposed method over Bayar and Stamm’s method [50], as their method can only detect manipulations present in the training stage.
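The nearest-reference idea behind this one-shot classification can be sketched as follows. This is a minimal illustration, not the thesis implementation: the `embed` function, standing in for one branch of the trained siamese network, is a hypothetical placeholder supplied by the caller.

```python
import numpy as np

def one_shot_classify(embed, query_patch, references):
    """Assign a query patch to the class of its nearest reference patch.

    embed      -- placeholder feature extractor (one branch of the trained
                  siamese network) mapping a patch to an embedding vector
    references -- dict mapping class name -> a single reference patch
    """
    q = embed(query_patch)
    # Euclidean distance in the learned embedding space; the training
    # objective pulls same-class pairs (SP) together and pushes
    # different-class pairs (DP) apart, so the nearest reference wins.
    dists = {label: np.linalg.norm(q - embed(ref))
             for label, ref in references.items()}
    return min(dists, key=dists.get)
```

With one reference per unseen class (AWGN, GC, JPEG) plus one per training class, every test patch is simply assigned the label of its closest reference in embedding space.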

b) An experiment is carried out to see the generalization performance of the network in detecting editing operations with arbitrary values of the parameters. This is a practical scenario, as any parameter value can be used while manipulating an image with a particular editing operation. A test set is created by manipulating the UA patches using the six manipulations with arbitrary values of the parameters, as shown in Table 5.6. This test set contains 50,000 images for each manipulation, where the values of the parameters for the manipulations are selected randomly. In this experiment, we use the pre-trained siamese network from the first experiment, which was trained on the UA class and the six manipulations listed in Table 5.1 with a single parameter value for each manipulation. The manipulations are detected using the one-shot classification technique, where the reference image for each class is created using the manipulations listed in Table 5.1, i.e., using a single parameter value for each manipulation. The classification accuracies are shown in Table 5.7. The network achieves a maximum accuracy of 95.70% for the MF class and a minimum accuracy of 85.56% for the GC class. This establishes the generalization power of the network in detecting manipulations with parameter values other than the ones used during training.

Table 5.6: Editing operations with various parameters considered in this work

Editing Operation          Detail
Gaussian blurring (GB)     σ = 1.1, 1.5, 2 for each Ks = 3 and 5
Median filtering (MF)      Ks = 3, 5, 7
Resampling (RS)            Scaling = 1.2, 1.5, 1.7, 2, bilinear interpolation
Noise addition (AWGN)      σ = 1.5, 1.7, 2
Gamma correction (GC)      Parameter (γ) = 1.5, 1.7, 2
JPEG compression (JPEG)    QF = 70, 80, 90

Table 5.7: Generalization accuracies on single manipulations with arbitrary parameters

Manipulation   GB       MF       RS       AWGN     GC       JPEG
Accuracy       90.96%   95.70%   87.64%   90.42%   85.56%   90.44%
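The construction of such a test set can be sketched as below. This is a simplified illustration using SciPy primitives, not the thesis code: a random parameter value is drawn per patch from the Table 5.6 grids, JPEG is omitted as it needs an actual codec, and the kernel size Ks is not passed explicitly because `gaussian_filter` derives its kernel extent from `sigma` (via its `truncate` argument).

```python
import numpy as np
from scipy import ndimage

# Parameter grids from Table 5.6; one value is drawn at random per patch.
PARAMS = {
    "GB":   [1.1, 1.5, 2.0],        # Gaussian blur sigma
    "MF":   [3, 5, 7],              # median filter kernel size
    "RS":   [1.2, 1.5, 1.7, 2.0],   # resampling scale factor
    "AWGN": [1.5, 1.7, 2.0],        # noise standard deviation
    "GC":   [1.5, 1.7, 2.0],        # gamma value
}

def manipulate(patch, op, rng):
    """Apply one editing operation with a randomly drawn parameter value."""
    value = rng.choice(PARAMS[op])
    if op == "GB":
        return ndimage.gaussian_filter(patch, sigma=value)
    if op == "MF":
        return ndimage.median_filter(patch, size=int(value))
    if op == "RS":
        return ndimage.zoom(patch, value, order=1)  # order=1: bilinear
    if op == "AWGN":
        return patch + rng.normal(0.0, value, patch.shape)
    if op == "GC":
        return 255.0 * (patch / 255.0) ** value     # gamma correction
    raise ValueError(f"unknown operation: {op}")
```

Sampling 50,000 UA patches per operation through `manipulate` with a seeded generator would then yield a reproducible arbitrary-parameter test set of the kind described above.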

c) An experiment is carried out to test the generalization ability of the network in detecting multiple editing operations applied on a single image. The ability to detect/discriminate multiple manipulations in images is important from the forensics point of view. This is because i) it gives information regarding the processing history of an image, and ii) it can expose forgeries, such as splicing, copy-move, and retouching, as the forged image parts are generally processed by multiple editing operations to make them look visually plausible. We have created six more versions of the dataset by manipulating each image patch present in the UA class by two subsequent editing operations. In this way, the following six versions of manipulations were created: Gaussian blurring-median filtering (GB-MF), median filtering-Gaussian blurring (MF-GB), Gaussian blurring-resampling (GB-RS), resampling-Gaussian blurring (RS-GB), median filtering-resampling (MF-RS), and resampling-median filtering (RS-MF). Here, the manipulation A-B means an image patch is first manipulated using the editing operation A and then edited using the operation B.

Table 5.8: Generalization accuracies on double manipulations with seven training manipulation classes

Manipulation   GB-MF    MF-GB    GB-RS    RS-GB    MF-RS    RS-MF    Average
Accuracy       94.85%   93.60%   94.16%   90.42%   95.82%   95.53%   94.06%
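The construction of the six ordered double-manipulation classes can be sketched as follows; the single-operation parameter values here are illustrative assumptions, not the exact Table 5.1 settings.

```python
from itertools import permutations

import numpy as np
from scipy import ndimage

# Single-operation primitives (parameter values are placeholders).
OPS = {
    "GB": lambda p: ndimage.gaussian_filter(p, sigma=1.1),
    "MF": lambda p: ndimage.median_filter(p, size=5),
    "RS": lambda p: ndimage.zoom(p, 1.5, order=1),  # bilinear resampling
}

def double_classes():
    """All six ordered pairs: GB-MF, GB-RS, MF-GB, MF-RS, RS-GB, RS-MF."""
    return {f"{a}-{b}": (OPS[a], OPS[b]) for a, b in permutations(OPS, 2)}

def apply_double(patch, first, second):
    # A-B convention: operation A is applied first, then B on its output.
    return second(first(patch))
```

Because `permutations` yields ordered pairs, GB-MF and MF-GB are distinct classes, matching the A-B convention defined above.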

For this experiment, we use the network trained on images undergoing the single manipulation operations listed in Table 5.1. The double manipulations are then detected using the one-shot classification strategy by applying Algorithm 5.1. We assume to have one image from each of the double manipulation classes, as required in the one-shot classification technique.

The accuracies of the network on these double manipulation classes are listed in Table 5.8.

The network classifies the GB-MF, MF-GB, GB-RS, RS-GB, MF-RS, and RS-MF manipulations with accuracies of 94.85%, 93.60%, 94.16%, 90.42%, 95.82%, and 95.53%, respectively. This shows that even though the network was trained only to detect single editing operations, it generalizes well to double-manipulation detection with accuracies of more than 90%.

d) In this experiment, we test the generalization ability of the proposed method in detecting single and multiple manipulations after re-compression. For this, we create another version of each of the seven single manipulations and six double manipulations by re-compressing them using JPEG compression with a QF of 90. More specifically, this new version contains the following manipulations: UA-JPEG, GB-JPEG, MF-JPEG, RS-JPEG, AWGN-JPEG, GC-JPEG, GB-MF-JPEG, MF-GB-JPEG, GB-RS-JPEG, RS-GB-JPEG, MF-RS-JPEG, and RS-MF-JPEG. Then, these manipulations are detected through one-shot classification by applying the pre-trained siamese network from the first experiment. Table 5.9 shows the detection accuracies for these manipulations. The network is able to achieve an average accuracy of 82.91%, with a maximum accuracy of 87.86% for the MF-JPEG manipulation and a minimum accuracy of 79.29% for the MF-GB-JPEG manipulation. It is observed that the accuracies achieved by the proposed method on the double manipulations followed by re-compression are lower than those on the single manipulations followed by re-compression. This is expected, as the application of the second manipulation may remove the traces of the first manipulation. When the images are re-compressed with a QF of 70, the average detection accuracy over all the 12 classes drops to 75.25%. These results suggest that the siamese network, trained on singly manipulated images, can detect singly and doubly manipulated images after re-compression with decent accuracies.

Table 5.9: Generalization accuracies on different manipulations followed by re-compression with a QF of 90

Manipulation   Accuracy
UA-JPEG        84.23%
GB-JPEG        86.46%
MF-JPEG        87.86%
RS-JPEG        84.23%
AWGN-JPEG      84.59%
GC-JPEG        83.94%
GB-MF-JPEG     79.61%
MF-GB-JPEG     79.29%
GB-RS-JPEG     81.83%
RS-GB-JPEG     79.46%
MF-RS-JPEG     81.74%
RS-MF-JPEG     81.76%
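The re-compression step at a given QF can be reproduced entirely in memory with Pillow. This is a minimal sketch, assuming grayscale uint8 patches rather than the exact thesis pipeline:

```python
import io

import numpy as np
from PIL import Image

def jpeg_recompress(patch, quality=90):
    """Re-compress a grayscale uint8 patch in memory at the given JPEG QF."""
    buf = io.BytesIO()
    # Encode to JPEG at the chosen quality factor without touching disk.
    Image.fromarray(patch.astype(np.uint8)).save(buf, format="JPEG",
                                                 quality=quality)
    buf.seek(0)
    # Decode back to an array; lossy artifacts at this QF are now baked in.
    return np.asarray(Image.open(buf))
```

Applying `jpeg_recompress(..., quality=90)` to each singly or doubly manipulated patch yields the *-JPEG classes above; lowering `quality` to 70 reproduces the harsher setting in which the average accuracy drops to 75.25%.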

5.4.1.4 Dependence of Generalization Accuracy on Number of Training Manipulations