**6.2 Proposed Method**

**6.2.2 Noise-Stream Encoder-Decoder (NSED)**

This stream focuses on learning low-level manipulation-related features. Low-level image features have proven helpful in distinguishing different types of image manipulation operations [50], and have also been shown to localize different types of forgeries [16]. In splicing forgeries, the spliced parts are likely to have noise levels different from those in the authentic regions. In copy-move and content-removal forgeries, the forged regions are generally passed through different types of post-processing operations to make them visually undetectable. These post-processing operations change the local dependencies of the pixels in the forged regions and hence introduce low-level artefacts.

The NSED feeds the green colour channel of the input image to its encoder. The green channel is selected for extracting the low-level artefacts because the Bayer pattern, which most camera devices use as the preferred colour filter array (CFA), samples twice as many green pixels as red or blue ones [45], [145]. For this reason, the green channel contains the least noise among the 3 colour channels, and low-level artefact extraction from this channel is more reliable [50]. The encoder in this stream is similar to the one in the ISED. However, it employs an additional convolutional layer, called the high-pass filter (HPF) layer [96], [50], [16], before the first normal convolutional layer. The HPF layer computes high-pass residuals from the input image to suppress the image content and enhance the noise content. The kernels in the HPF layer follow certain constraints to compute the high-pass residuals. Depending upon the constraints put on the kernels, the HPF layers available in the literature can be categorized as: (i) the median filter residual layer, (ii) the constrained convolutional layer, and (iii) the SRM filter layer. The median filter residual layer [96] employs a fixed set of weights in the kernels to compute median-filtered residuals as the output of the HPF layer. The constrained convolutional layer [50] extracts content-adaptive high-pass residuals by learning weights under a pre-defined constraint, given by

$$w^{1}_{k}(0,0) = -1 \quad \text{and} \quad \sum_{(l,m)\neq(0,0)} w^{1}_{k}(l,m) = 1 \qquad (6.1)$$

where w^{1}_{k}(l,m) denotes the weight at position (l,m) of the kth filter and w^{1}_{k}(0,0) denotes the
weight at the center of the corresponding filter kernel. This constraint in Equation (6.1) is
enforced on the weights of the constrained layer during the training stage. More specifically, at
the end of each training iteration, the central filter weight of each constrained filter kernel is set
to -1, and the non-central filter weights are normalized by dividing by their sum.
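This end-of-iteration projection can be sketched in NumPy as follows; the function name and the 3-filter, 5×5 configuration are illustrative, matching the layer described later in this section rather than any code released with this work.

```python
import numpy as np

def project_constrained_kernel(w):
    """Enforce the constraint of Equation (6.1) on one filter kernel:
    the central weight is fixed to -1 and the non-central weights are
    rescaled so that they sum to 1. Assumes an odd, square kernel."""
    w = w.copy()
    c = w.shape[0] // 2          # index of the central position (0, 0)
    w[c, c] = 0.0
    w /= w.sum()                 # normalize non-central weights to sum to 1
    w[c, c] = -1.0               # set the central weight to -1
    return w

# Applied at the end of each training iteration to every kernel of the
# constrained convolutional layer (here: 3 filters of size 5x5).
rng = np.random.default_rng(0)
kernels = np.stack([project_constrained_kernel(k)
                    for k in rng.normal(size=(3, 5, 5))])
```

Because the projection is re-applied after every weight update, the filters remain high-pass (their coefficients sum to zero) throughout training while the non-central weights stay content-adaptive.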


The SRM filter layer computes high-pass residuals from the input image by applying a fixed set of SRM filters [116]. Both the constrained high-pass filters and the SRM filters have recently been shown to be effective in different forensics problems [16], [51]. In this work, we have experimented with the constrained convolutional and the SRM filter layers and found that the constrained high-pass filters perform better than the SRM filters. This is intuitive, as the constrained convolutional layer's content-adaptive filters learn their kernel weights from the training data itself, whereas the SRM filters are fixed and may not extract optimal features for forgery detection tasks. Hence, we use the constrained filters for all the experiments reported in this chapter.
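To make the fixed-filter alternative concrete, the sketch below applies one commonly cited SRM kernel (the 5×5 second-order "KV" filter) to an image channel; the actual filter bank of [116] contains many such kernels, so this single filter is only a representative assumption.

```python
import numpy as np

# The 5x5 "KV" kernel, one of the fixed SRM high-pass filters; its
# coefficients sum to zero, so flat image content is suppressed.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=np.float64) / 12.0

def srm_residual(channel):
    """Apply one fixed SRM high-pass filter to a 2-D image channel
    (valid correlation, no padding), returning the noise residual."""
    windows = np.lib.stride_tricks.sliding_window_view(channel, KV.shape)
    return np.einsum('ijkl,kl->ij', windows, KV)

# On a flat (constant) region the residual is ~0 everywhere: the filter
# removes image content and keeps only high-frequency noise.
res = srm_residual(np.full((16, 16), 128.0))
```

Unlike the constrained filters, these coefficients never change during training, which is the basis of the comparison above.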

Therefore, the first layer of the encoder is a constrained convolutional layer with 3 filters of size 5×5. The rest of the encoder has 4 convolutional layers, 4 residual blocks, and 4 max-pooling layers. The sizes of the kernels in the normal convolutional and the max-pooling layers are 3×3 and 2×2, respectively. The numbers of filters in the normal convolutional layers are 32, 64, 128, and 256. An input image of size 256×256×1 is fed to the encoder, and a feature map of size 16×16×256 is produced as the output. Input images of any other size are first resized to 256×256×1. The low-level artefacts present in forged images, i.e., the inconsistencies in the noise level and the traces of various image processing operations, are more affected by the resizing operation than the high-level artefacts. In this work, we assume the forged regions to be sufficiently large. Then, there will be sufficiently many forged pixels in the resized image that are obtained by applying the downsampling operation (e.g., bilinear averaging) to forged pixels only. Under these conditions, the low-level artefacts of the forged regions will still be present in the downsampled images. Since we train the proposed network on downsampled images, it learns the additional low-level artefacts present in them.
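The stated 256×256×1 → 16×16×256 mapping follows from the four 2×2 max-pooling stages, each of which halves the spatial dimensions while the 3×3 convolutions (assumed size-preserving, i.e., padded) widen the channels. A minimal shape trace, with the interleaving of residual blocks simplified away:

```python
def encoder_output_shape(h, w, in_ch=1, widths=(32, 64, 128, 256)):
    """Trace the NSED encoder's feature-map sizes: each stage applies a
    size-preserving 3x3 convolution (and a residual block, omitted here)
    followed by 2x2 max-pooling, which halves height and width."""
    shapes = [(h, w, in_ch)]
    for ch in widths:
        h, w = h // 2, w // 2    # 2x2 max-pooling halves the spatial size
        shapes.append((h, w, ch))
    return shapes

# 256x256x1 input -> 128x128x32 -> 64x64x64 -> 32x32x128 -> 16x16x256
trace = encoder_output_shape(256, 256)
```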

The coarse feature maps produced by the encoder are then fed to the decoder, which performs upsampling and convolution to produce dense feature maps. The decoder of this stream has the same architecture as the one in the image-stream. Therefore, the output of the NSED is a feature map of size 256×256×32.
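The decoder mirrors the encoder's four halvings with four 2× upsampling stages. The sketch below uses nearest-neighbour upsampling as an assumption (the chapter does not specify the upsampling operator here) and omits the convolutions that would refine the maps and reduce the channels to 32:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Coarse encoder output: 16x16x256. Four upsample(+convolution) stages
# restore the 256x256 spatial size; the convolutions (omitted in this
# sketch) would bring the channel count down to 32.
feat = np.zeros((16, 16, 256))
for _ in range(4):
    feat = upsample2x(feat)
```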
