Photoshop is synonymous with image editing, but Adobe, the company behind the popular editor, is also training an artificial intelligence program to recognize manipulated photos. Adobe researchers, including senior researcher Vlad Morariu, recently published work on training a machine learning system to recognize doctored photos.
Rather than looking at all adjustments in general, including innocent adjustments like simple color corrections, the team focused on three types of image edits that can be used to create misleading images. Those “tampering techniques” include splicing or merging parts of one image with another, cloning sections within the same image, and removing an object.
To teach the system to recognize a faked photo in a matter of seconds, the team fed it thousands of manipulated images. The software looks for a handful of clues that suggest an image may have been altered. One method is to look for slight inconsistencies among the red, green, and blue color channels of an edited picture.
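The article doesn't detail how the color-channel clue works, but the general idea can be illustrated with a toy sketch: a region spliced in from a photo with different lighting or white balance often carries a different relationship between its channels than the rest of the image. The following is a minimal, hypothetical demonstration (not Adobe's actual method) that builds a synthetic image, pastes in a patch with a shifted red channel, and flags blocks whose red-minus-green offset departs from the image-wide value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic RGB "photo": correlated channels plus mild sensor noise.
H, W = 128, 128
lum = np.tile(np.linspace(60, 180, W), (H, 1))
img = np.stack([lum + 10, lum, lum - 5], axis=-1) + rng.normal(0, 3, (H, W, 3))

# Simulate a splice from a photo with a different white balance:
# the pasted patch has a noticeably different red-green offset.
img[80:112, 80:112, 0] -= 20  # shift the red channel in the patch

def channel_offset_map(image, block=16):
    """Per-block mean of (R - G): a crude cross-channel consistency map."""
    diff = image[..., 0] - image[..., 1]
    h, w = diff.shape
    return np.array([
        [diff[i:i + block, j:j + block].mean() for j in range(0, w, block)]
        for i in range(0, h, block)
    ])

offsets = channel_offset_map(img)
# Blocks whose R-G offset departs sharply from the image-wide median
# are candidate spliced regions.
suspicious = np.abs(offsets - np.median(offsets)) > 10
```

Here the four 16×16 blocks covering the pasted patch are the ones flagged; a real detector would of course learn far subtler channel statistics than a fixed mean-offset threshold.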
The second method the program uses to find fakes is to generate a noise map of the image. The noise or grain that a camera captures, which is more obvious at high ISOs but present in all images, has a pattern unique to each camera sensor. Because of that unique pattern, objects added from another image stand out in the noise map. Using both techniques together improved the program's accuracy.
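A rough feel for the noise-map idea can be given with another hypothetical sketch (again, not the researchers' actual pipeline): subtract a locally smoothed version of the image to isolate the high-frequency noise residual, then measure the residual's strength per block. A patch pasted in from a source with a different noise level stands out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grayscale "photo": a smooth gradient plus camera noise (sigma=8).
H, W = 128, 128
base = np.tile(np.linspace(50, 200, W), (H, 1))
img = base + rng.normal(0, 8.0, (H, W))

# Simulate a splice: paste a patch whose noise level differs (sigma=2).
img[32:64, 32:64] = base[32:64, 32:64] + rng.normal(0, 2.0, (32, 32))

def noise_map(image, block=16):
    """Crude noise map: high-pass residual, then per-block residual std."""
    # High-pass residual: each pixel minus the mean of its 4 neighbours.
    pad = np.pad(image, 1, mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
             + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
    resid = image - neigh
    h, w = image.shape
    return np.array([
        [resid[i:i + block, j:j + block].std() for j in range(0, w, block)]
        for i in range(0, h, block)
    ])

stds = noise_map(img)
# Blocks whose noise level deviates strongly from the image-wide median
# are candidate spliced regions.
median = np.median(stds)
suspicious = np.abs(stds - median) > 0.5 * median
```

In this toy case, only the blocks covering the quieter pasted patch are flagged. The sensor-pattern fingerprinting the article alludes to is far more sophisticated, but the underlying signal is the same: the splice disagrees with the rest of the image in its noise map.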
As concern for the viral spread of fake news grows, more companies are looking to implement programs to help spot not just inaccurate text but altered images. Facebook, for example, recently expanded its fake news algorithms to include images and video.
“Adobe is certainly a leader in creating tools that can produce manipulated images, so I think that makes Adobe uniquely positioned to also create tools that can help people determine if they are in possession of an authentic or manipulated image,” Morariu said.
The software won’t label a manipulated image with 100 percent certainty, but it could help reduce the number of fakes floating around feeds — if the program makes it out of the research stage and sees wider implementation. Recognizing things like copied objects may help flag doctored images, but it can’t catch other categories of fakes, such as an unaltered image paired with an unrelated story or with an event that happened at a different time.
Until then, there are a handful of ways to spot a fake photo without becoming a forensic photo expert.