Can you trust what you see?
Once upon a time, seeing was believing. Not anymore. With the ever-increasing capabilities of AI, seeing can no longer be trusted, at least in an online world. That loss of trust puts a question mark behind just about everything you see online. One study calls this “visual deception” and makes the following noteworthy statement: “As synthetic deception becomes more seamless, the findings underscore the instability of visual trust.”
Recently we’ve been running a series of tests on a variety of images, in part to validate the accuracy of our image detection models. We wanted to assess a cross section of images and eventually settled on forty headshots, male and female, across four ethnicities. Here are the basic structure, images, and outcomes of the tests:
In our first pass we identified AI images only 20% of the time. We then tweaked the model ever so slightly, and on the second pass we identified AI images 70% of the time. The model has since been tweaked again, and on the third pass AI identification dropped to 62.5%. Does that make sense, and how is it possible? Prior to the third pass we added support for the WEBP and AVIF image formats. It’s possible the increased resolution improved the detection of photos that had been cleaned up and enhanced with AI but are actual photographs rather than images generated by AI, shifting some borderline calls. Needless to say, correctly identifying images is a learning process and a constantly moving target: the models learn and improve with use, and admittedly, forty images is a small sample set.
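The pass-to-pass numbers work out cleanly over the forty-image sample. A minimal sketch of the arithmetic (note: the per-pass correct counts of 8, 28, and 25 are inferred here from the stated percentages, not separately published figures):

```python
# Accuracy per test pass over the 40-headshot sample.
# Counts are inferred from the percentages stated in the article.
TOTAL_IMAGES = 40
passes = {"pass 1": 8, "pass 2": 28, "pass 3": 25}

for name, correct in passes.items():
    accuracy = correct / TOTAL_IMAGES * 100
    print(f"{name}: {correct}/{TOTAL_IMAGES} correct = {accuracy:.1f}%")
```

Running this prints 20.0%, 70.0%, and 62.5%, matching the three passes; the drop from pass 2 to pass 3 is a difference of just three images, which is well within the noise you would expect from a sample this small.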
A couple of points to note and remember. There are at least three broad categories when working with images: 1) authentic photographs, such as those produced by AP photographers (although good luck finding raw, unaltered images, since professional gear and nearly all phone cameras have some level of AI built in); 2) AI-generated images, which are complete fabrications from artificial intelligence; and 3) blended images, a combination of the two. AI-enhanced photographs are a prime example: professional-looking headshots are in wide use today, and it’s common to take ordinary photos and turn them into polished headshots.
Visual integrity and ethics in today’s world are a question of degree and judgment, precisely because AI is prevalent in most, if not all, image production and reproduction. I suppose the question becomes one of intent. For example, the last image in my collection of headshots (the handsome dude in the white t-shirt) shows up on multiple LinkedIn profiles as well as in many social media posts. It’s an easy conclusion that some of those folks are misleading at best, or at worst outright lying, about what they look like.
One other test I’d like to share: several weeks ago we came across a post on Medium with a reasonably thorough evaluation of eight image detectors (Sightengine, Decopy AI, Illuminarty, Hive Moderation, WasitAI, undetectableAI, Winston AI, and AIDetector), run against eight different types of images. The best of the group identified 7 of the 8 images correctly; the worst identified 5 of 8. And how did AI TruthTeller® fare on the same eight images? We correctly identified 7 of the 8. We re-ran the same images today with the same result: 7 of 8 correctly identified.
The only image missed, seen here to the left, was digital artwork created in a program called Blender. Blender-created images are not our focus, and since it is an obvious piece of artwork, we’re okay with not being able to tell whether it was hand-drawn or created by artificial intelligence.
Photo by Vadim Bogulov on Unsplash
Let me add here that we use private models, and the models are fine-tuned on a regular basis. In the Medium comparison where we achieved 87.5% accuracy, we were using our initial image detection model. Headshots are a specialized segment, so we tweaked the model and immediately jumped to 70% AI detection. AI models are defined in many ways by the data they consume and the rules that govern them. A Human Resources department could realistically use a product like ours, train and orient the models for headshots, and continually improve the accuracy of the AI detection process, ending up with a very robust and useful tool.
Lastly, AI detection in its many forms is a necessary part of our digital society. We will find fake or AI-enhanced images in the news, on social media, on dating sites, in student applications for college, on job sites, and in a host of other places. We need some level of confidence in what we see; good judgment depends on it, and human relationships are built on it. Whether you use our product, AI TruthTeller®, or another, we strongly encourage you to sharpen your ability to discern and detect deception across all forms of communication.