The world is starting to feel scary; some days it is as if we were living inside a Black Mirror episode. Artificial intelligence has reached a point of technical perfection where anyone with a phone and a free app can create images that are impossible to distinguish from real ones.
There are no longer six fingers on people’s hands, no strange objects, no backwards lettering… It is unsettling, and it has killed visual trust almost overnight. We no longer assume that what we see ever existed; we assume it was generated by an AI and a good prompt.

For two centuries, photographic images have served as proof, memory and testimony, but they have stopped being reliable, and the example we bring you today proves it better than anything: which one was made by AI, and which one was not?
A very fast transformation
It is worrying because we are entering the era of permanent doubt, in which anything can be fake: photos, news, everything. That doubt is transforming our relationship with memory, with communication and, above all, with truth.
The transition will be neither smooth nor voluntary, because we will not have time to adapt. Visual trust has been our cognitive anchor for thousands of years, from survival in the wild to digital culture.
Change in perception
For centuries, our brain learned that seeing was synonymous with knowing, and photography reinforced that idea by becoming the first form of valid evidence, both socially and in court. That certainty has now been lost: images created with AI are not only hyper-realistic, they also arrive too fast for critical thinking to step in and assess whether what it sees is real.
The explosion of disinformation
The concern is no longer the industrial, organized and expensive disinformation we have known, but what we might call “domestic” disinformation. Anyone can produce convincing images without knowledge or resources: a teenager, a resentful ex, anyone. And they will do it simply because they can, while most of us remain unaware of how dangerous this is in the wrong hands.
Technical perfection arrived first
Technology has reached a level of realism that our cognitive and social capacities cannot keep up with, because the human brain is not designed to question every image it sees.
Yet that is exactly what we will have to do from now on; critical thinking has become a necessity.
Living in suspicion mode
Every photograph or frame we see on social networks now carries a cognitive micro-task: we have to evaluate it, suspect it, contrast it and decide whether we believe it is real.
From now on, sharing images becomes something strategic, and of course there will be people who use this “tool” to take advantage and do harm. If you stop to think about it, it is terrifying, because any of us could end up a victim of this “novelty”.
The loss of collective memory
Photos have been the tool that gave coherence to our memories.
They were not just images; they were anchors of identity. But now that everything can be falsified, what will become of our memories?
And the future?
It is frightening: we are going to have to start doubting everything. In the past, if you saw a photograph of the Eiffel Tower taken with a camera, you knew someone had been in Paris; if you saw a photo with a celebrity, you knew those two people had actually taken it together. Not anymore. Now everything can be true and everything can be false. We live in Schrödinger’s era.
And the inevitable consequence will be mandatory certification for images: a verifiable digital seal, corporate seals of authenticity. Governments will surely want to join this verification effort too, because a simple photo puts a lot at risk, from our personal relationships to democracy itself.
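Efforts along these lines already exist: the C2PA “Content Credentials” standard attaches cryptographically signed provenance metadata to image files at capture or edit time. As a minimal sketch of the underlying idea only (here using an HMAC with a hypothetical shared key; real provenance schemes use public-key signatures so anyone can verify without holding the secret):

```python
import hashlib
import hmac

# Hypothetical signing key held by the camera or capture app.
# Real systems (e.g. C2PA) sign with a private key and publish the
# matching public key, so verifiers never need the secret.
SIGNING_KEY = b"camera-private-key"

def seal_image(image_bytes: bytes) -> str:
    """Return a verifiable seal: an HMAC over the image's SHA-256 hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, seal: str) -> bool:
    """Check that the image has not been altered since it was sealed."""
    return hmac.compare_digest(seal_image(image_bytes), seal)

original = b"raw pixels straight from the sensor"
seal = seal_image(original)

print(verify_image(original, seal))             # untouched image: verifies
print(verify_image(b"AI-edited pixels", seal))  # altered content: fails
```

The seal proves only that the bytes are unchanged since signing; it says nothing about whether the scene in front of the lens was real, which is why provenance standards also record *who* signed and *when*.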
A world without spontaneity
If everything must be verified, nothing can be immediate; and if nothing is immediate, the experience is impoverished. Reality would lose its freedom, and so would we.
And in the hypothetical case that we have to request verification for real photos, it will surely be the big tech companies who offer it, which is absurd: we would be paying for something that has been free for two centuries, the presumption that what was photographed existed.
Welcome to the age of suspicion
We cannot train the entire population to evaluate thousands of images a day; it would exhaust our cognitive capacity. We will no longer be able to trust what we see, or what others have photographed.
Yet it is something we will have to learn. Our parents and grandparents never had to doubt a photograph; we face a life of continuous suspicion…
Is there no solution?
No. We cannot simply train an AI to detect images generated by other AIs, because each detector improves the generators and each generator improves the detectors, in an endless arms race.
We cannot educate people into critical thinking while they process thousands of fake images a day, and we will not be able to legislate our way out either, because technology moves much faster than the law.
Photography as a tool of truth has died: nothing will seem real even when it is, and vice versa. Welcome to the era of infinite suspicion…
