Is AI the end of photography?

In the AI generated image the wool around the neck and on the chest of the animal looked, well, contrived (apart from getting the breeds wrong).
 
In the AI generated image the wool around the neck and on the chest of the animal looked, well, contrived (apart from getting the breeds wrong).
It looks like the pile on a cotton rug. The head looks too small to me too.
 
People buy photography magazines because the images are real and authentic, and created by human beings.
I agree, but many seemingly do not agree.
I can remember about 20 years ago going to see the film Troy and being really impressed by the battle scenes, until my wife said they were probably CGI. I was then much less impressed.
Whether they actually were CGI (at that time) is not the point; the point is that artificially created stuff which does not actually exist (and for which they have not gone to the trouble of employing and clothing thousands of extras) is just not the same, not for me anyway.
Another example was a programme I watched a good few years ago about space with Brian Cox. I thought the pictures and videos of the surface of some of the planets were absolutely fantastic, until my wife reminded me that they were probably CGI, and I then totally lost interest and turned it off. It did occur to me that Mr Cox should have made it clear to viewers which shots were real and which were not, because I just assumed it was all CGI (even stuff which may not have been) and had no interest in it from that point.
I want to see real stuff.....
However, my personal reservations about CGI generated content do not appear to be widely shared, bearing in mind that almost all of it is now CGI.....
 
CGI is used on the grounds of cost, safety and drama.

Personally, I welcome it for these purposes, i.e. if I'm watching a James Bond film I know that the whole thing is designed to entertain, and none of the content pretends to be authentic.

But, when and if used to deceive, i.e. if the content pretends to be authentic, then that's different.
 
CGI is used on the grounds of cost, safety and drama.

Personally, I welcome it for these purposes, i.e. if I'm watching a James Bond film I know that the whole thing is designed to entertain, and none of the content pretends to be authentic.

But, when and if used to deceive, i.e. if the content pretends to be authentic, then that's different.
But surely you'd find, say, a battle scene with thousands of extras in it that was actually real much more impressive than a CGI version?
Similarly, a car accident, or stunt driving, in a Bond film that was actually real would be worth watching, but not so much a CGI version?
 
I see film as entertainment, and expect to suspend disbelief. If the story works then the film is good, and if not, no amount of CGI or extras can compensate.
 
I think the main place they do huge battle scenes is in Asian cinema. The extras are cheap and the films are very popular. Pay each extra £100 a day and it soon mounts up.
 
In the AI generated image the wool around the neck and on the chest of the animal looked, well, contrived (apart from getting the breeds wrong).
The eyes look weird as well, like they're from a wiser animal than a sheep. A sheep's eyes should look a bit more... carefree. It's actually frowning, and sheep aren't meant to be able to frown.
 
For me photography is a hobby (it's not something I make money from). In the future I can see publications going to AI image-creation software to get the image they want, as they can have a play to get their ideal photo rather than go to photographers or stock photographs. It's scary how easy it is now.

Even for a hobby, if it was just about getting the photo, I could sit at the computer and get some great AI images without the early starts or venturing out into the cold. In the future will we see 'AI photographers'?

Right now, as I type my opinion here, there are ongoing wildfires around Los Angeles. I've been catching up on the BBC News website, following some other newspapers on Twitter (now known as X), and seeing them on Instagram. I see real life photos of the events that are happening.

They're real life photos of real life events.

There will always be the need for human photographers to go and take news photos of events.

It is doubtful that, in future, publications will go for AI, as they would still need real humans to take real photos of real life events.

If a motoring magazine started using AI generated images of cars for their written articles, I would lose interest and give up reading. I would rather see a real photo of a real car; after all, you can't get into an AI car and take it for a test drive.

No, I think publications will still need to use real photos, otherwise they risk losing readers by misleading them with phoney images of non-existent subjects.

I'm sure human photographers will still be around for a very long time.
 
Right now, as I type my opinion here, there are ongoing wildfires around Los Angeles. I've been catching up on the BBC News website, following some other newspapers on Twitter (now known as X), and seeing them on Instagram. I see real life photos of the events that are happening.

They're real life photos of real life events.

There will always be the need for human photographers to go and take news photos of events.

It is doubtful that, in future, publications will go for AI, as they would still need real humans to take real photos of real life events.

If a motoring magazine started using AI generated images of cars for their written articles, I would lose interest and give up reading. I would rather see a real photo of a real car; after all, you can't get into an AI car and take it for a test drive.

No, I think publications will still need to use real photos, otherwise they risk losing readers by misleading them with phoney images of non-existent subjects.

I'm sure human photographers will still be around for a very long time.

Yes, I agree with that. There is no way an AI generated image can capture the immediacy of a bush fire.
 
I would be interested to know how closely the images apparently created actually matched the source images that the software was trained on. It would not surprise me if you could actually identify parts on the generated pictures that came directly from the source material. And if that were the case, the whole area of copying, plagiarism and copyright should start to apply a strong magnifying glass to generated images, with royalties being paid to copyright owners.

I wonder if the software would tell you which pictures it used as source material.

I did a short course on machine learning a couple of years back. Even a short catalog of training images was 50,000+ images in size for something like working out whether a picture shows a cat or a dog and then drawing a dog.
The "AI models" would also generate further images of dogs and train themselves on them in addition to the original database, and so dilute further and further from any single source image.
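To give a flavour of the classification half of that, here's a minimal sketch - not the actual course code, just an illustrative PyTorch-style training loop, with random tensors standing in for the 50,000+ real training images and a deliberately tiny network:

import torch
import torch.nn as nn

# Placeholder "catalog": 64 random 3x32x32 tensors standing in for real photos,
# each given a random cat (0) or dog (1) label.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

# Deliberately tiny convolutional classifier - real models are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # two outputs: "cat" and "dog"
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the current guesses?
    loss.backward()                        # work out how to adjust the weights
    optimiser.step()                       # nudge the weights a little
    print(f"epoch {epoch}: loss {loss.item():.3f}")

A real system would use the full labelled catalog and a far bigger model, but the basic shape of the loop - predict, measure the error, nudge the weights, repeat - is the same.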
 
It could be argued that the AI isn't "generating" a picture of a seal, deer, or squirrel, but copying an existing one, and just putting a different background on it.

AI only knows what a seal looks like by seeing a previous photograph of one. And memorising/storing it.

So it could be argued that all it's actually doing is searching the internet (or its memory bank that it's been trained with), finding an existing photo of a seal, finding an existing photo of some nice background including a sunset, and putting them together. It hasn't actually created anything from scratch at all.

The machine learning models, at their base level, mostly take a large catalog of seal pictures and identify the similarities on a pixel-by-pixel basis. They then generate a new image of a seal from that combined data.
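As a deliberately crude toy of that idea - numpy only, with random arrays standing in for the seal photos, and nothing like what a modern diffusion model actually does - you can pool per-pixel statistics across a whole catalog and sample a "new" image from the pooled numbers:

import numpy as np

rng = np.random.default_rng(0)

# Placeholder "catalog": 500 random 64x64 greyscale arrays standing in for seal photos.
catalog = rng.random((500, 64, 64))

# "Identify the similarities on a pixel-by-pixel basis": per-pixel mean and spread
# across the whole catalog.
pixel_mean = catalog.mean(axis=0)
pixel_std = catalog.std(axis=0)

# "Generate a new image from that combined data": sample each pixel around the
# pooled statistics, then clip to a valid intensity range.
new_image = np.clip(rng.normal(pixel_mean, pixel_std), 0.0, 1.0)

print(new_image.shape)  # (64, 64) - drawn from the pooled data, not copied from one photo

The only point of the toy is that the output comes from data pooled across the whole catalog rather than being lifted from any single source photo; whether that counts as "creating" anything is exactly the argument above.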
 
AI is progressing rapidly, but I don't think it'll kill photography entirely. Maybe I'm just an optimistic idiot that doesn't want to face the truth, but I really do believe there will be a huge market for those that DON'T want to deal with AI, and some brands and professionals will advertise themselves exactly through the fact that they don't use AI. I don't photograph for a living, it's a hobby of mine, so I'm not really worried about being "replaced" with AI. I don't use AI tools for editing, though; the only remotely close thing that I have is Photoworks, which has some AI-driven tools. If using AI ever becomes a necessity I'll probably adapt. But I can see how worrying it all is for those that do photography for a living.
 
I did a short course on machine learning a couple of years back. Even a short catalog of training images was 50,000+ images in size for something like working out whether a picture shows a cat or a dog and then drawing a dog.
The "AI models" would also generate further images of dogs and train themselves on them in addition to the original database, and so dilute further and further from any single source image.
And not ‘photography related’ as such but this is where the model fails. When it starts training itself based on its own results, that’s where the errors creep in unchecked.
I read something the other day where someone had asked Google for reviews of an essay, and one of the 5 sources given was completely made up (AI). This could have led to someone quoting a completely fictional source in their ‘research’.

It is also how we end up with those ‘portraits’ with the hands that are completely unrealistic.

I've said this elsewhere. Training computers to do things humans find difficult is fairly straightforward; training them to do 'human' things is very difficult. No one has yet built a robot that can carry a cup of tea over sand dunes. And they're a long way away from doing so.
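To put a toy number on the "training itself based on its own results" point above, here is a small numpy sketch in which a very simple model (just a fitted mean and standard deviation, standing in for an image generator, purely illustrative) is repeatedly retrained on its own output with no fresh real data:

import numpy as np

rng = np.random.default_rng(1)

# Generation 0: "real" data - 30 samples from a distribution with mean 0.0 and std 1.0.
data = rng.normal(0.0, 1.0, size=30)

for generation in range(15):
    # "Train" the model: fit a mean and standard deviation to the current data.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: fitted mean {mu:+.2f}, fitted std {sigma:.2f}")
    # The next generation sees only what the current model generates.
    data = rng.normal(mu, sigma, size=30)

With nothing real coming back into the loop, the fitted numbers wander further and further from the original data and there is nothing to pull them back - the same unchecked creep, just in miniature.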
 
The power of a photograph is its visual representation of an authentic interaction between subject and photographer.
This ^^^^^

Actually I'd drop the word authentic - the definition is too vague.

A photograph is created by the interaction between photographer, subject, light & location. It's a process dependent on particular circumstances.

AI might be able to make comparable images but it'll never recreate *your* images of *your* subjects.

Full disclosure: I regularly use AI tools in my retouching. It can do the dull repetitive stuff much faster and more reliably than I can do it by hand.
 
This ^^^^^

Actually I'd drop the word authentic - the definition is too vague.

A photograph is created by the interaction between photographer, subject, light & location. It's a process dependent on particular circumstances.

AI might be able to make comparable images but it'll never recreate *your* images of *your* subjects.

Full disclosure: I regularly use AI tools in my retouching. It can do the dull repetitive stuff much faster and more reliably than I can do it by hand.
I think, when choosing the word authentic, I was looking for something to convey that the subject's involvement in the making of a photograph gives an expectation of reality, which affects the way people view photographs, and the way we create them.

Of course the idea of reality only goes so far. An example I have posted before is "only" photographing the run-down and boarded-up shops in your local high street, which can be false and true at the same time. This may well be a true record of what you photographed, but by "not" photographing the thriving businesses in your high street, it is a false record of the high street reality.

So the photographs are both true and false depending on how you view and use them. But people "believe" photographs in a way they don't believe other media.

So, in the context of this thread, AI and my full post, I was using authentic to reflect the responsibility of the photographer to produce photographs that are authentic to the experience they shared with the subject.

Other than that, I agree with your post. Just trying to explain why I chose to include "authentic".

edited to correct some mis-typed words
 
This may well be a true record of what you photographed, but by "not" photographing the thriving businesses in your high street, it is a false record of the high street reality.
I agree.

This article, from 2009, reviews the status of photographic evidence, at a time when manipulating visual data was, at best, difficult...


The rise of far more complex manipulation by "AI" (a term I find misleading at best) has to raise the bar considerably, for the use of images as evidence in court.
 
Light and location are part of the subject. IMO
You could also add sounds, smells and other things such as weather. All of these things are likely to affect how you feel about (see) your subject.
 
In my opinion, AI is a load of tosh for people who can't do things for themselves.
 
Not always for me

Location: sometimes. I think it's often about the interaction of the subject - usually a human in my case - with the location.
Light: not often. I usually make my own.
Always for you, and everyone else.

I think it's a mistake to think of the subject of a photograph as only the object being depicted. In reality it's everything about it and its surroundings.

Consider Monet's haystacks. They are not just paintings of haystacks, they're a series of paintings of changed/changing light on haystacks. You could argue that the light is the subject and the haystacks are simple props.

I feel the same about 'composition' in pictures. It's not only about shapes arranged in a picture space, it's about light, gesture and all sorts of small things.

This splitting of elements of picture construction prevents people (no pun intended) seeing the big picture. It's why the response to pictures can be 'great light' or 'stunning composition', the rest of the picture's make-up being bland.

I'll stop here and agree to disagree.
 
I agree.

This article, from 2009, reviews the status of photographic evidence, at a time when manipulating visual data was, at best, difficult...


The rise of far more complex manipulation by "AI" (a term I find misleading at best) has to raise the bar considerably, for the use of images as evidence in court.
Interesting article.

From the very beginning photographs have been manipulated, from photographic manipulation (e.g. Gustave Le Gray's composite skies in the 1850s) to setting up the subject matter.

History is so full of photographs deliberately misrepresenting reality, it's possibly amazing they ever achieved the "camera never lies" status. Which brings me back to my original post.
 
History is so full of photographs deliberately misrepresenting reality, it's possibly amazing they ever achieved the "camera never lies" status.
Indeed.

The Soviets were famously effective at that, adding and subtracting faces as they came into or fell out of favour.
 
I can only speak for myself and my experience.

As someone who works in the creative industry (advertising agency), I can speak to where the state of AI is currently in terms of our adoption.

Our work is sometimes meant to capture reality, but more often it's meant to convey an idea. We used to draw the idea, get it approved, then go and shoot it. But that's expensive and it's easier to use stock, which is where we're at currently with a lot of clients. At some point in the future, it will be cheaper and easier to use AI, but currently it's time-consuming to generate something usable.

The benefit is that you get a unique image. The danger of stock is the classic comment we always get: "It has to be stock imagery, but it can't look like stock" - cue days of trawling through libraries to try and find something suitable. At least with AI, no one else will buy the same image from the same site.

Legally, we can't just generate images on Midjourney and use them on client work. This is due to issues of copyright, especially with those sites that trawl the internet and scrape images from anywhere. There were a few early adopters that used AI images in campaigns and got caught out with this, though I've lost track of how successful the lawsuits were.

We can, however, utilise sites like Shutterstock (now merged with Getty) that have their own AI generator, as their imagery is generated from images that already exist on their site and can be attributed, since all the metadata is contained within them. All the component parts of the final images are acknowledged and each owner will be compensated, though I can only imagine the amount will be a pittance - even compared to current stock prices.

We also have our own proprietary AI site within our agency (we're part of WPP - one of the biggest holding companies in the world), but it's several steps behind the likes of Midjourney and DALL-E.

Some clients that we have (currently) won't accept any AI imagery, no matter where it comes from. Others will accept it if we accept the risk.

What I feel AI image generation lacks currently is control. You might generate one good image of a scene, but when you love most of it and want to change a small portion, it generates a whole new image, which may or may not be entirely different to the one you liked. It's like asking another person to generate an image from the same prompts.

So if we're trying to put together a storyboard where you see the same person in several scenes, it used to struggle. It's getting better as you can now feed in reference images and keep the people the same throughout.

It's all changing though, and very, very quickly. Very soon you'll be able to have full control of angles, repetition, everything that you'd be able to do in the real world.

When you're in the process of image generation, it's a very different landscape to image capture. IMHO.

Though I did see this the other day and it made me smile... more and more miracle shots every day.

[attached image: ECcGqq3XkAE005e.jpg]



Everything changes - not just photography. We all adapt, but I'm also concerned about how good AI is getting at conceptualising, as I've still got a few years left before I can afford to retire.

For anyone that does layout work, if you've not seen how Adobe are upping the game with layouts using AI, then that's also frightening. Designers will also be starting to worry about it a lot, with launches like Adobe Project Remix.

View: https://www.youtube.com/watch?v=UGgdC3RvyMQ
 
I agree, but many seemingly do not agree.
I can remember about 20 years ago going to see the film Troy and being really impressed by the battle scenes, until my wife said they were probably CGI. I was then much less impressed.
Whether they actually were CGI (at that time) is not the point; the point is that artificially created stuff which does not actually exist (and for which they have not gone to the trouble of employing and clothing thousands of extras) is just not the same, not for me anyway.
Another example was a programme I watched a good few years ago about space with Brian Cox. I thought the pictures and videos of the surface of some of the planets were absolutely fantastic, until my wife reminded me that they were probably CGI, and I then totally lost interest and turned it off. It did occur to me that Mr Cox should have made it clear to viewers which shots were real and which were not, because I just assumed it was all CGI (even stuff which may not have been) and had no interest in it from that point.
I want to see real stuff.....
However, my personal reservations about CGI generated content do not appear to be widely shared, bearing in mind that almost all of it is now CGI.....

But this has always been the case in films. While practical effects were much more common before CGI, scenes in Star Wars were famously matte painted to be full of stormtroopers, for example.

And while I can't name any off-hand, I bet hundreds of early films had painted backdrops out of windows.

Those sorts of illusions don't annoy me, but when the movie depends entirely on them, I lose interest. So I'm not a huge fan of most of the Marvel films. All Blue/Greenscreen - I almost prefer the Ray Harryhausen stop-motion stuff.
 
I totally disagree that publications will adopt AI images either. Otherwise why not just pick up a book on paintings or a storybook with fictional images of wildlife and landscapes etc.

People buy photography magazines because the images are real and authentic, and created by human beings.

The human element of photography can't be overstated.

If magazines forgot about cameras then they aren't photography magazines. It'd just be digital art and that would be as boring as watching paint dry. I want to see photographs of animals or landscapes which actually exist, not something that's been generated by a computer.

And whether it's a hobby or work, I cannot and never will understand the satisfaction in typing a fictional image up and sitting there looking at it with any sense of pride. Pride comes from knowing you made the image, not some algorithm of 1s and 0s.


You speak as a photographer (and a very good one at that) but you have to remember that most USERS of photographs don't give a **** about whether an image is real or generated by AI. They just want something that looks nice and/or illustrates the point they want to make, or fills in a gap in the page. If they genuinely cared about the quality of the images they are using they would use quality images and pay for them, rather than pick up something from shutterstock or wherever for £1 or whatever the going rate is now.
 
But this has always been the case in films. While practical effects were much more common before CGI, scenes in Star Wars were famously matte painted to be full of stormtroopers, for example.
They were following a fine old tradition.

Films such as Georges Méliès' "The Man in the Moon" and "A Kingdom of Fairies" were almost entirely painted effects with a bit of live action thrown in. When they were new, at the very beginning of the 20th Century, they were very impressive. I find them still to have a great deal of charm.
 
With the discussion on identifying AI, I assume people are aware of the "Content Credentials" technology launched by Adobe, the New York Times and, ironically, Twitter. But just in case, there is a link below.

It's built into the latest Leica cameras, Nikon are introducing it to their cameras, and Capture One works with it. Presumably other cameras and software also work with it, but these are the ones I know about.

Leica has some blurb and links here:

 
Phil made a good point earlier: as more and more AI images get generated, the AI will be learning from its own images, inbreeding so to speak. I can't see that being a good thing long term.
 
You speak as a photographer (and a very good one at that) but you have to remember that most USERS of photographs don't give a **** about whether an image is real or generated by AI. They just want something that looks nice and/or illustrates the point they want to make, or fills in a gap in the page. If they genuinely cared about the quality of the images they are using they would use quality images and pay for them, rather than pick up something from shutterstock or wherever for £1 or whatever the going rate is now.
I think you are correct. You only have to look at what camera phones are doing now, putting people into the photograph. It isn't "real", but in a year's time when they look back at that holiday, no one will say "how come Jenny (or whoever) is in the photo?" It will be "look at us all at X or Y".

I have just got back into using Lightroom and Photoshop, and the AI tool for getting rid of things like birdsh!t on pictures of statues etc. is excellent, and far quicker than before.
 
I used to get a decent amount of work shooting products, and food for restaurants. Both of these areas have netted me nothing these past six months, and AI is definitely to blame. Same as my stock photography. I suspect AI is using my old work to generate free crap for businesses.

Some areas will always want real photos of the actual product.

Events and weddings will never want AI.

Humans will always want to create their own art.

I'm quietly hopeful that AI will come to an end once people realise how many resources are being used to generate bad plagiarised art, but I think we're a long way from that point.
I used to think the same, but many people just don't seem to care, although I partially blame media coverage of AI, which is frequently very positive and lacks the criticism that should be there. The recent announcement of OpenAI approaching AGI (Artificial General Intelligence) was clearly utter rubbish and was slated heavily on many tech sites, but I noticed that in general coverage it was often praised as a move to the next step and so forth. It gives the impression that these LLM AI systems are just getting started, but the already significant problems are only going to keep getting worse: they need more and more training data, but they can't afford to do that legally.

I am still hopeful, though, that copyright law is going to bring these AI companies down; the expression on the OpenAI CTO's face when she's asked if they're using YouTube to train Sora says it all (at 4:30):

View: https://youtu.be/mAUpxN-EIgU?t=264


Other AI companies have also been found to be stealing transcript data from YouTube, but I suspect Google isn't doing anything because they themselves are also stealing data for their own LLM AI systems. However, these companies are also stealing data from films and documentaries:


I was going to say I think large movie companies would take action to protect their copyright, but I guess you could argue many of them are keen to use genAI systems to lower production costs.
 
I am still hopeful, though, that copyright law is going to bring these AI companies down; the expression on the OpenAI CTO's face when she's asked if they're using YouTube to train Sora says it all (at 4:30):

It certainly does say it all. :mad:
 
I suspect a key reason ordinary people don't care about AI creating unreal images is the expectation that photographers have been doing it for years in Photoshop. WE have trained them to accept it.
 
I would be interested to know how closely the images apparently created actually matched the source images that the software was trained on. It would not surprise me if you could actually identify parts on the generated pictures that came directly from the source material. And if that were the case, the whole area of copying, plagiarism and copyright should start to apply a strong magnifying glass to generated images, with royalties being paid to copyright owners.

I wonder if the software would tell you which pictures it used as source material.
I suspect the first legal case will make things interesting: if there is a source but it can't be identified when requested by the legal process, who is being prosecuted - the person who set the parameters in the AI, or the coders/owners of the AI tool?

It's a bit like driverless vehicles: if they crash and cause a death, who is at fault - the owners of the car or the vehicle manufacturers?
 
It's a bit like driverless vehicles: if they crash and cause a death, who is at fault - the owners of the car or the vehicle manufacturers?

I would expect a shared liability, with the owner/operator being required to have insurance as well as the maker.
 