It looks like the pile on a cotton rug. The head looks too small to me, too.

In the AI generated image the wool around the neck and on the chest of the animal looked, well, contrived (apart from getting the breeds wrong).
But surely you'd find, say, a battle scene with thousands of extras in it that was actually real much more impressive than a CGI version?

CGI is used on the grounds of cost, safety and drama.
Personally, I welcome it for these purposes, i.e. if I'm watching a James Bond film I know that the whole thing is designed to entertain, and none of the content pretends to be authentic.
But, when and if used to deceive, i.e. if the content pretends to be authentic, then that's different.
Surely CGI, or any of this stuff, is supposed to look authentic, and therefore it is, by definition, deceiving?
The eyes look weird as well, like they’re from a wiser animal than a sheep. A sheep’s eyes should look a bit more… carefree. It’s actually frowning, and sheep aren’t meant to be able to frown.
For me photography is a hobby (it’s not something I make money from). In the future I can see publications going to AI image-creation software to get the image they want, as they can have a play to get their ideal photo rather than go to photographers or stock photography. It’s scary how easy it is now.
Even for a hobby, if it were just about getting the photo, I could sit at the computer and get some great AI images without the early starts or venturing out into the cold. In the future will we see ‘AI photographers’?
Right now, as I type my opinion here, there are ongoing wildfires around Los Angeles. I've been catching up on the BBC News website, following some other newspapers on Twitter (now known as X), and seeing them on Instagram. I see real-life photos of the events that are happening.
They're real life photos of real life events.
There will always be the need for human photographers to go and take news photos of events.
It is doubtful that, in future, publications will go for AI, as they would still need real humans to take real photos of real-life events.
If a motoring magazine started using AI-generated images of cars for its written articles, I would lose interest and give up reading. I would rather see a real photo of a real car; after all, you can't get into an AI car and take it for a test drive.
No, I think publications will still need to use real photos, otherwise they risk losing readers by misleading them with phoney images of non-existent subjects.
I'm sure human photographers will still be around for a very long time.
I would be interested to know how closely the images apparently created actually matched the source images that the software was trained on. It would not surprise me if you could actually identify parts on the generated pictures that came directly from the source material. And if that were the case, the whole area of copying, plagiarism and copyright should start to apply a strong magnifying glass to generated images, with royalties being paid to copyright owners.
I wonder if the software would tell you which pictures it used as source material.
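As an aside, one crude way to probe how closely a generated image matches a suspected source is a perceptual "average hash": near-duplicates hash almost identically, while unrelated images differ. This is purely an illustrative sketch under my own assumptions; the tiny pixel grids and function names here are made up, and real tools would work on full-size images.

```python
def average_hash(pixels):
    # pixels: 2D list of greyscale values (0-255).
    # Mark each pixel as above (1) or below (0) the image's mean brightness.
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    return [1 if v >= avg else 0 for v in flat]

def hamming(h1, h2):
    # Number of positions where the two hashes disagree:
    # small distance suggests one image may be derived from the other.
    return sum(a != b for a, b in zip(h1, h2))

source = [[10, 200], [30, 220]]      # hypothetical "training" image
copy_like = [[12, 198], [33, 225]]   # near-duplicate of the source
different = [[200, 10], [220, 30]]   # same values, inverted layout

print(hamming(average_hash(source), average_hash(copy_like)))   # prints 0
print(hamming(average_hash(source), average_hash(different)))   # prints 4
```

In practice libraries do this on downscaled 8x8 or 16x16 greyscale versions of real photos, but the principle is the same: a generated picture that lifts a region wholesale from a training image would sit at a suspiciously small hash distance from it.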
It could be argued that the AI isn't "generating" a picture of a seal, deer, or squirrel, but copying an existing one, and just putting a different background on it.
AI only knows what a seal looks like by seeing a previous photograph of one, and memorising/storing it.
So it could be argued that all it's actually doing is searching the internet (or its memory bank that it's been trained with), finding an existing photo of a seal, finding an existing photo of some nice background including a sunset, and putting them together. It hasn't actually created anything from scratch at all.
And not ‘photography related’ as such, but this is where the model fails: when it starts training itself based on its own results, that’s where the errors creep in unchecked.

I did a short course on machine learning a couple of years back. Even a short catalogue of training images was 50,000+ images in size for something like working out whether a picture showed a cat or a dog and then drawing a dog.
The “AI models” would also generate further images of dogs and train themselves on them in addition to the original database, and so dilute further and further from any single source image.
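The "training on its own results" failure mode described above (often called model collapse) can be shown with a toy simulation. This is my own hypothetical sketch, not anything from the thread: the "model" is just a mean and standard deviation, and each generation is fitted only to a small sample drawn from the previous generation's model, so estimation errors compound unchecked.

```python
import random
import statistics

random.seed(42)

# Generation 0: fit the "model" to plenty of real data (the "real photos").
real_data = [random.gauss(100.0, 15.0) for _ in range(1000)]
mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)
print(f"generation 0: mean={mu:.1f}, stdev={sigma:.1f}")

for generation in range(1, 6):
    # Each later generation trains only on a small sample of the
    # previous generation's synthetic output, never on real data,
    # so the fitted parameters drift away from the original source.
    synthetic = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.mean(synthetic)
    sigma = statistics.stdev(synthetic)
    print(f"generation {generation}: mean={mu:.1f}, stdev={sigma:.1f}")
```

With real generative models the effect is the same in spirit: each self-trained generation inherits and amplifies the previous one's sampling errors, which is exactly the dilution the comment describes.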
This ^^^^^

The power of a photograph is its visual representation of an authentic interaction between subject and photographer.
I think, when choosing the word authentic, I was looking for something to express how the subject's involvement in the making of a photograph gives an expectation of reality, which affects the way people view photographs and the way we create them.
Actually I'd drop the word authentic - the definition is too vague.
A photograph is created by the interaction between photographer, subject, light & location. It's a process dependent on particular circumstances.
AI might be able to make comparable images but it'll never recreate *your* images of *your* subjects.
Full disclosure: I regularly use AI tools in my retouching. It can do the dull repetitive stuff much faster and more reliably than I can do it by hand.
Light and location are part of the subject, IMO.
I agree.

This may well be a true record of what you photographed, but by "not" photographing the thriving businesses in your high street, it is a false record of the high street reality.
You could also add sounds, smells and other things such as weather. All of these are likely to affect how you feel about (see) your subject.
Always for you, and everyone else.

Not always for me.
Location: sometimes. I think it's often about the interaction of the subject - usually a human in my case - with the location.
Light: not often. I usually make my own.
Interesting article. I agree.
This article, from 2009, reviews the status of photographic evidence, at a time when manipulating visual data was, at best, difficult...
The rise of far more complex manipulation by "AI" (a term I find misleading at best) has to raise the bar considerably, for the use of images as evidence in court.
Indeed.

History is so full of photographs deliberately misrepresenting reality, it's possibly amazing they ever achieved the "camera never lies" status.
Not to mention 'looted watches'. https://en.wikipedia.org/wiki/Raising_a_Flag_over_the_Reichstag#Editing
The Soviets were famously effective at that, adding and subtracting faces as they came into or fell out of favour.
I agree, but many seemingly do not agree.
I can remember about 20 years ago going to see the film Troy and being really impressed by the battle scenes, until my wife said they were probably CGI. I was then much less impressed.
Whether they actually were CGI (at that time) is not the point; the point is that artificially created stuff which does not actually exist (nor have they gone to the trouble of employing and clothing thousands of extras) is just not the same, not for me anyway.
Another example was a programme I watched a good few years ago about space, with Brian Cox. I thought the pictures and videos of the surface of some of the planets were absolutely fantastic, till my wife reminded me that they were probably CGI, and I then totally lost interest and turned it off. It did occur to me that Mr Cox should have made it clear to viewers which shots were actually real and which were not, because I just assumed it was all CGI (even stuff which may not have been) and had no interest in it from that point.
I want to see real stuff.....
However, my personal reservations about CGI-generated content do not appear to be widely shared, bearing in mind that almost all of it is now CGI...
I totally disagree that publications will adopt AI images either. Otherwise why not just pick up a book of paintings, or a storybook with fictional images of wildlife and landscapes, etc.?
People buy photography magazines because the images are real and authentic, and created by human beings.
The human element of photography can't be overstated.
If magazines forget about cameras then they aren't photography magazines; it'd just be digital art, and that would be as boring as watching paint dry. I want to see photographs of animals or landscapes which actually exist, not something that's been generated by a computer.
And whether it's a hobby or work, I cannot and never will understand the satisfaction of typing up a fictional image and sitting there looking at it with any sense of pride. Pride comes from knowing you made the image, not some algorithm of 1s and 0s.
What I want is an AI that does the washing up and cleaning, so that I have more time to do the photography.
I am reasonably confident that we will soon have AI software which can readily recognise AI-generated images.
Dave
They were following a fine old tradition.

But this has always been the case in films. While practical effects were much more common before CGI, scenes in Star Wars were famously matte-painted to be full of stormtroopers, for example.
I think you are correct. You only have to look at what camera phones are doing now, putting people into the photograph. It isn't "real", but in a year's time when they look back at that holiday, no one will say "how come Jenny (or whoever) is in the photo?" It will be "look at us all at X or Y".

You speak as a photographer (and a very good one at that) but you have to remember that most USERS of photographs don't give a **** about whether an image is real or generated by AI. They just want something that looks nice and/or illustrates the point they want to make, or fills a gap in the page. If they genuinely cared about the quality of the images they are using, they would use quality images and pay for them, rather than pick up something from Shutterstock or wherever for £1 or whatever the going rate is now.
I used to think the same, but many people just don't seem to care, although I partially blame media coverage of AI, which is frequently very positive and lacks the criticism that should be there. The recent announcement of OpenAI approaching AGI (Artificial General Intelligence) was clearly utter rubbish and was slated heavily on many tech sites, but I noticed that in general coverage it was often praised as moving to the next step and so forth. It gives the impression that these LLM AI systems are just getting started, but the already significant problems are only going to keep getting worse: they need more and more training data, but they can't afford to do that legally.

I used to get a decent amount of work shooting products, and food for restaurants. Both of these areas have netted me nothing these past six months, and AI is definitely to blame. Same as my stock photography. I suspect AI is using my old work to generate free crap for businesses.
Some areas will always want real photos of the actual product.
Events and weddings will never want AI.
Humans will always want to create their own art.
I'm quietly hopeful that AI will come to an end once people realise how many resources are being used to generate bad plagiarised art, but I think we're a long way from that point.
It certainly does say it all.

I am still hopeful, though, that copyright law is going to bring these AI companies down; the expression on the OpenAI CTO's face when she's asked if they're using YouTube to train Sora says it all (at 4:30):
I suspect the first legal case will make things interesting. If there is a source but it can't be identified when requested by the legal process, who is being prosecuted: the person who set the parameters in the AI, or the coders/owners of the AI tool?
It's a bit like driverless vehicles: if they crash and cause a death, who is at fault, the owner of the car or the vehicle manufacturer?