Is there enough transparency on how artificial intelligence and machine learning editing tools actually work?

I've been testing out different sharpening and denoise tools, including the AI tools from Topaz. I've also been watching and reading lots of reviews, and while the results are incredible, I'm quite concerned by how those results are achieved. I couldn't find much, if any, detail or discussion on this (so this is based on my limited knowledge of machine learning), but my understanding is that the results don't come from uncovering detail I captured, but from using detail from similar images/pixel patterns to guess what would be there if I had captured it.

If I'm correct - and I'm happy to be corrected if I'm not - shouldn't this be a concern for the photography community? Without going too far down the rabbit hole of what is photography and what is a photograph, if editing tools are guessing information (even if those guesses appear natural to the viewer), isn't this the same as swapping out a sky in Photoshop? I know the latter is frowned upon in photography competitions, and there seems to be evidence of some now banning AI processing. I've been worked up enough about this to fire up my unloved blog, so there are some more thoughts there if anyone is interested (trust that's OK, mods, but let me know if I need to copy/paste here), including this video review of Topaz where butterfly wing detail is magically 'recovered'.

As mentioned, I'm very happy to be corrected if I've got this completely wrong. Alternatively very interested in your views on this.
 
I don't see any problem, we all choose what tools we want to use to achieve the results we need, both in terms of the camera body itself, lens choice, lighting and then any post processing. Competitions, agencies and others will set whatever rules they want that we need to follow if we want to participate in those. The new generation of tools is only going to get better and more prevalent so better to embrace the positives they bring in my view.
 
Morning Simon,

You've covered some very good points in your OP and it's interesting to hear your views and opinions on this. I am currently writing my dissertation for university on the topic of bias in machine learning. Researching it uncovers the minefield of algorithm accountability, and it's exactly as you mentioned: should we be concerned about the 'black box' nature of machine learning in our lives, without knowing how these tools reached their conclusions or, in some cases, being able to question the outcome?

You are correct in your interpretation of how these tools work: they don't uncover detail; rather, the AI tool identifies the image and makes an intelligent assumption about what should be added to make a 'pleasing' image. Of course, the issues arise when we open the conversation about what is and isn't a 'pleasing' image, and this is where bias can be a real problem.

In the context (and context is a big thing with algorithm accountability) of image editing for personal use, should we be concerned? In my opinion, no: it doesn't have a huge impact on everyday life. I understand your concerns in this application, but the lines are blurred between photography and digital image creation these days, and I, for one, am pleased to see more and more people creating imagery with technology.

The wider issue of algorithm accountability is something we should all be concerned about, especially in sociotechnical systems. If anyone is interested in reading more on this subject, this paper by Selbst et al., Fairness and Abstraction in Sociotechnical Systems: https://dl.acm.org/doi/10.1145/3287560.3287598 (hope you can access it), is an insightful read on the complexities of integrating machine learning into society.
 
I don't see any problem, we all choose what tools we want to use to achieve the results we need, both in terms of the camera body itself, lens choice, lighting and then any post processing. Competitions, agencies and others will set whatever rules they want that we need to follow if we want to participate in those. The new generation of tools is only going to get better and more prevalent so better to embrace the positives they bring in my view.
Yes, I appreciate I sound like an old fart stuck in my ways (actually only 38 :LOL:). I don’t see a problem with people embracing them, but I think people need to realise what the tools are actually doing, and today I’m not sure that’s the case. AI tools are fundamentally different to anything that’s gone before. All post processing, be it physical or digital, has involved the manipulation of the original data captured. Tools that go further than this (e.g. blending multiple images, such as sky replacement - possible in both physical and digital pp) have been unavoidably clear that this is what they’re doing. And of course, typically it will be a blend of multiple images you yourself have captured.

The difference here is that now the tool is saying “ah, I know what you were trying to capture but failed to do so. Let me draw it for you from work of people who could”.

I’m not sure if I have a problem with the technology or not, but I do have a problem with the fact that this isn’t really being discussed and so awareness is low. In the very long term, I’m not sure where photography lands in this context. How much AI means the image is no longer yours…
 
Morning Simon,

You've covered some very good points in your OP and it's interesting to hear your views and opinions on this. I am currently writing my dissertation for university on the topic of bias in machine learning. Researching it uncovers the minefield of algorithm accountability, and it's exactly as you mentioned: should we be concerned about the 'black box' nature of machine learning in our lives, without knowing how these tools reached their conclusions or, in some cases, being able to question the outcome?

You are correct in your interpretation of how these tools work: they don't uncover detail; rather, the AI tool identifies the image and makes an intelligent assumption about what should be added to make a 'pleasing' image. Of course, the issues arise when we open the conversation about what is and isn't a 'pleasing' image, and this is where bias can be a real problem.

In the context (and context is a big thing with algorithm accountability) of image editing for personal use, should we be concerned? In my opinion, no: it doesn't have a huge impact on everyday life. I understand your concerns in this application, but the lines are blurred between photography and digital image creation these days, and I, for one, am pleased to see more and more people creating imagery with technology.

The wider issue of algorithm accountability is something we should all be concerned about, especially in sociotechnical systems. If anyone is interested in reading more on this subject, this paper by Selbst et al., Fairness and Abstraction in Sociotechnical Systems: https://dl.acm.org/doi/10.1145/3287560.3287598 (hope you can access it), is an insightful read on the complexities of integrating machine learning into society.
Good to have someone with knowledge of the topic involved, and to know at least some of my assumptions may be correct. The bias angle is an interesting one, and I find the challenges in autonomous vehicles fascinating (do you program the car to hit the old man or the baby in the pram?).

I agree about people creating imagery with technology, but is it creating or is it copying? If AI is using existing bodies of work, guessing what you were trying to capture and filling in the gaps, where does optimising your image end and plagiarism begin? And as per my post above, the main issue is awareness and transparency. You can’t have an opinion on any of these questions we’re discussing unless you’re aware of what is happening to your image, and how much of it is the data you captured.

Good luck with your dissertation!
 
but I do have a problem with the fact that this isn’t really being discussed and so awareness is low.
Without wishing to be facetious, this begins to sound like discussions as to whether painters who didn't grind and mix their own paints were real painters.

If you want to use the technology, you are at the mercy of the many people who made that technology available. The image you see when using a digital camera is purely the result of firmware which controls the functions of that camera. You have very little chance of knowing what decisions were made as to how the programs work and so you start out with an image file containing data over which you have no detailed control.

When you copy the image file into a desktop for editing, yet more decisions have been taken for you. How does your editing program decide which pixels to add or discard when you alter the size of the image? What pixels are affected when you alter the brightness or contrast? What numerical value is assigned to each pixel in the file?
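To make the point concrete, here is a toy sketch (in Python with numpy, purely illustrative and not taken from any real editor's source) of the simplest resizing rule, nearest-neighbour interpolation. Real editors use fancier kernels such as bilinear, bicubic or Lanczos, but the principle is the same: the program's author, not the photographer, decided which pixels appear in the output.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize: each output pixel copies the closest input pixel."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # which source row each output row samples
    cols = np.arange(new_w) * w // new_w   # which source column each output column samples
    return img[rows[:, None], cols]

# A 2x2 checkerboard upscaled to 4x4: every input pixel is simply duplicated.
# No information is invented beyond repetition -- but the *rule* was chosen
# by the programmer, invisibly to the user.
img = np.array([[0, 255],
                [255, 0]], dtype=np.uint8)
big = resize_nearest(img, 4, 4)
```

Swap the indexing rule for a weighted average of neighbours and you get bilinear resizing; the user of the editor never sees which choice was made.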

I think that this is a non-issue for anyone who wishes to create images using digital tools, unless they are also programmers creating those tools.
 
Good to have someone with knowledge of the topic involved, and to know at least some of my assumptions may be correct. The bias angle is an interesting one, and I find the challenges in autonomous vehicles fascinating (do you program the car to hit the old man or the baby in the pram?).

I agree about people creating imagery with technology, but is it creating or is it copying? If AI is using existing bodies of work, guessing what you were trying to capture and filling in the gaps, where does optimising your image end and plagiarism begin? And as per my post above, the main issue is awareness and transparency. You can’t have an opinion on any of these questions we’re discussing unless you’re aware of what is happening to your image, and how much of it is the data you captured.

Good luck with your dissertation!
Thank you :) - I need it :coat:

That's not really how the technology works... The AI engine used to 'fill in' the gaps is not really plagiarising the work; it's more drawing inspiration from previous bodies of 'good' images and creating another 'good' image from your original. It's pretty complex, but it's similar technology to that used in face-generating algorithms - check out this: https://thispersondoesnotexist.com/ Every time you refresh the page, it generates a new 'person', having learnt a huge number of features and characteristics of human faces, from simple things like eyes, noses and eyebrows to more nuanced things like eye colour, hair colour and the relationship between facial structure and hair colour, such as associating grey hair with wrinkles, etc.
The technology is able to mimic styles and characteristics and generate new data from previous data. It's really very clever, but it must be used with caution and its capabilities should not be exaggerated.
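The core idea can be sketched in a few lines of numpy. This is a deliberately toy "generator": a fixed random linear map from a latent vector to a tiny image. A real GAN (like the one behind thispersondoesnotexist.com) learns these weights against a discriminator over millions of faces; here they are random, purely to show the shape of the idea — each output is a new sample drawn from learned structure, not a copy of any single source image.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 16
# Stand-in for trained generator parameters; in a real GAN these encode
# everything the network has learnt about faces.
weights = rng.normal(size=(latent_dim, 8 * 8))

def generate():
    """Map a fresh random latent code to an 8x8 greyscale 'image'."""
    z = rng.normal(size=latent_dim)           # new random point in latent space
    img = np.tanh(z @ weights).reshape(8, 8)  # squash to the [-1, 1] pixel range
    return img

# Two calls give two different outputs from the same learned structure --
# the analogue of refreshing thispersondoesnotexist.com.
a, b = generate(), generate()
```

The hedge matters: nothing here is trained, so the outputs are noise. The point is only the mechanism — sample a latent code, push it through fixed learned weights, get a novel image.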

I've placed 'good' in inverted commas as this is where things start to go wrong in machine learning... What metric do we use to define 'good'? How do we measure it? Is it the number of Instagram likes? If so, what is the demographic of the people liking this type of image, and what/who else do they follow? Is the population that was exposed to this image representative of the environment in which you will be presenting your 'good' image?

This is just scratching the surface; it all comes down to the problem of abstracting our human world into the computer world. In doing so, we must define a finite number of things, yet what makes a good image is infinitely varied and differs from person to person. When creating this technology, the developer must choose sides and pin this fluid definition of 'good' down to a static one, and that, in my opinion, is the problem with applying technology in social contexts.

I waffled a bit - but context is very important ;)
 
Yes, I appreciate I sound like an old fart stuck in my ways (actually only 38 :LOL:). I don’t see a problem with people embracing them, but I think people need to realise what the tools are actually doing, and today I’m not sure that’s the case. AI tools are fundamentally different to anything that’s gone before. All post processing, be it physical or digital, has involved the manipulation of the original data captured. Tools that go further than this (e.g. blending multiple images, such as sky replacement - possible in both physical and digital pp) have been unavoidably clear that this is what they’re doing. And of course, typically it will be a blend of multiple images you yourself have captured.

The difference here is that now the tool is saying “ah, I know what you were trying to capture but failed to do so. Let me draw it for you from work of people who could”.

I’m not sure if I have a problem with the technology or not, but I do have a problem with the fact that this isn’t really being discussed and so awareness is low. In the very long term, I’m not sure where photography lands in this context. How much AI means the image is no longer yours…
In this specific case of sharpening and denoise, I really don't think there is anything to worry about. We either like the results of the tool or we don't. The use of AI in other fields and applications that could cause harm is another matter.
 
isn't this the same as swapping out a sky in photoshop?
Pretty much.

I don't have a problem with that either.
The wider issue of algorithm accountability is something we should all be concerned about, especially in sociotechnical systems. - If any one is interested in reading more on this subject, this paper: https://dl.acm.org/doi/10.1145/3287560.3287598 (hope you can access this) by Selbst et al. - Fairness and Abstraction in Sociotechnical Systems is an insightful read into the complexities of integrating machine learning in society.

Interesting, I'll save that for later. I do occasionally use ML in the day job but it's usually a tool of last resort. Traditional numerical analysis or image processing is considerably more transparent and reliable for my purposes.
 
Pretty much.

I don't have a problem with that either.
But I think the problem I have is that there is quite a large divide in the community over whether one does or does not have a problem with swapping out a sky, and whether they choose to do it or not.

Now we have AI tools that are doing similar and perhaps not everyone would use them if they understood them (or maybe people would continue to use them and change their stance on swapping out skies!)
 
Just to add my tuppence - in my youth I often printed a 'large' copy of a photograph (B&W) and then used an airbrush to enhance and add detail before re-photographing and reprinting to give an improved image. I used my knowledge and brain to add the detail based on experience; sometimes it worked well, other times it looked too 'false'. I guess the new software is not so different.

That being said I am also concerned that a new genre of edited photographs is upon us. Pictures taken on smart phones and published on the likes of Facebook all seem to look similar....

PS I do use Topaz and try to keep it to a 'light' touch !
 
This is a good point @Tricky01 and I think we should be discussing it. At the very least machine learning is likely to end up with a fairly standardised and uniform way of processing images because it is driven by what pleases the majority and that will feed into other algorithms which present images based on popularity, e.g. search engines, which in turn may make it harder for creative and challenging work to get seen.

I have half-jokingly suggested on here before that soon we won't need to actually go out and photograph things: all the good/pretty/interesting stuff will already have been photographed, and to create our own "photograph" of a given scene we will just need to gather all the available images and use software to create the photo that we want from the available material. Suppose you want to sharpen bird feathers: it's probably already possible to take feather texture from a sharp photo of feathers and combine it with the colours and feather placement from the photo you took, giving an image with apparently very sharp feathers. It's like replacing a sky.

I think the ethics of this sort of technology depends on the intended use of an image, if you are selling a product you presumably want the best image of it that you can get and the means used to produce the image don't really matter. On the other hand if you are producing journalistic photos or natural history, documentary, etc. then the photo needs to be made from light captured at the scene at the given time.
 
o create our own "photograph" of a given scene we will just need to gather all the available images and use software to create the photo that we want from the available material.
It's interesting that you make such a point.

In the early days of illustrated newspapers and magazines, that was pretty well what was done. The line drawings that illustrated articles were often assembled from a "library" of images that the artist had on hand, sometimes augmented by his/her own sketches around the subject. This changed gradually as the use of photographs became common. Even then, most photographs were posed because of the limitations of the time.

The American Civil War photographers, such as Brady, Gardner and Reekie, seem to have experimented with "live action" photography, but the results were more like Robert Capa's D-Day images, after the lab got through with them, than the sharp, detailed images produced by the likes of David Douglas Duncan or Don McCullin.

Now, having gone through a period during which photographs of events were trusted, we are entering another time in which images are often viewed (rightly) with suspicion as to their truthfulness.
 
What digital sensor actually captures colour? The software creates the colour by guesstimating what it should be. These advanced AI programs are just an extension of that.
 
What digital sensor actually captures colour? The software creates the colour by guesstimating what it should be. These advanced AI programs are just an extension of that.
Well, not quite: each photosite on the sensor is covered by either a red, green or blue filter, which means the photosite captures the intensity of one colour. AFAIK this is also true of colour film. This is analogous to the RGB pixels on your screen, and it is actually your eye that synthesises a range of colours from the RGB pixels, not the software.
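That said, the raw converter does interpolate: at every photosite the two missing channels are computed from neighbours. Here's a toy numpy sketch (my own illustration, assuming an RGGB Bayer mosaic and the crudest bilinear rule; real converters use far more sophisticated demosaicing) of filling in the green channel. Even a "straight" photo therefore contains computed, not captured, colour values — though computed from this exposure's data, not from other people's images.

```python
import numpy as np

def demosaic_green(raw):
    """Fill in green at every pixel of an RGGB mosaic by averaging the
    four up/down/left/right neighbours (toy bilinear; edges handled
    crudely via zero padding)."""
    h, w = raw.shape
    green = np.zeros((h, w))
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 1::2] = True   # G sites on even rows
    green_mask[1::2, 0::2] = True   # G sites on odd rows
    green[green_mask] = raw[green_mask]   # measured values pass through
    padded = np.pad(green, 1)
    for y in range(h):
        for x in range(w):
            if not green_mask[y, x]:
                # Guess the unmeasured green value from its four neighbours.
                n = (padded[y, x + 1] + padded[y + 2, x + 1]
                     + padded[y + 1, x] + padded[y + 1, x + 2])
                green[y, x] = n / 4.0
    return green
```

On a uniform patch the interpolation is exact; on edges and fine texture it is a guess, which is why demosaicing algorithms differ and why even raw files involve the converter's decisions.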

The machine learning is not an extension of that if it adds information. In a straight rendering, a photon reflected off the scene hits a photosite on the sensor, its energy level is digitised and stored, and this is then rendered on a screen or by a printer. If an ML algorithm intervenes and replaces some noise with data that was not part of the original scene, then that seems ethically different.

We can expect the whole process to be lossy - data will be lost in the air, in the lens, in the sensor, etc. and to compensate we can, presumably, permit some degree of data manipulation and interpolation, e.g. making dark areas lighter, sharpening edges, averaging noise because the data is still the data captured at the scene. However replacing parts of the original data with data not originally present in the scene is a different ball game and gets into "replacing the sky" territory.
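The distinction drawn above can be shown with a toy numpy example (mine, purely illustrative): classical noise reduction such as neighbourhood averaging works only with the captured data, trading resolution for lower noise and inventing nothing. An ML denoiser, by contrast, may substitute plausible detail learned from other images.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((64, 64), 128.0)                  # the "true" flat grey patch
noisy = scene + rng.normal(0, 20, scene.shape)    # simulated sensor noise

def box_filter(img, k=3):
    """Average each pixel over a k x k neighbourhood (simple box blur)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# The smoothed image is less noisy, but every output value is a combination
# of values actually captured at the scene -- no external data enters.
smoothed = box_filter(noisy)
```

Averaging nine independent samples cuts the noise standard deviation roughly threefold while leaving the mean (the captured signal) intact, which is the sense in which this kind of manipulation stays within "the data captured at the scene".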
 
If I'm correct - and I'm happy to be corrected if I'm not - shouldn't this be a concern for the photography community? Without going too far down the rabbit hole of what is photography and what is a photograph, if editing tools are guessing information (even if those guesses appear to the viewer to be natural), isn't this the same as swapping out a sky in photoshop?
I don't think it's the same as swapping out a sky. With swapping a sky, even if the latest tools help a bit with the technical aspects, the photographer still has full control over the process, and it's just an extension of their creative tool box.

With programs like Topaz AI and DxO (even with Prime, and more so with DeepPRIME), the programs "invent" things, in an unpredictable way. And I'm surprised there isn't more discussion of it.

With denoising, I've had four-paned windows turned into six-paned windows, tree branches added and removed, and spurious small blocks of texture added into an otherwise accurate (but less noisy and sharper) rendition of feathers. Others have had letters changed, e.g. from an "o" into an "a", and at least initially Topaz Gigapixel had a reputation for adding spurious content when re-sizing (not sure if that's still the case).

Different programs affect the same image in different ways (and with Topaz, there is still a fair amount of control over how the AI works). Most of the time, I don't notice any issues, but I may be just missing them.

Overall, for me, the results from AI de-noising etc. are too useful to ignore, but I use it with care. Following on from your butterfly link: as many animal and plant species are often identified by the precise shape and patterns of their markings, an AI-enhanced photograph could well be problematic in confirming an ID.

But, then again, from the very beginning, photographs have never lived up to the myth of the camera never lying, and AI programs just add another layer of caution.

Most of the time I'm not sure it matters all that much, and is probably offset by the benefits to photographers who work in conditions that need high ISOs.
 
There is an interesting thread here on this https://forum.luminous-landscape.com/index.php?topic=140336.0

Below is the content from the post that summarises some findings compared to the original and the DXO Pure Raw de-noised version:

"Just comparing the output (right) with the noisy unprocessed capture (centre), the result is awesome, but when we look at the original scene some considerations have to be made:
  • Text masked by noise is cleaned, but its lines and traces are not recovered, as expected.
  • This is the most interesting spot: the neural network interprets and creates non-existent edges and shaded facets, which look plausible in the noisy image but didn't exist in the scene.
  • The neural network tends to simplify complex structures: the carvings on the leather mask are a series of curves but are interpreted as more linear shapes.
  • Fine detail in the scene, completely lost in the noisy shot, is interpreted as a plain colour area (with some fine Gaussian-like grain at pixel level).
  • Flat colours are very well recovered, as they would be with most noise-reduction procedures.

In the real world, this kind of software will be used without being able to compare against the real detail, so most photographers will consider all the fake detail recreated by the NN to be valid."

The article the post is based on is here:


It needs Google Translate.
 
But I think the problem I have is that there is quite a large divide in the community over whether one does or does not have a problem with swapping out a sky, and whether they choose to do it or not.

Now we have AI tools that are doing similar and perhaps not everyone would use them if they understood them (or maybe people would continue to use them and change their stance on swapping out skies!)
Which community?

If you're making photos for yourself - do whatever you like. Just be honest about it.
If you're making photos for someone else - do what they specify.
 
Which community?

If you're making photos for yourself - do whatever you like. Just be honest about it.
If you're making photos for someone else - do what they specify.
+1
 
Which community?

If you're making photos for yourself - do whatever you like. Just be honest about it.
If you're making photos for someone else - do what they specify.
What about the broader 'community' of anyone who might view our images with no particular expectations? If something appears in the TP gallery or on Flickr or on social media or on a gallery wall or in a print sale or inside a magazine that doesn't require journalistic standards of image integrity, what are we to make of it? Most images come with no commentary on how they were made. And while it's always been possible to manipulate things, even if you had to get out an airbrush or a retouching knife or an exotic darkroom potion to do it, major but hard to detect alterations are now just a mouse click away. The image might be an 'honest' representation of reality (with all the usual caveats about the choices we make, from the point when we decide exactly where to aim the camera onwards). Or it might be a heavily manipulated collage with seamless joins. Or it might be a complete confection by an AI, like this one from https://thispersondoesnotexist.com :

[attached image: an AI-generated face from thispersondoesnotexist.com]
 
I've been testing out different sharpening and denoise tools, including the AI tools from Topaz. I've also been watching and reading lots of reviews, and while the results are incredible, I'm quite concerned by how those results are achieved. I couldn't find much, if any, detail or discussion on this (so this is based on my limited knowledge of machine learning), but my understanding is that the results don't come from uncovering detail I captured, but from using detail from similar images/pixel patterns to guess what would be there if I had captured it.

If I'm correct - and I'm happy to be corrected if I'm not - shouldn't this be a concern for the photography community? Without going too far down the rabbit hole of what is photography and what is a photograph, if editing tools are guessing information (even if those guesses appear natural to the viewer), isn't this the same as swapping out a sky in Photoshop? I know the latter is frowned upon in photography competitions, and there seems to be evidence of some now banning AI processing. I've been worked up enough about this to fire up my unloved blog, so there are some more thoughts there if anyone is interested (trust that's OK, mods, but let me know if I need to copy/paste here), including this video review of Topaz where butterfly wing detail is magically 'recovered'.

As mentioned, I'm very happy to be corrected if I've got this completely wrong. Alternatively very interested in your views on this.
A good point, which I had also raised on this forum recently. I am a member of the small team which looks after the competition rules for my club, and a member of the International Salon Committee. We have not banned the use of AI software, or ever been asked to. As you say, we really need to know how it works. Is it really doing more than cloning, which is allowed in many competition sections? The main reason for rules is so we have a level playing field for all competitors. If they are all generally happy with AI, I believe we will probably continue to allow it.

For sky replacement it depends on where the sky comes from; if it is one of your own skies, then fine. Sky replacement was being done in the 1920s.

Dave
 
A good point, which I had also raised on this forum recently. I am a member of the small team which looks after the competition rules for my club, and a member of the International Salon Committee. We have not banned the use of AI software, or ever been asked to. As you say, we really need to know how it works. Is it really doing more than cloning, which is allowed in many competition sections? The main reason for rules is so we have a level playing field for all competitors. If they are all generally happy with AI, I believe we will probably continue to allow it.

For sky replacement it depends on where the sky comes from; if it is one of your own skies, then fine. Sky replacement was being done in the 1920s.

Dave
Some of the AI tools - not Topaz, but I can't remember the exact make now - have models which have been trained on real human retouching and do a very, very good job of replicating it.

If skin retouching skill is what is being judged in the competition I can't imagine those tools would be allowed.
 
If something appears in the TP gallery or on Flickr or on social media or on a gallery wall or in a print sale or inside a magazine that doesn't require journalistic standards of image integrity, what are we to make of it?

Just the same sorts of things as for any other creative endeavour: How does it make you feel? What does it convey? What do you think the creator was trying to say? Does it do the job it was intended to? How does it relate to your experience of the world?
 
Just the same sorts of things as for any other creative endeavour: How does it make you feel? What does it convey? What do you think the creator was trying to say? Does it do the job it was intended to? How does it relate to your experience of the world?
There is, however, one difference between photography and many other forms of art. Most photographs appear to be direct representations of reality in a way in which a painting or a sculpture is not. It's a little like the difference between a novel and a literary work of non-fiction. Sometimes the author of a book that is represented as, or is presumed to be, non-fiction embroiders the facts to make them more interesting, without this being obvious to the reader. If this manipulation is subsequently brought to light, the reader may quite rightly feel deceived. Sometimes a supposedly factual work is entirely fabricated. If this emerges, then the reader may understandably feel betrayed. Occasionally manipulation or fabrication is discovered in a 'non-fiction' work that has previously been praised for its significant artistic and literary merit, like A Million Little Pieces or Fragments. I suppose you could argue (and some people do) that the artistic value of the work ought not to be diminished, but in practice a lot of readers don't see it that way and want their money back. If I buy a nice print of a dramatic landscape in the Lake District, then subsequently discover that the photographer has cloned out a prominent hilltop wind farm, an ugly line of pylons and a caravan park, and the sky looks suspiciously similar to the same artist's shot of a sunset in California, ought I to feel aggrieved, or just be grateful that I can experience their Creative Vision?
 
Having just done some reprocessing of images and removing some graffiti on a pavement via C1’s healing tool, I realised my apparent hypocrisy. But as I pondered it more (and perhaps making excuses for myself) I feel a distinction between tools that tidy a background to help the viewer focus on the main subject of the image, and artificially creating that main subject itself. My interest in starting this discussion started around the sharpening ‘magic’ of the likes of Topaz Sharpen AI, and while perhaps I am making excuses for myself, I do think there’s a difference when the hero subject of your image is the thing you’ve artificially created.
 
Having just done some reprocessing of images and removing some graffiti on a pavement via C1’s healing tool, I realised my apparent hypocrisy. But as I pondered it more (and perhaps making excuses for myself) I feel a distinction between tools that tidy a background to help the viewer focus on the main subject of the image, and artificially creating that main subject itself. My interest in starting this discussion started around the sharpening ‘magic’ of the likes of Topaz Sharpen AI, and while perhaps I am making excuses for myself, I do think there’s a difference when the hero subject of your image is the thing you’ve artificially created.
There's no need to make excuses for yourself, just be honest with what you're doing and why you're doing it, and if it is important to the client or viewer, inform them too. In some cases even minor cloning is a no-go and that needs to be respected. I see problems when there is a miscommunication, especially if a viewer thinks they are looking at a faithful record of a scene, but it's not and has been manipulated in some way.
 
Having just done some reprocessing of images and removing some graffiti on a pavement via C1’s healing tool, I realised my apparent hypocrisy. But as I pondered it more (and perhaps making excuses for myself), I feel there’s a distinction between tools that tidy a background to help the viewer focus on the main subject of the image, and tools that artificially create that main subject itself. My interest in starting this discussion began with the sharpening ‘magic’ of the likes of Topaz Sharpen AI, and while perhaps I am making excuses for myself, I do think there’s a difference when the hero subject of your image is the thing you’ve artificially created.
I think one of the key distinctions is additive vs subtractive changes. Visual art is an additive process: the painter adds elements to build a composition. Photography is a subtractive process: we frame, crop, and use light and shade, shallow DoF, etc. to remove elements or make them less distinct in the process of building a composition. This means that things like cloning are true to the medium; they are subtractive, they remove information. The "AI" approach is, or can be, additive: it adds in things that were not there, and it does this without the photographer having any control over exactly what happens, whereas you chose to clone particular things.
 
There is, however, one difference between photography and many other forms of art. Most photographs appear to be direct representations of reality in a way in which a painting or a sculpture is not. It's a little like the difference between a novel and a literary work of non-fiction.
Photography has been about the manipulation of a view of reality since the days of Julia Margaret Cameron. I won't go so far as to say most, but a large proportion of the photographs consumed today have been manipulated (filtered) in some way.

Sometimes the author of a book that is represented as, or is presumed to be, non-fiction embroiders the facts to make them more interesting, without this being obvious to the reader. If this manipulation is subsequently brought to light, the reader may quite rightly feel deceived.
A vast amount of academic brainpower is spent discussing how [an author's] personal perspective affects the story they are telling, particularly in history - which you might expect to be simply a record of fact.

Sometimes a supposedly factual work is entirely fabricated. If this emerges, then the reader may understandably feel betrayed. Occasionally manipulation or fabrication is discovered in a 'non-fiction' work that has previously been praised for its significant artistic and literary merit, like A Million Little Pieces or Fragments. I suppose you could argue (and some people do) that the artistic value of the work ought not to be diminished, but in practice a lot of readers don't see it that way and want their money back. If I buy a nice print of a dramatic landscape in the Lake District, then subsequently discover that the photographer has cloned out a prominent hilltop wind farm, an ugly line of pylons and a caravan park, and the sky looks suspiciously similar to the same artist's shot of a sunset in California, ought I to feel aggrieved, or just be grateful that I can experience their Creative Vision?
I don't think anyone who buys a landscape print to put on their wall seriously expects it to be a documentary image. Personally, I don't see how replacing a sky is any different to using a graduated filter to create two different exposures in a single frame.


However: that's all detail. My point really is that reality in photography lies on a subtle sliding scale. Any attempt to be prescriptive about what is and what isn't real is bound to fail. As soon as we choose a subject, a viewpoint, a focal length, an exposure, a crop in an enlarger or an image from a sequence, then we're presenting a single interpretation of that reality. We just need to be honest about what we've done.

Steve McCurry and Tom Hunter got into trouble by presenting images as documentary when they really weren't - and by attempting to deny that they were manipulated.

Ansel Adams manipulated his images to within an inch of their lives but he was always perfectly open about it.
 
I think one of the key distinctions is additive vs subtractive changes. Visual art is an additive process: the painter adds elements to build a composition. Photography is a subtractive process: we frame, crop, and use light and shade, shallow DoF, etc. to remove elements or make them less distinct in the process of building a composition. This means that things like cloning are true to the medium; they are subtractive, they remove information. The "AI" approach is, or can be, additive: it adds in things that were not there, and it does this without the photographer having any control over exactly what happens, whereas you chose to clone particular things.
I wish I had your way with words, very eloquently and succinctly put, thank you. I hadn’t been able to draw it out but you’re absolutely right, it’s a distinction between additive and subtractive edits that sits at the heart of this discussion (for me anyway). Do you mind if I quote you on my blog post?
 
I wish I had your way with words, very eloquently and succinctly put, thank you. I hadn’t been able to draw it out but you’re absolutely right, it’s a distinction between additive and subtractive edits that sits at the heart of this discussion (for me anyway). Do you mind if I quote you on my blog post?
Thank you for saying that. I don't mind you quoting me, a link back to talkphotography.co.uk would be appreciated.
 
There is an interesting thread on this here: https://forum.luminous-landscape.com/index.php?topic=140336.0

Below is the content from the post that summarises some findings compared to the original and the DXO Pure Raw de-noised version:

"Just comparing the output (right) with the noisy unprocessed capture (centre), the result is awesome, but when we look at the original scene some considerations have to be made:
  • Text masked by noise is cleaned, but its lines and traces are not recovered, as expected.
  • This is the most interesting spot: the neural network interprets and creates non-existent edges and shaded facets, which are feasible looking at the noisy image but didn't exist in the scene
  • The neural network tends to simplify complex structures: the carvings on the leather mask are a series of curves but are interpreted more like linear shapes
  • Fine detail in the scene, completely lost in the noisy shot, remains lost and is interpreted as a plain colour area (with some fine Gaussian-like grain at pixel level)
  • Flat colours are very well recovered, as would be with most noise reduction procedures

In the real world, this kind of software will be used without being able to compare with the real detail, so most photographers will consider valid all the fake detail recreated by the NN."
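That last point about fine detail being replaced rather than recovered is easy to demonstrate with a toy 1-D sketch. To be clear, this has nothing to do with how Topaz or DxO are actually implemented; a simple moving average stands in here for a learned smoothness prior. Once the fine "carvings" are buried below the noise floor, the denoiser reproduces the coarse structure well but substitutes a plausible flat area for the detail:

```python
# Toy sketch only: a moving average stands in for a learned smoothness
# prior. This is NOT how Topaz/DxO work, but it shows the same effect
# the quote describes: detail buried in noise is not recovered, it is
# replaced by something plausibly smooth.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 512)

smooth = np.sin(2 * np.pi * x)                # coarse scene structure
detail = 0.2 * np.sin(2 * np.pi * 60 * x)     # fine "carvings"
scene = smooth + detail
noisy = scene + rng.normal(0.0, 0.4, x.size)  # detail now below the noise floor

kernel = np.ones(51) / 51                     # smoothness-prior stand-in
denoised = np.convolve(noisy, kernel, mode="same")

def rms(v):
    """Root-mean-square amplitude of a signal."""
    return float(np.sqrt(np.mean(np.square(v))))

core = slice(30, -30)  # ignore convolution edge effects
print("coarse structure error: ", rms(denoised[core] - smooth[core]))
print("fine detail in scene:   ", rms(detail))
print("fine detail 'recovered':", rms((denoised - smooth)[core]))
```

Running this, the coarse sinusoid comes back almost exactly, while the amplitude of the "recovered" fine detail is well below what was in the scene; whatever structure the output shows at that scale is residual noise shaped by the prior, not the original carving.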

The article the post is based on is here:


It needs Google Translate.
I did find something similar with some text using Denoise AI. It was a very extreme example and I was just pushing it. There are other similar cases where Gigapixel may struggle with trees or some rocks. I guess their AI models still have room to evolve, which I'm sure they will.
This is a very low-level endeavour which only changes very small and highly defined image parameters, and in the grand scheme of things it should be seen as a minor aid that will either improve noise by 1-2 stops or resolution by about 50%. Unless the image is cropped heavily, these are barely distinguishable at regular sizes.
 
I don't think anyone who buys a landscape print to put on their wall seriously expects it to be a documentary image. Personally, I don't how replacing a sky is any different to using a graduated filter to create two different exposures in a single frame.


That feels like a red rag to a bull! However I'm too busy to charge at the moment.........
 
isn't this the same as swapping out a sky in photoshop?

I don't think this is binary, where it's either perfectly fine or unacceptable.
In my mind the scale goes somewhat like this:

RAW image - lens corrections - basic exposure, contrast etc. corrections - simple NR (like in Lightroom) - more advanced localised corrections - panos - HDR - blending (e.g. astro, long exposures, focus stacks) - more advanced noise reduction using AI - cloning - advanced AI sharpening - sky replacements - etc. etc. etc.

Where would you draw the line?

including this video review of Topaz where butterfly wing detail is magically 'recovered'.

I have seen that video before and I still wouldn't use that picture, because you can see the body of the butterfly and its eye is messed up.
So far in my experience Topaz AI has managed to give my pictures a little edge; it has helped in making already good pictures even better. It has never made a bad picture good.
Same for Denoise: it provides a one stop, or maybe two at a push, advantage. But it's not like I am now able to go out and shoot handheld astro images at ISO 3,280,000 and make them look good.
I still have to go through all the pain and effort to get a good astro image, and Denoise just helps it look better when printed big.
 
I don't think this is binary, where it's either perfectly fine or unacceptable.
In my mind the scale goes somewhat like this:

RAW image - lens corrections - basic exposure, contrast etc. corrections - simple NR (like in Lightroom) - more advanced localised corrections - panos - HDR - blending (e.g. astro, long exposures, focus stacks) - more advanced noise reduction using AI - cloning - advanced AI sharpening - sky replacements - etc. etc. etc.

Where would you draw the line?



I have seen that video before and I still wouldn't use that picture, because you can see the body of the butterfly and its eye is messed up.
So far in my experience Topaz AI has managed to give my pictures a little edge; it has helped in making already good pictures even better. It has never made a bad picture good.
Same for Denoise: it provides a one stop, or maybe two at a push, advantage. But it's not like I am now able to go out and shoot handheld astro images at ISO 3,280,000 and make them look good.
I still have to go through all the pain and effort to get a good astro image, and Denoise just helps it look better when printed big.


All of a sudden I'm not quite as busy........

Most aspects of the scale you suggest are ways that photographers have of overcoming the limitations of their equipment, which are absolutely fine with me. I'm also fine with most of the digital equivalents of what film photographers were able to do in the darkroom. I'm not including swapping skies (etc) in this last category because although I know that film photographers were able to do it, it was something that only the most skilled could do. Now it can be done at the drop of a hat.

There may be some grey areas on that list, some of which I hadn't thought of before. I don't have a problem with panos, HDR (within reason), focus stacks, etc. because they all fall within the category of "overcoming the limitations of equipment". But I sometimes see astro shots which look so far removed from what the eye is able to see that I have my doubts about them. On the surface I don't have problems with denoising or sharpening, AI or not, but I'm willing to listen to any further discussion on that. But cloning out major elements of the scene in front of the camera is another ball game altogether, a quantum leap, if you like. The same goes for sky replacement, which is just a tool that lazy photographers can use to produce something that isn't really a photograph....although on the surface it might look like one.
 
All of a sudden I'm not quite as busy........

Most aspects of the scale you suggest are ways that photographers have of overcoming the limitations of their equipment, which are absolutely fine with me. I'm also fine with most of the digital equivalents of what film photographers were able to do in the darkroom. I'm not including swapping skies (etc) in this last category because although I know that film photographers were able to do it, it was something that only the most skilled could do. Now it can be done at the drop of a hat.

There may be some grey areas on that list, some of which I hadn't thought of before. I don't have a problem with panos, HDR (within reason), focus stacks, etc. because they all fall within the category of "overcoming the limitations of equipment". But I sometimes see astro shots which look so far removed from what the eye is able to see that I have my doubts about them. On the surface I don't have problems with denoising or sharpening, AI or not, but I'm willing to listen to any further discussion on that. But cloning out major elements of the scene in front of the camera is another ball game altogether, a quantum leap, if you like. The same goes for sky replacement, which is just a tool that lazy photographers can use to produce something that isn't really a photograph....although on the surface it might look like one.
I used "sky replacement" + "etc. etc." as a wide brush for all sorts of things people achieve with post processing. A lot of it is quite involved and requires artistic vision to achieve a good result. Most of them are quite obviously photoshopped, but they take a lot of effort to produce and the makers don't hide the fact that it's photoshopped, so I wouldn't brand them all as "lazy". For example, I have seen some crazy abstracts that people produce with blends of various images. I am not sure I'd call all that photography, but they start with photographs, and those people are far more skilled in PS than I am.

But sky replacement doesn't require a lot of vision or skill in post processing. It's a step down from that.

Personally, like I said, I haven't been able to take a bad or badly executed picture and make it good using the AI tools the OP mentions. When that happens I'll think about it more deeply, as to whether it's really photography or not. Sky replacement does just that, or is done with the aim of doing just that, i.e. turning a bland image into something nice.
 
All of a sudden I'm not quite as busy........

Most aspects of the scale you suggest are ways that photographers have of overcoming the limitations of their equipment, which are absolutely fine with me. I'm also fine with most of the digital equivalents of what film photographers were able to do in the darkroom. I'm not including swapping skies (etc) in this last category because although I know that film photographers were able to do it, it was something that only the most skilled could do. Now it can be done at the drop of a hat.

Does the amount of technical skill involved add value to the end result? I think that sometimes it does; sometimes it doesn't.

There may be some grey areas on that list, some of which I hadn't thought of before. I don't have a problem with panos, HDR (within reason), focus stacks, etc. because they all fall within the category of "overcoming the limitations of equipment". But I sometimes see astro shots which look so far removed from what the eye is able to see that I have my doubts about them. On the surface I don't have problems with denoising or sharpening, AI or not, but I'm willing to listen to any further discussion on that. But cloning out major elements of the scene in front of the camera is another ball game altogether, a quantum leap, if you like.

That's an interesting perspective.
I started using a camera because my drawing abilities are woeful. Camera equipment was a way of overcoming my limitations, not the other way round.

The same goes for sky replacement, which is just a tool that lazy photographers can use to produce something that isn't really a photograph....although on the surface it might look like one.

fwiw I only replace skies when circumstances dictate. At a photoshoot around an immobile vehicle I had to either include the local housing estate or replace it with something more appropriate.
 
Ansel Adams manipulated his images to within an inch of their lives but he was always perfectly open about it.

I'm not aware that Ansel Adams did do that other than by using traditional darkroom techniques. Did he actually combine negatives, for example?

I'm sure he would have been a master digital photographer and photoshop user, but I'd like to think he would have the integrity not to create digital composites which misrepresent what was in front of the camera.
 