Your statement that film cameras "preport to preserving [sic] what they see without any modifications" is naïve. (What a wonderful typographical error, like a glimpse into a writer's unconscious: "preport" means forebode; "purport" means profess, usually falsely.) Whatever we produce with image-making devices will be mediated by our technology (pencil, pen, camera lucida, camera obscura, camera, scanner, etc.). In other words, there is no image without modification inherent in whatever document, fantasy, or emotion we hope to reflect or reproduce. Film is not the degree zero of photography. Reicsh's remark, "Photography is undead, a zombie...." is straight out of Barthes. Analog processes have not disappeared, but have been revitalized by artists and socio-economically privileged amateurs. Ironically, your statement that "digital cameras do tremendous amounts of digital manipulation to what is captured" is true only for those countless souls who cannot escape the tyranny of auto- (focus, aperture, ISO, shutter, etc., ad nauseam).

As for Boris Eldagsen's award-winning "The Electrician," the controversy is not that the image was created by AI, but that such a trite cliché would even make Sony's short list.

Like photography, writing inspires discovery. Bonne continuation.

Ah, this is complicated. The image in the competition isn't exactly a render, i.e. a completely synthetic image, but something on the fringes of a collage, i.e. an artificial image created from sources that include actual photographs blended together. It is so close to the edge that I wouldn't exactly call it a photograph and would say it is a digital collage. My own sense is that trying to coin a new term probably won't work, for the same reason that people might say they will dial you up on their cell phone. I knew a photographer who occasionally had to do photography for evidence in court; he explained to me that a photo was never evidence in and of itself, so the photographer had to testify and explain how the image was taken. Digital cameras have long left the realm of directly mirroring an unvarnished reality, as evidenced by https://www.theverge.com/2023/3/13/23637401/samsung-fake-moon-photos-ai-galaxy-s21-s23-ultra. It is complicated. I have had to generate false color images for publications and constantly remind myself that everything is an interpretation.
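
To make that last point concrete, here is a minimal sketch of the kind of false-colour step I mean -- the data, filename and colormap here are invented placeholders, not any publication's actual pipeline:

```python
# Map a single-band measurement (e.g. an infrared or elevation raster) to false colour.
# Swapping the colormap changes the resulting "photograph" without changing a single measurement.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
band = rng.normal(loc=300.0, scale=20.0, size=(256, 256))  # stand-in sensor values

# Normalise to 0..1, then let the chosen colormap decide what the values "look like".
norm = (band - band.min()) / (band.max() - band.min())
plt.imsave("false_colour.png", norm, cmap="inferno")
```

The choice of colormap is exactly the kind of interpretation I mean: the measurements stay put, the picture changes.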

I'll leave it to the more qualified readers to discuss, but thank you for your nicely written post. You make some cool observations on the current topic of AI-generated imagery and that which went before. As an amateur who is acutely aware of his limitations with chemical and digital photography, I appreciate the potential for advances in technology to disrupt things. This is not just good art, but also good science. Nick Cave recently wrote that because the machine cannot feel or suffer, it cannot create art. For me the art is in the appreciation, and the science in the understanding. If I see an image, I can still respond to it emotionally and intellectually without knowing how it was made -- there's another response to be had when one is surprised by the process. Take Emma Towers' pencil portraits, for example: these are beautiful in their own right, and just mind-blowing when you learn how they are created. I enjoy Boris Eldagsen's award-winning image "The Electrician" without knowing or caring how it was made; that it is AI-generated at Boris' hand merely adds to that.

Richard Feynman talked about a discussion he had with an artist about how they see a flower. The artist said that a scientist reduces the flower by taking it apart, but Feynman challenged this. The scientist has available the same aesthetic interpretation of the flower, but also the knowledge of its evolution, how it is seen by insects in ultraviolet, and so on. The science only adds to that of the artist. For me, knowledge of the process of creation can only add to the appreciation of the image in its own right.

Apr 25, 2023 · edited Apr 25, 2023 · Liked by David Young

(please try to take the following far-too-long reply {sorry!} as coming mainly from a place of excited nerdiness and eagerness for fun-but-spirited dialogue rather than mean-spirited fault-finding -- this is one of the most interesting articles on this area of discussion I've read even if I disagree with a lot)

It's worth asking from the start: "does it need to be printed to be a photograph?", especially given that, at a foundational level, both digital photographs and film photographs are measurements of photons causing electrons to move around. It's hard to gloss over that simply because the means of representing such data is not the same as the means by which it was acquired. The raw binary code of your (*extremely* cool) camera and the distribution of individual silver particles in a gelatine substrate are not so far apart when you view either in its rawest, micro form; we wouldn't typically count every grain of silver or every spot of ink either, right? Yet ultimately both are, in the case of images produced at some point using measured light, distributions of information. It just happens that binary information is distributed in a very different way to negative film, and that printers for both `read` and reproduce that information in a way that corresponds to the structure of that distribution.

We won't intuitively think of a crayon-drawing photocopy as a photograph, yet it's a representation of a graph of photons. Likewise, consider intaglio processes like photogravure -- the plate is a relief of the exposed photograph, but the actual print you see is made of ink. You could make an intaglio print without an original photographic source, and the fact that the two are neither causatively nor intuitively the same is its own testament that the print or presentation process is not the final determiner of what photography is or isn't.

Photo academics, curators, etc. have rarely thought seriously about causation, and that has hurt discussion about photography -- the fact that, were it not for the measuring of `photons` that were `graphed`, this ink print could never exist seems to be taken for granted to the point that causality itself is treated as irrelevant. Our desire to verify how an image was altered matters less from a causative standpoint, except that the alteration is itself a stage in that process of becoming -- it always already was photographic in its origins. In much the same way, we only care that objects in museums are fakes when we know they are: the uncaught fakes always were fakes, and contrariwise a conservation team's (sometimes extensive) touch-ups to a true original don't negate the original object's historicity, right? Carbon dating doesn't care about how things *seem* right now.

The real distinguisher, then, should be less about terminology and more like a question -- "how distant from the present reproduction of this image before me are its causative photographic processes?" -- and the answer in the case of AI images is: significantly further than, say, a RAW file viewed on the LCD on the back of a just-shot DSLR. Likewise, a cyanotype blueprint of old is definitely a photograph, yet its origin... is less photographic than a cyanotype contact print made from a 5x4 negative.

If we talk just about the kinds of cameras photographers usually use, there is, for example, base-level sharpening and colour processing in DSLR chips at time of capture; chemical processes of all varieties have also received many of the same optimisations, sans the weird filters crap phones are loaded with. But, chemical or electrical, the design and refinement of these image-making technologies involved huge numbers of scientists' calculations, be they algorithms or formulas. Naming things by distinguishing one as "computational", when referring to tools like autofocus and white balance adjustment, is only going to add confusion to debates among people, many of whom seem to be encouraged to believe that ChatGPT is going to rise up and do an un-sexy Terminator impression.
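
To make that concrete, here is a toy sketch (in Python, with a placeholder frame -- not any manufacturer's actual firmware) of two of the in-camera steps mentioned above, a grey-world white balance and an unsharp-mask sharpen:

```python
# Toy illustration of "computation" that conventional digital cameras already do.
# The input frame is random placeholder data; real pipelines are far more involved.
import numpy as np
from scipy.ndimage import gaussian_filter

def grey_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Scale each RGB channel so its mean matches the overall mean (grey-world assumption)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, amount: float = 0.5) -> np.ndarray:
    """Sharpen by adding back the difference between the image and a blurred copy of it."""
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur spatially, not across channels
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# img: float RGB array in [0, 1], e.g. decoded from a RAW frame
img = np.random.rand(128, 128, 3)  # placeholder frame
processed = unsharp_mask(grey_world_white_balance(img))
```

Nothing "AI" about either step, yet both are pure computation applied between the photons and the picture.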

I'm not sure about the merits of seeking a new term like this as a result, but I also think that you could easily produce a really strong, lasting commentary on this issue that might, with the right press, help "lead the way" on how people think and talk about AI and photographs -- I'd like to see it.

Apr 24, 2023 · edited Apr 24, 2023 · Liked by David Young

I haven't had a chance to get through all of this, but I kinda feel the framing of this particular aspect is a little misleading...

*"When digital cameras were invented in the 1980s, the photographic (ie. chemical) process involved in making images began to disappear."*

While the first digital camera might have been invented in the 80s, digital cameras weren't a viable technology until the early 00s, or the very late 90s at the earliest. And it wasn't until the 2010s that digital photography became the default it is today.

As someone who was literally working in a darkroom in 2001 [digital cameras were still very new, not that great], I feel like it's safe to say that chemical processing began to disappear during the 00s — whereas the paragraph in question makes it seem like it happened 20 years earlier.

Wonderful post! Thanks for describing this situation so eloquently.

I guess, like McClintock below, I really wonder about the statement: "unlike film cameras, which purport to preserving what they see without any modifications, digital cameras do tremendous amounts of digital manipulation to what is captured. They adjust white balance, reduce shake, improve focus, simulate depth of field and, increasingly, do things to make us look better, such as skin smoothing." People took photos through gauze, oil, and Vaseline, and darkroom manipulation started on day one. Seems as if you are unfamiliar with chemicals, paper, or projectors.
