Article:
How to Read an AI Image: Toward a Media Studies Methodology for the Analysis of Synthetic Images

dc.creator: Salvaggio, Eryk
dc.date.accessioned: 2024-06-14T11:09:38Z
dc.date.available: 2024-06-14T11:09:38Z
dc.date.issued: 2023
dc.description.abstract: Image-generating approaches in machine learning, such as GANs and diffusion models, are not truly generative but predictive. AI images are data patterns inscribed into pictures, and they reveal aspects of the underlying image-text datasets and the human decisions behind them. Examining AI-generated images as ‘infographics’ informs the methodology described in this paper for analyzing these images within a media studies framework of discourse analysis, applying tools from media theory to machine learning. Using two case studies, the paper applies this analytical methodology to determine how information patterns manifest in visual representations. The methodology consists of generating a series of images of interest, following Roland Barthes’ advice that “what is noted is by definition notable” (Barthes 1977: 89), and then examining this sample of images as a non-linear sequence. The paper offers examples of patterns, gaps, absences, strengths, and weaknesses, and what they might suggest about the underlying dataset. The methodology considers two frames of intervention for explaining these gaps and distortions: either the model imposes a restriction (content policies), or the training data has included or excluded certain images through conscious or unconscious bias. The resulting hypothesis is then tested against a more randomized sample of images. The method is illustrated by two examples: first, it is applied to images of faces produced by the StyleGAN2 model; second, to images of humans kissing created with DALL·E 2. This allows a comparison of GAN and diffusion models and a test of whether the method might be generalizable. The paper draws conclusions from the hypotheses generated by the method and presents a final comparison to an actual training dataset for StyleGAN2, finding that the hypotheses were accurate.
dc.identifier.doi: http://dx.doi.org/10.25969/mediarep/22328
dc.identifier.uri: https://mediarep.org/handle/doc/23758
dc.language: eng
dc.publisher: Herbert von Halem
dc.publisher.place: Köln
dc.relation.isPartOf: issn:1614-0885
dc.relation.ispartofseries: IMAGE. Zeitschrift für interdisziplinäre Bildwissenschaft
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: image-text datasets
dc.subject: AI
dc.subject: AI-generated images
dc.subject: discourse analysis
dc.subject.ddc: ddc:700
dc.title: How to Read an AI Image: Toward a Media Studies Methodology for the Analysis of Synthetic Images
dc.type: article
dc.type.status: publishedVersion
dspace.entity.type: Article
local.coverpage: 2024-06-16T03:08:06
local.source.epage: 99
local.source.issue: 1
local.source.issueTitle: Generative Imagery: Towards a ‘New Paradigm’ of Machine Learning-Based Image Production
local.source.spage: 83
local.source.volume: 19
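
As a rough illustration of the sampling step described in the abstract above (generating a series of images of interest from a single prompt before reading them as a non-linear sequence), the following Python sketch uses the open-source diffusers library with a Stable Diffusion checkpoint as a stand-in for the proprietary DALL·E 2 model used in the paper's second case study. The checkpoint, prompt, sample size, and file names are illustrative assumptions, not the paper's actual parameters.

# Minimal sketch, assuming Hugging Face's diffusers library and a CUDA GPU.
# Stable Diffusion stands in here for DALL·E 2, which is not openly available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint choice
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photograph of two humans kissing"  # hypothetical prompt
for i in range(30):  # sample size of 30 is an assumption
    image = pipe(prompt).images[0]  # one generated image per call
    image.save(f"kiss_sample_{i:02d}.png")  # save for side-by-side reading

The generation step is deliberately uncontrolled beyond the fixed prompt; on the method's terms, the analysis happens afterwards, when the saved files are read side by side for recurring patterns, gaps, and absences.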

Files

Original bundle
Name: IMAGE_37_2023_83-99_Salvaggio_How-to_.pdf
Size: 1.16 MB
Format: Adobe Portable Document Format
Description: Original PDF with additional cover page.
