
dc.contributor.author: Srdarov, Suzanne
dc.contributor.author: Leaver, Tama
dc.date.accessioned: 2024-11-27T03:02:27Z
dc.date.available: 2024-11-27T03:02:27Z
dc.date.issued: 2024
dc.identifier.citation: Srdarov, S. and Leaver, T. 2024. Generative AI Glitches: The Artificial Everything. M/C Journal. 27 (6): 6.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/96425
dc.identifier.doi: 10.5204/mcj.3123
dc.description.abstract:

Artificial Intelligence (AI) has existed in popular culture far longer than any particular technological tools that carry that name today (Leaver), and in part for that reason, fantasies of AI being or becoming sentient subjects in their own right shape current imaginaries of what AI is, or is about to become. Yet ‘the artificial’ does not just mark something as not human or not natural; rather, it provokes an exploration of the blurred lines between supposedly distinct domains, such as the tensions that arise where people and technology shade into one another (Haraway). The big technology corporations selling the idea that their AI tools will revolutionise workforces and solve an immense range of human challenges are capitalising on these fantasies, suggesting that they are only a few iterations away from creating self-directing machine intelligences that far exceed the limitations of human minds (Leaver and Srdarov). At this moment, though, Artificial General Intelligence (AGI)—AI that equals or surpasses humans across a wide range of cognitive endeavours—does not and may never exist. However, given the immense commercial and societal interest in the current generation of Generative AI (GenAI) tools, examining their actual capabilities and limitations is vital.

The current GenAI tools operate using Large Language Models (LLMs): sophisticated algorithms trained on vast datasets, which grow in complexity with the amount of data absorbed. These models are then harnessed to create novel outputs to prompts based on statistical likelihoods derived from their training data. However, exactly how these LLMs operate is not disclosed to users, and GenAI tools perpetuate the ‘black box’ problem insofar as their workings are made visible only by examining inputs and outputs rather than the processes themselves (Ajunwa). Many articles and explainers have been written about the mechanics of LLMs and AI image generators (Coldewey; Guinness; Jungco; Long and Magerko); however, the specific datasets used to build AI engines, and the weighting or importance assigned to each image within the corpus of training data, remain guesswork. Manipulating the inputs and observing the outputs of these engines is still the most accurate lens through which to gain insight into the specifics of each system.
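To make the idea of outputs ‘based on statistical likelihoods derived from training data’ concrete, here is a toy sketch (ours, not drawn from the article): a word-level bigram model that counts which word follows which in a tiny corpus, then samples a continuation in proportion to those counts. Real LLMs use neural networks over subword tokens at vastly greater scale, but the generative principle of sampling the next token from a learned distribution is the same.

```python
import random

# "Training data": a twelve-word toy corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count bigram frequencies. Duplicates in each list mean
# that more frequent continuations are sampled more often below.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# "Generation": repeatedly sample a statistically likely next word.
word, output = "the", ["the"]
for _ in range(6):
    candidates = follows.get(word)
    if not candidates:  # dead end: word never appeared mid-corpus
        break
    word = random.choice(candidates)  # sample proportional to frequency
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat the"
```

The ‘novelty’ of such output comes entirely from recombining statistical regularities in the training data, which is also why these systems reproduce whatever patterns, and biases, that data contains.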

This article is part of a larger study in which, in early 2024, we prompted a range of outputs from six popular GenAI tools—Midjourney, Adobe Firefly, DreamStudio (a commercial front-end for the Stable Diffusion model), OpenAI’s DALL-E 3, Google Gemini, and Meta’s AI (hereafter Meta)—although we should note there are no outputs from Gemini in our dataset, since at the time Gemini was refusing to generate any images containing human figures at all, following a settings change made after bad publicity about persistent inaccuracies in its generated content (Robertson). Our prompts explored the ways these tools visualise children, childhoods, families, Australianness, and Aboriginal Australianness, using 55 different prompts on each tool and generating just over 800 images. Apart from entering the prompts, we did not change any settings of the GenAI tools, in order to collect as raw a response as possible. Where a tool defaulted to producing one image (such as DALL-E 3), we collected one image; where a tool defaulted to producing four different images, we collected all four. For the most part, the data collected from our prompt sampling was consistent with other studies and showed a clear tendency to produce images that reproduced classed, raced, and sexed ideals: chiefly white, middle-class, heteronormative bodies and families (Bianchi et al.; Gillespie; Weidinger et al.).
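By way of illustration only, the sketch below shows what this kind of default-settings prompt sampling can look like in code, using OpenAI’s Python SDK to send prompts to DALL-E 3 and save every image returned. The prompts are placeholders rather than the study’s actual 55 prompts, and the study’s own collection workflow (e.g. via each tool’s web interface) is not specified here; DALL-E 3 returns a single image per request by default, matching the ‘collect one image’ case described above.

```python
# A minimal sketch of default-settings prompt sampling, assuming the
# openai Python SDK (pip install openai) and an OPENAI_API_KEY set in
# the environment. Prompts are illustrative, not the study's list.
import base64
import pathlib

from openai import OpenAI

client = OpenAI()

prompts = [
    "an Australian family",
    "an Australian child playing",
    # ... the study used 55 prompts per tool
]

outdir = pathlib.Path("samples")
outdir.mkdir(exist_ok=True)

for i, prompt in enumerate(prompts):
    # No size/style/quality overrides: defaults only, to keep the
    # response as "raw" as possible, per the approach described above.
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        response_format="b64_json",
    )
    # Keep every image the tool returns (DALL-E 3 defaults to one).
    for j, item in enumerate(result.data):
        path = outdir / f"prompt{i:02d}_{j}.png"
        path.write_bytes(base64.b64decode(item.b64_json))
```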

dc.publisher: M/C - Media and Culture
dc.relation.sponsoredby: http://purl.org/au-research/grants/arc/CE200100022
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: generative AI
dc.subject: artificial intelligence
dc.title: Generative AI Glitches: The Artificial Everything
dc.type: Journal Article
dcterms.source.volume: 27
dcterms.source.number: 6
dcterms.source.issn: 1441-2616
dcterms.source.title: M/C Journal
dc.date.updated: 2024-11-27T03:02:23Z
curtin.department: School of Media, Creative Arts and Social Inquiry
curtin.accessStatus: Open access
curtin.faculty: Faculty of Humanities
curtin.contributor.orcid: Leaver, Tama [0000-0002-4065-4725]
curtin.contributor.orcid: Srdarov, Suzanne [0009-0001-7051-1661]
curtin.contributor.researcherid: Leaver, Tama [K-2697-2014]
curtin.identifier.article-number: 6
curtin.contributor.scopusauthorid: Leaver, Tama [39963062500]
curtin.repositoryagreement: V3

