Generative AI Glitches: The Artificial Everything
Abstract
Artificial Intelligence (AI) has existed in popular culture far longer than any of the particular technological tools that carry that name today (Leaver), and partly for that reason, fantasies of AI being or becoming a sentient subject in its own right shape current imaginaries of what AI is, or is about to become. Yet ‘the artificial’ does not just mark something as not human or not natural; rather, it provokes an exploration of the blurred lines between supposedly distinct domains, such as the tensions produced where people and technology blur together (Haraway). The big technology corporations selling the idea that their AI tools will revolutionise workforces and solve an immense number of human challenges are capitalising on these fantasies, suggesting they are only a few iterations away from creating self-directing machine intelligences that will dwarf the limitations of human minds (Leaver and Srdarov). At this moment, though, Artificial General Intelligence (AGI)—AI that equals or surpasses humans across a wide range of cognitive endeavours—does not and may never exist. However, given the immense commercial and societal interest in the current generation of Generative AI (GenAI) tools, examining their actual capabilities and limitations is vital.
The current GenAI tools operate using Large Language Models (LLMs): sophisticated algorithms trained on vast datasets, whose complexity scales with the amount of data absorbed. These models are then harnessed to create novel outputs in response to prompts, based on statistical likelihoods derived from training data. However, the exact way these LLMs operate is not disclosed to users, and GenAI tools perpetuate the ‘black box’ problem insofar as their workings are only made visible by examining inputs and outputs, rather than by observing the processes themselves (Ajunwa). Many articles and explainers have been written about the mechanics of LLMs and AI image generators (Coldewey; Guinness; Jungco; Long and Magerko); however, the specific datasets used to build AI engines, and the weighting, or importance, assigned to each image within the corpus of training data, remain matters of guesswork. Manipulating the inputs and observing the outputs of these engines is still the most accurate lens through which to gain insight into the specifics of each system.
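The mechanism described above—generating novel output token by token according to statistical likelihoods learned from training data—can be illustrated in miniature. The following toy bigram model is our own illustrative sketch, not any vendor's disclosed implementation; real LLMs use vastly larger corpora and neural architectures, but the underlying principle of sampling in proportion to observed frequencies is the same:

```python
import random
from collections import Counter, defaultdict

# A tiny illustrative "training corpus" (purely hypothetical).
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn statistical likelihoods: count how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, seed=0):
    """Generate novel output by sampling each next word in
    proportion to its observed frequency in the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:  # no observed continuation: stop
            break
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Note that, as with the commercial systems discussed here, the model's behaviour is only observable through its inputs and outputs: a user who cannot inspect the `follows` table is left to infer the training data's biases from what the generator produces.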
This article is part of a larger study in which, in early 2024, we prompted a range of outputs from six popular GenAI tools—Midjourney, Adobe Firefly, DreamStudio (a commercial front-end for the Stable Diffusion model), OpenAI’s DALL-E 3, Google Gemini, and Meta’s AI (hereafter Meta)—although we should note there are no outputs from Gemini in our dataset, since Gemini was refusing to generate any images with human figures at all, due to a settings change following bad publicity about persistent inaccuracies in its generated content (Robertson). Our prompts explored the way these tools visualise children, childhoods, families, Australianness, and Aboriginal Australianness, using 55 different prompts on each tool and generating just over 800 images. Apart from entering the prompts, we did not change any settings of the GenAI tools, attempting to collect as raw a response as possible. Where a tool defaulted to producing one image (such as DALL-E 3), we collected one image, whilst where other tools defaulted to producing four different images, we collected all four. For the most part, the data collected from our prompt sampling was consistent with other studies and showed a clear tendency to produce images that reproduced classed, raced, and sexed ideals: chiefly, white, middle-class, heteronormative bodies and families (Bianchi et al.; Gillespie; Weidinger et al.).
Related items
Showing items related by title, author, creator and subject.
- Tang, Kok Sing; Cooper, Grant (2024) The introduction of generative artificial intelligence (GenAI) tools like ChatGPT has raised many challenging questions about the nature of teaching, learning, and assessment in every subject area, including science. ...
- Leaver, Tama; Srdarov, Suzanne (2023) Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released ...
- Cooper, Grant; Tang, Kok-Sing (2024) The proliferation of generative artificial intelligence (GenAI) means we are witnessing transformative change in education. While GenAI offers exciting possibilities for personalised learning and innovative teaching ...