Show simple item record

dc.contributor.author: Marinovich, Luke
dc.contributor.author: Wylie, Elizabeth
dc.contributor.author: Lotter, William
dc.contributor.author: Lund, Helen
dc.contributor.author: Waddell, Andrew
dc.contributor.author: Madeley, Carolyn
dc.contributor.author: Pereira, Gavin
dc.contributor.author: Houssami, Nehmat
dc.date.accessioned: 2023-09-07T04:23:19Z
dc.date.available: 2023-09-07T04:23:19Z
dc.date.issued: 2023
dc.identifier.citation: Marinovich, M. and Wylie, E. and Lotter, W. and Lund, H. and Waddell, A. and Madeley, C. and Pereira, G. et al. 2023. Artificial intelligence (AI) for breast cancer screening: BreastScreen population-based cohort study of cancer detection. EBioMedicine. 90: pp. 104498-.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/93238
dc.identifier.doi: 10.1016/j.ebiom.2023.104498
dc.description.abstract:

Background: Artificial intelligence (AI) has been proposed to reduce false-positive screens, increase cancer detection rates (CDRs), and address resourcing challenges faced by breast screening programs. We compared the accuracy of AI versus radiologists in real-world population breast cancer screening, and estimated potential impacts on CDR, recall and workload for simulated AI-radiologist reading.

Methods: External validation of a commercially-available AI algorithm in a retrospective cohort of 108,970 consecutive mammograms from a population-based screening program, with ascertained outcomes (including interval cancers by registry linkage). Area under the ROC curve (AUC), sensitivity and specificity for AI were compared with radiologists who interpreted the screens in practice. CDR and recall were estimated for simulated AI-radiologist reading (with arbitration) and compared with program metrics.

Findings: The AUC for AI was 0.83 compared with 0.93 for radiologists. At a prospective threshold, sensitivity for AI (0.67; 95% CI: 0.64–0.70) was comparable to radiologists (0.68; 95% CI: 0.66–0.71) with lower specificity (0.81 [95% CI: 0.81–0.81] versus 0.97 [95% CI: 0.97–0.97]). Recall rate for AI-radiologist reading (3.14%) was significantly lower than for the BSWA program (3.38%) (−0.25%; 95% CI: −0.31 to −0.18; P < 0.001). CDR was also lower (6.37 versus 6.97 per 1000) (−0.61; 95% CI: −0.77 to −0.44; P < 0.001); however, AI detected interval cancers that were not found by radiologists (0.72 per 1000; 95% CI: 0.57–0.90). AI-radiologist reading increased arbitration but decreased overall screen-reading volume by 41.4% (95% CI: 41.2–41.6).

Interpretation: Replacement of one radiologist by AI (with arbitration) resulted in lower recall and overall screen-reading volume. There was a small reduction in CDR for AI-radiologist reading. AI detected interval cases that were not identified by radiologists, suggesting potentially higher CDR if radiologists were unblinded to AI findings. These results indicate AI's potential role as a screen-reader of mammograms, but prospective trials are required to determine whether CDR could improve if AI detection was actioned in double-reading with arbitration.

Funding: National Breast Cancer Foundation (NBCF), National Health and Medical Research Council (NHMRC).
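
The simulated AI-radiologist reading described above pairs the AI reader with a single radiologist and sends disagreements to arbitration. The Python sketch below illustrates how recall rate, CDR and arbitration volume could be tallied under such a protocol; all decision rates and the arbitration rule here are hypothetical placeholders, not the study's data or exact procedure.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-screen inputs (placeholders, not study data):
# ai_recall / rad_recall: binary recall decisions by the AI reader and one radiologist
# arbiter_recall: decision of a second radiologist, used only when the first two disagree
# cancer: ground-truth outcome (screen-detected or interval cancer)
n = 100_000
ai_recall = rng.random(n) < 0.19        # AI flags ~19% at an assumed operating threshold
rad_recall = rng.random(n) < 0.035      # radiologist recalls ~3.5% of screens
arbiter_recall = rng.random(n) < 0.5    # assumed arbitration outcome on disagreement
cancer = rng.random(n) < 0.007          # ~7 cancers per 1000 screens (illustrative)

# Double-reading with arbitration: recall if both readers agree to recall,
# no recall if both agree not to, otherwise defer to the arbiter.
agree = ai_recall == rad_recall
final_recall = np.where(agree, ai_recall, arbiter_recall)

recall_rate = final_recall.mean() * 100           # % of screens recalled
cdr = (final_recall & cancer).sum() / n * 1000    # cancers detected per 1000 screens
arbitration_rate = (~agree).mean() * 100          # % of screens needing a third read

print(f"Recall rate: {recall_rate:.2f}%")
print(f"Cancer detection rate: {cdr:.2f} per 1000 screens")
print(f"Screens requiring arbitration: {arbitration_rate:.1f}%")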

dc.language: eng
dc.publisher: Elsevier
dc.relation.sponsoredby: http://purl.org/au-research/grants/nhmrc/1099655
dc.relation.sponsoredby: http://purl.org/au-research/grants/nhmrc/1173991
dc.relation.sponsoredby: http://purl.org/au-research/grants/nhmrc/1194410
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Artificial intelligence
dc.subject: Breast neoplasms
dc.subject: Diagnostic screening programs
dc.subject: Sensitivity and specificity
dc.subject: Humans
dc.subject: Female
dc.subject: Breast Neoplasms
dc.subject: Artificial Intelligence
dc.subject: Retrospective Studies
dc.subject: Prospective Studies
dc.subject: Cohort Studies
dc.subject: Mass Screening
dc.subject: Early Detection of Cancer
dc.subject: Mammography
dc.title: Artificial intelligence (AI) for breast cancer screening: BreastScreen population-based cohort study of cancer detection
dc.type: Journal Article
dcterms.source.volume: 90
dcterms.source.startPage: 104498
dcterms.source.issn: 2352-3964
dcterms.source.title: EBioMedicine
dc.date.updated: 2023-09-07T04:23:19Z
curtin.department: Curtin School of Population Health
curtin.department: Office of the Pro Vice Chancellor Health Sciences
curtin.accessStatus: Open access
curtin.faculty: Faculty of Health Sciences
curtin.contributor.orcid: Marinovich, Luke [0000-0002-3801-8180]
curtin.contributor.orcid: Pereira, Gavin [0000-0003-3740-8117]
curtin.contributor.researcherid: Pereira, Gavin [D-7136-2014]
curtin.identifier.article-number: 104498
dcterms.source.eissn: 2352-3964
curtin.contributor.scopusauthorid: Pereira, Gavin [35091486200]
curtin.repositoryagreement: V3

