Artificial intelligence (AI) for breast cancer screening: BreastScreen population-based cohort study of cancer detection
dc.contributor.author | Marinovich, Luke | |
dc.contributor.author | Wylie, Elizabeth | |
dc.contributor.author | Lotter, William | |
dc.contributor.author | Lund, Helen | |
dc.contributor.author | Waddell, Andrew | |
dc.contributor.author | Madeley, Carolyn | |
dc.contributor.author | Pereira, Gavin | |
dc.contributor.author | Houssami, Nehmat | |
dc.date.accessioned | 2023-09-07T04:23:19Z | |
dc.date.available | 2023-09-07T04:23:19Z | |
dc.date.issued | 2023 | |
dc.identifier.citation | Marinovich, M. and Wylie, E. and Lotter, W. and Lund, H. and Waddell, A. and Madeley, C. and Pereira, G. et al. 2023. Artificial intelligence (AI) for breast cancer screening: BreastScreen population-based cohort study of cancer detection. EBioMedicine. 90: pp. 104498-. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/93238 | |
dc.identifier.doi | 10.1016/j.ebiom.2023.104498 | |
dc.description.abstract |
Background: Artificial intelligence (AI) has been proposed to reduce false-positive screens, increase cancer detection rates (CDRs), and address resourcing challenges faced by breast screening programs. We compared the accuracy of AI versus radiologists in real-world population breast cancer screening, and estimated potential impacts on CDR, recall and workload for simulated AI-radiologist reading.
Methods: External validation of a commercially available AI algorithm in a retrospective cohort of 108,970 consecutive mammograms from a population-based screening program, with ascertained outcomes (including interval cancers by registry linkage). Area under the ROC curve (AUC), sensitivity and specificity for AI were compared with radiologists who interpreted the screens in practice. CDR and recall were estimated for simulated AI-radiologist reading (with arbitration) and compared with program metrics.
Findings: The AUC for AI was 0.83 compared with 0.93 for radiologists. At a prospective threshold, sensitivity for AI (0.67; 95% CI: 0.64–0.70) was comparable to radiologists (0.68; 95% CI: 0.66–0.71) with lower specificity (0.81 [95% CI: 0.81–0.81] versus 0.97 [95% CI: 0.97–0.97]). Recall rate for AI-radiologist reading (3.14%) was significantly lower than for the BreastScreen WA (BSWA) program (3.38%) (−0.25%; 95% CI: −0.31 to −0.18; P < 0.001). CDR was also lower (6.37 versus 6.97 per 1000) (−0.61; 95% CI: −0.77 to −0.44; P < 0.001); however, AI detected interval cancers that were not found by radiologists (0.72 per 1000; 95% CI: 0.57–0.90). AI-radiologist reading increased arbitration but decreased overall screen-reading volume by 41.4% (95% CI: 41.2–41.6).
Interpretation: Replacement of one radiologist by AI (with arbitration) resulted in lower recall and overall screen-reading volume. There was a small reduction in CDR for AI-radiologist reading. AI detected interval cancers that were not identified by radiologists, suggesting a potentially higher CDR if radiologists were unblinded to AI findings. These results indicate AI's potential role as a screen-reader of mammograms, but prospective trials are required to determine whether CDR could improve if AI detection were actioned in double-reading with arbitration.
Funding: National Breast Cancer Foundation (NBCF), National Health and Medical Research Council (NHMRC). | |
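The simulated reading protocol described in the abstract (AI replaces one of two human readers, with discordant reads sent to arbitration) can be sketched as follows. This is a minimal illustration of that decision rule, not the study's actual implementation; the function and parameter names are hypothetical.

```python
def simulated_recall_decision(radiologist_recall: bool,
                              ai_flag: bool,
                              arbitrate) -> bool:
    """Decide whether a screen is recalled under simulated AI-radiologist
    double-reading with arbitration.

    - If the radiologist and the AI agree, their shared decision stands.
    - If they disagree, a third (arbitration) read decides; `arbitrate`
      is a callable standing in for the human arbitrator.
    """
    if radiologist_recall == ai_flag:
        return radiologist_recall
    return arbitrate()


# Example: concordant reads never reach arbitration; discordant reads do.
agreed = simulated_recall_decision(True, True, arbitrate=lambda: False)
disputed = simulated_recall_decision(False, True, arbitrate=lambda: True)
```

Under this rule, replacing one radiologist with AI removes one human read per screen but adds arbitration reads for discordant cases, which is consistent with the abstract's finding of increased arbitration alongside a 41.4% reduction in overall screen-reading volume.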
dc.language | eng | |
dc.publisher | Elsevier | |
dc.relation.sponsoredby | http://purl.org/au-research/grants/nhmrc/1099655 | |
dc.relation.sponsoredby | http://purl.org/au-research/grants/nhmrc/1173991 | |
dc.relation.sponsoredby | http://purl.org/au-research/grants/nhmrc/1194410 | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
dc.subject | Artificial intelligence | |
dc.subject | Breast neoplasms | |
dc.subject | Diagnostic screening programs | |
dc.subject | Sensitivity and specificity | |
dc.subject | Humans | |
dc.subject | Female | |
dc.subject | Breast Neoplasms | |
dc.subject | Artificial Intelligence | |
dc.subject | Retrospective Studies | |
dc.subject | Prospective Studies | |
dc.subject | Cohort Studies | |
dc.subject | Mass Screening | |
dc.subject | Early Detection of Cancer | |
dc.subject | Mammography | |
dc.title | Artificial intelligence (AI) for breast cancer screening: BreastScreen population-based cohort study of cancer detection | |
dc.type | Journal Article | |
dcterms.source.volume | 90 | |
dcterms.source.startPage | 104498 | |
dcterms.source.issn | 2352-3964 | |
dcterms.source.title | EBioMedicine | |
dc.date.updated | 2023-09-07T04:23:19Z | |
curtin.department | Curtin School of Population Health | |
curtin.department | Office of the Pro Vice Chancellor Health Sciences | |
curtin.accessStatus | Open access | |
curtin.faculty | Faculty of Health Sciences | |
curtin.contributor.orcid | Marinovich, Luke [0000-0002-3801-8180] | |
curtin.contributor.orcid | Pereira, Gavin [0000-0003-3740-8117] | |
curtin.contributor.researcherid | Pereira, Gavin [D-7136-2014] | |
curtin.identifier.article-number | 104498 | |
dcterms.source.eissn | 2352-3964 | |
curtin.contributor.scopusauthorid | Pereira, Gavin [35091486200] | |
curtin.repositoryagreement | V3 |