Medbrief

Does AI Improve Paediatric Fracture Detection?

Edited by Anushree Chaphalkar

TOPLINE: 

Artificial intelligence (AI) accurately detected paediatric fractures and modestly improved the diagnostic accuracy of inexperienced physicians in the emergency department (ED), a study found; however, the authors recommended weighing this benefit against the economic cost.

METHODOLOGY: 

  • Researchers conducted a retrospective study involving 1672 radiographs of 1657 children (median age, 10.9 years; 59% boys) at a tertiary paediatric ED.
  • About 63% of the radiographs showed fractures of the upper extremities.
  • The stand-alone performance of a commercially available, deep learning–based AI tool was evaluated in a real-life cohort of consecutive radiographs and in a selected medicolegal cohort comprising three medicolegally relevant fracture types.
  • Three paediatric residents independently reviewed the radiographs before and after AI assistance to assess the impact of AI on diagnostic accuracy.

TAKEAWAY:

  • The AI tool showed high sensitivity (74%-98%), specificity (61%-100%), and accuracy (80%-94%) in the real-life cohort. The tool missed 6.4% of fractures, most of which were small ligamentous avulsions or torus fractures.
  • In the medicolegal cohort, the AI tool achieved a sensitivity of 100%, 96%, and 68% for proximal tibia, medial malleolus, and radial condyle fractures, respectively.
  • AI assistance improved resident physicians' patient-wise sensitivity (from 83.7% to 87.3%), specificity (from 90.7% to 92.4%), and accuracy (from 87.3% to 89.9%).
  • The rate of missed fractures decreased by 22% with AI assistance, which corrected 4.1% of the residents' initial errors but led to incorrect rejection of 1.5% of their accurate diagnoses.

IN PRACTICE:

"The AI software showed good stand-alone performance in a paediatric real-life cohort. In our university hospital setting of a tertiary centre, inexperienced residents benefit in a limited proportion of cases. This benefit must be weighed against the economic cost," the authors wrote.

SOURCE:

This study was led by Maria Ziegner, University Hospital, Leipzig, Germany, and Johanna Pape, Institute for Medical Informatics, Statistics and Epidemiology, Leipzig University, Leipzig, Germany. It was published online on April 7, 2025, in European Radiology.

LIMITATIONS:

This study was limited by its retrospective and single-centre design. Software updates between study initiation and publication may have affected the results. The real-life design, in which the diagnosis was established first without AI and immediately followed by AI assistance, may have made readers less willing to change their initial diagnosis. Additionally, the expert consensus used as ground truth may not have been perfect, despite being standard practice.

DISCLOSURES: 

This study received open access funding that was enabled and organised by Projekt DEAL. The AI tool described in this study was provided free of charge to the authors by ImageBiopsy Lab. The authors declared having no relationships with any sources.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
