BMC Oral Health, vol. 26, no. 1, 2026 (SCI-Expanded, Scopus)
Background: Artificial Intelligence (AI) is reshaping diagnostics and disease prevention in the dental domain. Panoramic X-ray imaging is central to this progress but demands large, high-quality annotated datasets. We therefore present AKUDENTAL, a new dataset for instance segmentation of dental radiographs, to serve as a resource for model development and to assess the challenges of generalizability.

Methods: We annotated 333 panoramic images, labeling 9,956 structures across 32 individual teeth and three restorative categories: implants, bridges, and crown–filling. We established semantic segmentation, object detection, and instance-segmentation baselines using UNet, DeepLabV3+, YOLOv11, and Mask R-CNN models. Generalizability was assessed via 5-fold cross-validation and a cross-dataset evaluation on the Tufts, DENTEX, and Dual-labeled datasets.

Results: The cross-dataset evaluation revealed widely varying performance, with mean Average Precision (mAP) scores for multiclass detection ranging from a low of 0.34 on the DENTEX dataset to 0.71 on the Dual-labeled dataset, and identified variations in annotation protocols as a significant factor behind these differences. Our analysis illustrates how such discrepancies can skew the interpretation of model performance.

Conclusions: The AKUDENTAL dataset provides a robust new resource for the field. The performance disparities revealed in our cross-dataset analysis do not reflect model limitations; rather, they strengthen the argument that annotation inconsistencies are a critical barrier to developing universally applicable AI. This underscores the need for broader standardization in data annotation, extending beyond tooth identification to encompass common dental procedures and restorations.
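As a concrete illustration of one of the instance-segmentation baselines named in Methods, the sketch below adapts torchvision's off-the-shelf Mask R-CNN to a label space matching the abstract: 32 tooth classes plus three restorative categories, i.e. 35 foreground classes plus background. The use of torchvision and the exact class count are assumptions for the sketch; the paper's actual training configuration is not detailed here.

```python
# Minimal sketch: adapting torchvision's Mask R-CNN to an AKUDENTAL-style
# label space. The class count (35 foreground classes + background) is
# inferred from the abstract; the authors' actual setup may differ.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 1 + 32 + 3  # background + 32 teeth + implant/bridge/crown-filling

def build_baseline(num_classes: int = NUM_CLASSES):
    # Start from a COCO-pretrained Mask R-CNN with a ResNet-50 FPN backbone.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box head so it predicts this class set.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask head likewise (256 is the conventional hidden size).
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model
```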
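The 5-fold cross-validation protocol in Methods can be sketched at the data-split level with a standard scikit-learn KFold over image identifiers. The fixed seed and the `image_ids` naming scheme are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of 5-fold cross-validation over the 333 annotated images.
# Splitting is done over whole images so no radiograph leaks between folds.
from sklearn.model_selection import KFold

image_ids = [f"akudental_{i:03d}" for i in range(333)]  # hypothetical IDs

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(image_ids)):
    train_ids = [image_ids[i] for i in train_idx]
    val_ids = [image_ids[i] for i in val_idx]
    print(f"fold {fold}: {len(train_ids)} train / {len(val_ids)} val images")
    # ...train one baseline per fold and average the validation metrics...
```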
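The mAP figures in Results are of the kind produced by a COCO-style evaluation; assuming COCO-format ground-truth and detection files (the file names below are placeholders), pycocotools computes them as follows. Whether the authors used this exact tooling is an assumption.

```python
# Minimal sketch of a COCO-style mAP evaluation, as commonly used to report
# multiclass detection scores like those in Results. File names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("akudental_val_annotations.json")    # ground-truth annotations
coco_dt = coco_gt.loadRes("model_detections.json")  # model predictions

# iouType="bbox" scores detection; use "segm" for instance-segmentation masks.
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

map_50_95 = evaluator.stats[0]  # mAP averaged over IoU thresholds 0.50:0.95
print(f"mAP@[.50:.95] = {map_50_95:.3f}")
```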