SERGIOURI.BE

Books

Schwendicke, F., Chaudhari, P. K., Dhingra, K., Uribe, S. E., & Hamdan, M. (Eds.). (2025). Artificial Intelligence for Oral Health Care: Applications and Future Prospects. Springer Nature.




Selected AI papers

Core outcome measures in dental computer-vision studies (DentalCOMS)
Question: What core outcome measures should be standardized for reporting dental computer vision studies to improve comparability and reduce bias?
Answer: Eight core outcome measures were identified through expert consensus: confusion matrix, accuracy, sensitivity, specificity, precision, F1 score, area under the receiver operating characteristic curve (ROC-AUC), and area under the precision-recall curve (PR-AUC). Human expert assessment and diagnostic accuracy considerations remain essential for clinically meaningful evaluation.
For: Researchers and educators
Citation: Büttner M, Rokhshad R, Brinz J, Uribe SE, et al. Core outcomes measures in dental computer vision studies (DentalCOMS). J Dent. 2024;150:105318. doi: 10.1016/j.jdent.2024.105318. https://doi.org/10.1016/j.jdent.2024.105318
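
All eight measures above can be computed from a model's predictions on a labelled test set. A minimal sketch in Python using scikit-learn (the library choice and the toy data are assumptions for illustration; the paper does not prescribe any implementation):

# Minimal sketch of the eight DentalCOMS measures for a binary classifier.
# scikit-learn is an assumed tooling choice; y_true/y_score are hypothetical toy data.
import numpy as np
from sklearn.metrics import (confusion_matrix, accuracy_score, recall_score,
                             precision_score, f1_score, roc_auc_score,
                             average_precision_score)

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])                        # ground truth (1 = lesion present)
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3, 0.55, 0.05])  # model probabilities
y_pred  = (y_score >= 0.5).astype(int)                                    # thresholded labels for the binary metrics

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "confusion matrix (TN, FP, FN, TP)": (tn, fp, fn, tp),
    "accuracy":    accuracy_score(y_true, y_pred),
    "sensitivity": recall_score(y_true, y_pred),              # TP / (TP + FN)
    "specificity": tn / (tn + fp),                            # TN / (TN + FP)
    "precision":   precision_score(y_true, y_pred),           # TP / (TP + FP)
    "F1 score":    f1_score(y_true, y_pred),
    "ROC-AUC":     roc_auc_score(y_true, y_score),            # computed from scores, not thresholded labels
    "PR-AUC":      average_precision_score(y_true, y_score),  # average-precision estimate of the PR-curve area
}
for name, value in metrics.items():
    print(name, value)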

Influence of a deep-learning tool on detecting apical radiolucencies
Question: Does a deep learning model improve oral radiologists' ability to detect periapical radiolucencies on periapical radiographs?
Answer: No. AI did not significantly enhance radiologists' overall diagnostic accuracy (AFROC-AUC, sensitivity, specificity, and ROC-AUC showed no statistically significant differences). However, AI reduced diagnostic time (β = 12, 95% CI 11-13, P < 0.001) and may benefit non-expert clinicians. Radiologist expertise remained the primary factor for diagnostic accuracy.
For: Clinicians and researchers
Citation: Hamdan MH, Uribe SE, Tuzova L, et al. The influence of a deep learning tool on the performance of oral and maxillofacial radiologists in the detection of apical radiolucencies. Dentomaxillofac Radiol. 2025;54(2):118-124. doi: 10.1093/dmfr/twae054. https://doi.org/10.1093/dmfr/twae054

Deep learning for caries detection: systematic review
Question: What is the diagnostic accuracy of deep learning models for detecting caries lesions across different dental imaging modalities?
Answer: Deep learning models showed promising accuracy across imaging types: 71-96% on intraoral photographs, 82-99.2% on periapical radiographs, 87.6-95.4% on bitewing radiographs, 68-78% on near-infrared transillumination, 88.7-95.2% on optical coherence tomography, and 86.1-96.1% on panoramic radiographs. However, study quality was generally low with only 26% showing low risk of bias across all domains.
For: Clinicians and researchers
Citation: Mohammad-Rahimi H, Motamedian SR, Rohban MH, Uribe SE, et al. Deep learning for caries detection: A systematic review. J Dent. 2022;122:104115. doi: 10.1016/j.jdent.2022.104115. https://doi.org/10.1016/j.jdent.2022.104115

Federated vs local vs central learning for tooth segmentation on panoramics
Question: How does federated learning compare to local and central learning approaches for training tooth segmentation models on panoramic radiographs across multiple centers?
Answer: Federated learning outperformed local learning in 8 of 9 centers (p<0.05) and showed superior generalizability across all centers. Central learning achieved the highest performance and generalizability. When data pooling is not feasible due to privacy constraints, federated learning provides a viable alternative to improve model generalizability compared to isolated local training.
For: Researchers and educators
Citation: Schneider L, Rischke R, Krois J, Uribe SE, et al. Federated vs Local vs Central Deep Learning of Tooth Segmentation on Panoramic Radiographs. J Dent. 2023;135:104556. doi: 10.1016/j.jdent.2023.104556. https://doi.org/10.1016/j.jdent.2023.104556
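
For readers unfamiliar with the federated setup: each center trains on its own data, and only model parameters are shared and aggregated, so images never leave the center. A minimal FedAvg-style sketch of that aggregation step (illustrative only, with hypothetical numbers; it does not reproduce the study's actual training pipeline):

# Illustrative FedAvg-style aggregation: centers train locally, a server averages
# their parameters weighted by dataset size. Hypothetical numbers, not the study's data.
import numpy as np

def fedavg(center_weights, center_sizes):
    """Weighted average of per-center parameter vectors, weighted by local dataset size."""
    total = sum(center_sizes)
    return sum(w * (n / total) for w, n in zip(center_weights, center_sizes))

# Hypothetical flattened parameter vectors from three centers after one local training round
weights = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
sizes   = [120, 80, 200]                       # hypothetical number of radiographs per center
global_weights = fedavg(weights, sizes)        # aggregated model broadcast back to all centers
print(global_weights)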

Core education curriculum for AI in oral and dental healthcare
Question: What should a core education curriculum on artificial intelligence for oral and dental healthcare cover?
Answer: Four learning domains were identified: (1) Basic AI definitions, machine learning principles, training/validation concepts, and explainability requirements; (2) Use cases and typical AI software setups for dental purposes; (3) Evaluation metrics, interpretation, and health outcome impacts; (4) Issues of generalizability, representativeness, explainability, autonomy, accountability, and governance needs. Most outcomes target the "knowledge" level of learning.
For: Educators
Citation: Schwendicke F, Chaurasia A, Wiegand T, Uribe SE, et al. Artificial intelligence for oral and dental healthcare: Core education curriculum. J Dent. 2023;128:104363. doi: 10.1016/j.jdent.2022.104363. https://doi.org/10.1016/j.jdent.2022.104363

Estimating the use of ChatGPT in dental research publications
Question: How widespread is the use of ChatGPT in dental research publications since its public release?
Answer: Evidence shows widespread ChatGPT adoption in dental research. The frequency of signaling words rose from 47.1 per 10,000 papers before release to 224.2 per 10,000 papers after release (an increase of 177.2 per 10,000 papers, p = 0.014). The word "delve" showed the largest increase (ratio = 17.0), indicating extensive integration of generative AI in scientific writing.
For: Researchers and educators
Citation: Uribe SE, Maldupa I. Estimating the use of ChatGPT in dental research publications. J Dent. 2024;149:105275. doi: 10.1016/j.jdent.2024.105275. https://doi.org/10.1016/j.jdent.2024.105275
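
The reported figures are frequencies of signaling words normalized per 10,000 publications. A back-of-the-envelope sketch of that normalization (the counts below are hypothetical, not the study's underlying data):

# Illustrative "per 10,000 papers" rate comparison with hypothetical counts.
def rate_per_10k(signal_word_count: int, total_papers: int) -> float:
    """Frequency of signaling words, expressed per 10,000 papers."""
    return 10_000 * signal_word_count / total_papers

before = rate_per_10k(signal_word_count=94, total_papers=20_000)    # hypothetical pre-release counts
after  = rate_per_10k(signal_word_count=449, total_papers=20_030)   # hypothetical post-release counts
print(f"before:   {before:.1f} per 10,000 papers")
print(f"after:    {after:.1f} per 10,000 papers")
print(f"increase: {after - before:.1f} per 10,000 papers")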

AI chatbots and large language models in dental education: worldwide survey
Question: What are dental educators' perceptions of AI chatbots and large language models for dental education globally?
Answer: Among 428 dental educators, 31% already use AI tools and 64% recognize their educational potential. Educators believe AI chatbots could enhance knowledge acquisition (74.3%), research (68.5%), and clinical decision-making (63.6%). However, 53.9% worry about reduced human interaction. Chief concerns center on the absence of clear guidelines and training for AI chatbot use.
For: Educators
Citation: Uribe SE, Maldupa I, Kavadella A, et al. Artificial intelligence chatbots and large language models in dental education: Worldwide survey of educators. Eur J Dent Educ. 2024. doi: 10.1111/eje.13009. https://doi.org/10.1111/eje.13009

Publicly available dental image datasets for AI
Question: What publicly available dental imaging datasets exist for AI development, and what are their characteristics and limitations?
Answer: Only 16 unique dental imaging datasets were identified from 131,028 screened records. Most focused on tooth segmentation (62.5%) and labeling (56.2%) using panoramic radiography (58.8%). China contributed the most images (2,413). Critical limitations include: only 31.2% reported ethical approval, 56.25% lacked licensing information, 75% contained annotations but with unclear labeling methods, and poor FAIRness scores overall.
For: Researchers and educators
Citation: Uribe SE, Issa J, Brinz J, et al. Publicly Available Dental Image Datasets for Artificial Intelligence. J Dent Res. 2024;103(13). doi: 10.1177/00220345241272052. https://doi.org/10.1177/00220345241272052

Artificial intelligence in dental research: Checklist for authors, reviewers, readers
Question: How can authors, reviewers, and readers improve the quality and reproducibility of AI studies in dentistry?
Answer: A 31-item checklist was developed covering study goals, data sampling, reference tests, model training, evaluation, uncertainty, explainability, and performance metrics, aiming to standardize planning, reporting, and review.
For: Researchers, Reviewers, Educators
Citation: Schwendicke F, Singh T, Wiegand T, Uribe SE, Krois J, et al. Artificial intelligence in dental research: Checklist for authors, reviewers, readers. J Dent. 2021;107:103610. doi: 10.1016/j.jdent.2021.103610. https://doi.org/10.1016/j.jdent.2021.103610

Evaluating dental AI research papers: Key considerations for editors and reviewers
Question: What should editors and reviewers prioritize when evaluating dental AI research papers?
Answer: Four critical indicators were identified: relevance to clinical or methodological problems, methodological rigor, reproducibility via open data/code or demos, and adherence to ethical and transparent reporting. Common rejection reasons include limited novelty, weak methodology, lack of external validation, and exaggerated claims.
For: Editors, Reviewers, Researchers
Citation: Uribe SE, Hamdan MH, Valente NA, Yamaguchi S, Umer F, Tichy A, Pauwels R, Schwendicke F, et al. Evaluating dental AI research papers: Key considerations for editors and reviewers. J Dent. 2025;160:105867. doi: 10.1016/j.jdent.2025.105867. https://doi.org/10.1016/j.jdent.2025.105867

FDI White Paper: Artificial Intelligence for Dentistry
Question: What are the current applications, opportunities, and governance requirements for artificial intelligence in dental practice, education, research, and public health?
Answer: AI applications in dentistry span four domains: individual patient care (image analysis, data synthesis, evidence-supported treatment planning, patient interaction), community and public health (surveillance, accessibility), education and workforce (planning, digital literacy, simulation-based learning), and research (pattern identification, data pooling). Key governance principles include human autonomy protection, safety assurance, transparency, accountability, inclusiveness, and sustainability. Major challenges include bias, limited generalizability, data accessibility, and interoperability issues.
For: Clinicians, researchers, and educators
Citation: Schwendicke F, Blatz M, Uribe S, et al. Artificial Intelligence for dentistry. FDI World Dental Federation White Paper. 2023. Available at: https://www.fdiworlddental.org/sites/default/files/2023-01/FDI%20ARTIFICIAL%20INTELLIGENCE%20WORKING%20GROUP%20WHITE%20PAPER_0.pdf

Address

Dzirciema iela 20, Rīga, LV-1007, Latvia

Contact 

[email protected]