J Oral Med Oral Surg, Volume 31, Number 1, 2025
Article Number: 7
Number of page(s): 10
DOI: https://doi.org/10.1051/mbcb/2025008
Published online: 24 March 2025
Original Research Article
Fully automated deep learning framework for detection and classification of impacted mandibular third molars in panoramic radiographs
1 Department of Oral Diagnostic Sciences, Faculty of Dentistry, SEGi University, No. 9 Jalan Teknologi, Taman Sains, Petaling Jaya, Kota Damansara, Selangor 47810, Malaysia
2 School of Computing, Faculty of Computing and Engineering Technology, Asia Pacific University of Technology and Innovation (APU), Lot 6, Technology Park Malaysia, Bukit Jalil, Kuala Lumpur 57000, Malaysia
3 Oxford Internet Institute, University of Oxford, 41 St Giles, Oxford OX1 3JS, UK
4 Preventive Dental Science Department, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
5 King Abdullah International Medical Research Center, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
6 Department of Periodontics and Implantology, Faculty of Dentistry, SEGi University, No. 9 Jalan Teknologi, Taman Sains, Petaling Jaya, Kota Damansara, Selangor 47810, Malaysia
* Correspondence: dr.suri88@gmail.com
Received: 20 November 2024
Accepted: 3 February 2025
Introduction: Mandibular third molars (MTMs) are the most frequently impacted teeth, making their detection and classification essential before surgical extraction. This study aims to develop and assess the accuracy of a deep learning model for detecting and classifying impacted mandibular third molars (IMTMs) using panoramic radiographs (PRs).
Materials and methods: The study utilized a dataset of 1100 PRs with 1200 IMTMs and 711 PRs without MTMs. An oral radiologist validated the annotations, and the data were split into training, validation, and testing sets. The Sobel Third Molar Detection (STMD) model, built on the VGG16 architecture, identified MTMs. Detected MTMs were located using the YOLOv7 model and classified per Winter's classification via a ResNet50-based prediction model.
Results: The VGG16-based detection model achieved a testing accuracy of 93.51%, with a precision of 94.64%, a recall of 89.47%, and an F1 score of 91.97%. The ResNet50-based classification model attained a testing accuracy of 92.17%, a precision of 92.1%, a recall of 92.17%, and an AUC of 98.28%. These findings demonstrate the high accuracy and reliability of both models.
Conclusion: VGG16 and ResNet50, integrated with YOLOv7, demonstrated high accuracy, suggesting that the automatic detection and classification of IMTMs can be significantly improved using these models.
Key words: Classification / deep learning / impacted teeth / mandible / radiographs / third molar
© The authors, 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Introduction
Mandibular third molars (MTMs) are the most frequently impacted teeth among young adults, and their extraction is one of the most commonly performed minor oral surgical procedures by dentists [1]. The prevalence of MTM impaction ranges from 16.7% to 68.6% across populations [2]. In Malaysia, the prevalence is relatively high, varying between 67.6% and 73.5%, with the highest rates observed among the Chinese, followed by Malays and Indians [3]. Impacted teeth are those that cannot erupt completely because of obstruction by soft tissue, bone, or both [4]. Because third molars are the last teeth to develop and erupt, they face greater eruption challenges than other teeth, as less space remains within the mandibular arch [5].
Impacted mandibular third molars (IMTMs) are associated with various odontogenic pathologies, including pericoronitis, cystic lesions, neoplasms, dental caries, and root resorption [6]. Surgical extraction is therefore often necessary and involves varying degrees of difficulty, determined primarily by the tooth's position. Common types of IMTMs, classified by position, include mesioangular, vertical, distoangular, and horizontal, each presenting increasing levels of extraction difficulty [7]. A thorough preoperative evaluation of the third molar's position is therefore crucial in determining the specific surgical plan [6].
Panoramic radiographs (PRs) are the most commonly used imaging modality for detecting and classifying IMTMs, assessing their difficulty level, and evaluating their relationship with adjacent vital structures [8]. However, conventional manual analysis of radiographs significantly increases dental professionals' daily workload and is susceptible to subjective factors such as experience level, human error, and fatigue, introducing considerable uncertainty [9]. Computer-assisted diagnostic systems can enhance dentists' diagnostic and therapeutic capabilities by mitigating these challenges; hence, there is a need for an automated, accurate, and rapid model for the detection and classification of IMTMs. Artificial intelligence-based approaches address the limitations of traditional methods by automatically identifying optimal representations and learning features from raw data, eliminating the need for hand-crafted features [6].
Artificial intelligence (AI) encompasses computer technology designed to emulate critical thinking, decision-making, and intelligent behaviour akin to human cognition to address real-world problems [10]. Machine learning (ML), a subset of AI, comprises algorithms that learn from extensive datasets, enabling computers to solve problems by executing specific tasks. These algorithms improve their performance as they process more data [11]. In the healthcare sector, AI and ML have been extensively utilized over the past two decades [12,13]. In dental radiology, deep learning models have effectively detected and classified dental caries in intraoral radiographs, outperforming traditional diagnostic methods in terms of accuracy and efficiency [14]. AI has also been applied to automate the detection of periodontal disease and impacted teeth in panoramic radiographs [15]. Additionally, AI models have demonstrated superior performance in periodontology, orthodontics, and endodontics [12,13].
In medical imaging, deep learning has been widely adopted to identify abnormalities in radiographs, CT scans, and MRI images, such as lung nodules in chest CT scans and breast cancer in mammograms [16,17]. Beyond imaging, machine learning has been employed to predict patient outcomes and has shown high accuracy in diagnosing pneumonia from chest X-rays. AI-powered systems have also advanced histopathological image analysis, enabling faster and more precise diagnoses [18].
Machine learning (ML) comprises supervised, semi-supervised, and unsupervised learning. Supervised learning uses labelled data to train algorithms for feature recognition and prediction, while unsupervised learning autonomously identifies patterns in unlabelled data. Semi-supervised learning combines both labelled and unlabelled data [8]. Deep learning (DL), a subset of ML, employs artificial neural networks (ANNs) modelled on neural processes, with convolutional neural networks (CNNs) being particularly effective for tasks such as object detection, classification, and segmentation [8]. In healthcare, DL systems enhance diagnostics, reduce errors, improve efficiency, and support radiologists by managing workloads, streamlining reporting, and aiding training, thereby helping to address the shortage of radiologists [4].
The utilization of deep learning algorithms for various dental applications has been well documented. These applications include quantifying the number of teeth [19–22], recognizing root morphology [23,24], detecting third molars [25], automatically identifying cephalometric landmarks on radiographs [26], diagnosing periodontitis [27,28], predicting the likelihood of third molar eruption, forecasting facial swelling [29], and estimating the time required to extract third molars [9,30]. However, few investigations have focused on the detection, classification, and segmentation of IMTMs. Recognizing the prevalence of IMTMs and the challenges of their detection and classification in dental radiology, we aimed to develop a CNN-based deep learning algorithm using PRs.
Previous studies utilized object detection algorithms such as YOLOv3 [6] and Faster R-CNN [7] to detect and classify impacted third molars in dental radiographs, finding that YOLOv3 outperformed Faster R-CNN in detection speed while maintaining high accuracy, making it particularly suitable for real-time applications such as automated dental radiograph analysis. YOLOv3 is a real-time object detection algorithm that frames object detection as a single regression problem: it divides an image into a grid and simultaneously predicts bounding boxes and class probabilities for each grid cell, resulting in high efficiency. In contrast, Faster R-CNN is a two-stage object detection algorithm that first generates region proposals likely to contain objects and then classifies these regions while refining their bounding boxes. While renowned for its accuracy, Faster R-CNN is computationally more intensive than YOLOv3. Celik et al. employed YOLOv3 [6], and Fukuda et al. used Faster R-CNN [7], for the detection and classification of impacted third molars, focusing exclusively on mesioangular and horizontal impactions; other common positions, such as vertical and distoangular impactions, were overlooked. Furthermore, their models were trained on small datasets and did not account for true negatives (cases without third molars), limiting their robustness.
Currently, no unified deep learning framework exists for the detection and classification of IMTMs. Our framework uniquely integrates three models (VGG16, YOLOv7, and ResNet50) to identify and classify IMTMs following Winter's classification. This classification is based on the positional relationship of the impacted third molar to the adjacent second molar, categorizing impactions into four types (mesioangular, distoangular, horizontal, and vertical) according to the angle of the impacted tooth relative to the long axis of the second molar [31]; a simple angular decision rule of this kind is sketched below. Designed as a supportive diagnostic tool for oral health professionals, this system aims to alleviate workload, reduce diagnostic errors, and allow a greater focus on clinical care. It also facilitates more accurate and faster decision-making, ultimately enhancing patient care. The algorithm's detection and classification performance was assessed using accuracy, recall, F1 score, and precision.
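For illustration only, the angular rule underlying Winter's classification can be written as a short function. The cut-off angles below follow one commonly cited convention and are assumptions for demonstration; they are not the criteria applied by the annotating radiologists in this study.

```python
# Illustrative sketch of Winter's classification as an angle rule. The angle is
# measured between the long axes of the impacted third molar and the adjacent
# second molar; the thresholds below are a common convention, assumed here.

def winter_class(angle_deg: float) -> str:
    """Map a third-molar/second-molar long-axis angle (degrees) to a Winter class."""
    if -10 <= angle_deg <= 10:
        return "vertical"        # roughly parallel to the second molar
    if 10 < angle_deg < 80:
        return "mesioangular"    # tilted towards the second molar
    if angle_deg >= 80:
        return "horizontal"      # lying roughly perpendicular to the second molar
    return "distoangular"        # tilted away from the second molar (negative angle)

print(winter_class(35))  # -> mesioangular
```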
Materials and methods
Subjects
A total of 1100 PRs, representing 1200 IMTMs from individuals aged 18 to 65, were selected from a radiology database spanning January 2013 to December 2023 (Tab. I). Inclusion criteria required fully developed roots in impacted teeth, while radiographs with artifacts, motion-induced distortions, positional distortions, or incomplete root formation were excluded. Additionally, 711 PRs without MTMs were included as true negatives for training. The dataset was divided into training, testing, and validation sets using a 70:20:10 ratio. Two experienced radiologists validated the datasets. The Winter classification system was applied to categorize IMTMs on PRs according to their angulation relative to the second molar's long axis, grouping them as mesioangular, horizontal, distoangular, or vertical impactions.
Tab. I. The distribution of datasets.
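As a minimal illustration of the 70:20:10 train/test/validation split described above, the sketch below uses scikit-learn's train_test_split. The two-step splitting strategy, stratification, and fixed random seed are assumptions for demonstration, not details reported by the study.

```python
# Sketch of a 70% train / 20% test / 10% validation split (ratios as in the text).
from sklearn.model_selection import train_test_split

def split_70_20_10(paths, labels, seed=42):
    """Return (train, test, validation) splits of (paths, labels)."""
    # First hold out 30% of the data, then divide it into 20% test and 10% validation.
    train_p, hold_p, train_y, hold_y = train_test_split(
        paths, labels, test_size=0.30, random_state=seed, stratify=labels)
    test_p, val_p, test_y, val_y = train_test_split(
        hold_p, hold_y, test_size=1 / 3, random_state=seed, stratify=hold_y)
    return (train_p, train_y), (test_p, test_y), (val_p, val_y)
```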
Dataset preprocessing: annotation protocol and augmentation techniques
The original DICOM files, initially at a resolution of 2943 × 1435, were downscaled to 224 × 224 to optimize model training and computational efficiency. Two oral radiologists, each with over a decade of expertise, annotated the dataset using LabelImg software, enclosing IMTM crowns and roots within bounding boxes. To enhance the training set, data augmentation techniques including rotations (45°), width/height shifts (0–10%), shearing (0–10%), zooming (0–10%), and horizontal flipping were employed.
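The augmentation settings listed above map directly onto Keras' ImageDataGenerator, as in the sketch below. Parameter values mirror the text; the rescaling step and the directory layout in the usage comment are assumptions.

```python
# Minimal sketch of the stated augmentation settings using tf.keras.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_augmenter = ImageDataGenerator(
    rotation_range=45,        # rotations up to 45 degrees
    width_shift_range=0.10,   # width shifts of 0-10%
    height_shift_range=0.10,  # height shifts of 0-10%
    shear_range=0.10,         # shearing of 0-10%
    zoom_range=0.10,          # zooming of 0-10%
    horizontal_flip=True,     # horizontal flipping
    rescale=1.0 / 255,        # pixel normalisation (assumption)
)

# Example usage with a directory of 224 x 224 training images (path is hypothetical):
# train_gen = train_augmenter.flow_from_directory(
#     "data/train", target_size=(224, 224), batch_size=16, class_mode="binary")
```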
Deep learning framework
The framework illustrated in Figure 1 was designed to identify the presence of a third molar on a PR, determine the presence and side (right and/or left) of an impaction, and classify the specific type of impaction in the mandible. The framework first redacts patient data from the PR using Keras OCR, a pre-trained model that extracts text regions as rectangular coordinates, which are then masked with NumPy and OpenCV. After redaction, the PR undergoes edge detection via the Sobel operator, producing an edge-enhanced image that is subsequently analyzed by a fine-tuned VGG16 model to predict the presence of MTMs. If an MTM is detected, the redacted image is processed by a custom YOLOv7 model to locate it. The YOLOv7 output identifies left or right positioning, producing cropped images focused on the MTMs, which are then classified by a ResNet50 model. Figure 2A displays the input image, while Figure 2B illustrates the framework's output. The outputs consist of color-coded and annotated images, showing predictions with labeled boxes and class names positioned in the top-left corner of the output image.
Fig. 1 Framework design for detection and classification of IMTMs.
Fig. 2 A displays the input image, while B illustrates the framework's output used in the study.
Results
VGG16 MTM detection model
The first model processes Sobel edge-detected images, classifying them as “Third Molar” or “No Third Molar”. It uses a VGG16 base model with frozen layers and pre-trained ImageNet weights, followed by two fully connected layers (1000 and 500 neurons) with ReLU (Rectified Linear Unit) activation and L2 regularization (0.01) to prevent overfitting. The output layer consists of a single neuron with sigmoid activation and a decision threshold of 0.25 (scores below the threshold indicate “Third Molar”; scores above indicate “No Third Molar”). A batch size of 16 and the EarlyStopping() and ReduceLROnPlateau() Keras callbacks were used. This threshold adjustment addresses class imbalance, reducing the risk of false negatives.
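The following is a minimal sketch of this detection model as described: frozen ImageNet VGG16 backbone, two dense layers (1000 and 500 units, ReLU, L2 = 0.01), and a single sigmoid output with a 0.25 decision threshold. The flattening layer, optimiser, learning-rate schedule, and callback patience values are assumptions.

```python
# Sketch of the VGG16-based third molar detection head.
from tensorflow.keras import Model, layers, regularizers
from tensorflow.keras.applications import VGG16
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # frozen convolutional layers

x = layers.Flatten()(base.output)
x = layers.Dense(1000, activation="relu", kernel_regularizer=regularizers.l2(0.01))(x)
x = layers.Dense(500, activation="relu", kernel_regularizer=regularizers.l2(0.01))(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

callbacks = [EarlyStopping(patience=10, restore_best_weights=True),
             ReduceLROnPlateau(patience=5)]
# model.fit(train_gen, validation_data=val_gen, callbacks=callbacks)  # batch size 16 set in the generator

# Decision rule with the 0.25 threshold described in the text
# (scores below the threshold are read as "Third Molar").
def predict_label(score: float, threshold: float = 0.25) -> str:
    return "Third Molar" if score < threshold else "No Third Molar"
```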
Figures 3 and 4 illustrate the smooth convergence of the loss function, ultimately reaching its optimal value. The model attained a training accuracy of 96%, with a validation accuracy of 91%, and a testing accuracy of 93.51% for Sobel Third Molar Detection. Additionally, the model demonstrated a precision of 94.64%, a recall of 89.47%, and an F1 score of 91.97%.
Fig. 3 Training accuracy versus validation accuracy for the VGG16 MTM model.
Fig. 4 Loss curve comparing training and validation loss for the VGG16 MTM model.
Region of Interest (ROI) extraction
After detecting the MTMs, the framework extracts the region of interest (ROI). YOLOv7 was custom-trained for single-class detection, labelling all IMTMs as positive. The learning rate was set to 0.75, and the Intersection over Union (IoU) threshold was 0.65, determining the required overlap between predicted and ground-truth boxes. Training was tested at image resolutions of 640, 448, and 224, with the 224 and 448 resolutions yielding a similar accuracy of 77%. Because bounding box precision affects downstream accuracy, our study prioritized high recall to capture all instances. The 224 × 224 model, achieving a recall of 1 and an F1 score of 83%, was selected for the framework. Figure 5 displays ROI samples of mesioangular (5A), distoangular (5B), horizontal (5C), and vertical (5D) angulations.
Fig. 5 ROI samples of mesioangular (5A), distoangular (5B), horizontal (5C), and vertical (5D) angulations.
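A sketch of the ROI-extraction step is shown below: a YOLOv7 detection, given here as pixel (x1, y1, x2, y2) coordinates, is cropped to the third-molar region and the mandibular side is inferred from the box centre. The detection output format and the side rule are assumptions; YOLOv7 itself is run from its own repository and is not reproduced here.

```python
# Crop the detected MTM region and report the patient's side of the mandible.
import cv2
import numpy as np

def crop_roi(image: np.ndarray, box_xyxy, out_size=(224, 224)):
    """Return the resized ROI and the mandibular side for one detection."""
    x1, y1, x2, y2 = [int(v) for v in box_xyxy]
    roi = cv2.resize(image[y1:y2, x1:x2], out_size)   # size expected by ResNet50
    centre_x = (x1 + x2) / 2
    # On a standard PR the patient's left side appears on the right of the image.
    side = "left" if centre_x > image.shape[1] / 2 else "right"
    return roi, side
```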
ROI-based ResNet50 IMTM classification model
The final model in the framework used ResNet50 as a backbone, taking cropped 224 × 224 ROI images of third molars as input to classify the impaction type (Figs. 5A–5D). Built on ImageNet-pretrained ResNet50, it included two dense layers with 1000 and 500 neurons, using L2 regularization of 0.03 and dropout rates of 40% and 30%, respectively. The output layer had 4 neurons with Softmax activation. The EarlyStopping() and ReduceLROnPlateau() Keras callbacks were used, with training in batches of 32.
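A minimal sketch of this classifier as described follows: ImageNet ResNet50 backbone, dense layers of 1000 and 500 units with L2 = 0.03 and dropout of 40% and 30%, and a 4-way softmax output. The pooling choice, optimiser, and callback settings are assumptions.

```python
# Sketch of the ResNet50-based impaction-type classifier.
import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")

x = layers.Dense(1000, activation="relu",
                 kernel_regularizer=regularizers.l2(0.03))(base.output)
x = layers.Dropout(0.40)(x)
x = layers.Dense(500, activation="relu",
                 kernel_regularizer=regularizers.l2(0.03))(x)
x = layers.Dropout(0.30)(x)
out = layers.Dense(4, activation="softmax")(x)  # mesioangular / distoangular / horizontal / vertical

model = Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# model.fit(train_rois, train_labels, batch_size=32,
#           callbacks=[EarlyStopping(patience=10), ReduceLROnPlateau(patience=5)])
```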
The training accuracy of the model was 95.93% and the validation accuracy was 80.54%. Although the validation accuracy appeared low, the loss curve showed that the model had converged nearly perfectly, indicating full utilisation of model and data capacity. The testing accuracy of the model was 92.17%, with a precision of 92.1%, a recall of 92.17%, and an AUC of 98.28% (Figs. 6 and 7).
Fig. 6 Training accuracy versus validation accuracy for the ResNet50 IMTMs classification model.
Fig. 7 Loss curve comparing training and validation loss for the ResNet50 IMTMs classification model.
Base model selection and analysis
Seven base models were compared for MTM detection. The VGG models performed best, with VGG16 achieving the highest accuracy (87.65%) and F1 score (89.23%) (Figs. 8 and 9). There was a notable gap between ResNet50, ResNet152, and the VGG models, with Xception, InceptionV3, and ResNet101 performing worst. VGG16 was therefore chosen as the base model for MTM detection. For IMTM classification using ROI-cropped images, ResNet50 excelled, achieving a testing accuracy of 92.17% and an F1 score of 91.7% (Figs. 10 and 11), followed closely by ResNet101. VGG19, InceptionV3, and Xception performed poorly; ResNet50 was therefore selected. A sketch of this comparison procedure is given after the figures below.
Fig. 8 Binary accuracy of the base models.
Fig. 9 The F1 score for the base models.
Fig. 10 The accuracy of base models for IMTM classification.
Fig. 11 The F1 score for base models for IMTM classification.
Discussion
Our deep learning framework utilized a fine-tuned VGG16 model to detect the presence of MTMs, followed by a YOLOv7 model that precisely localized the MTM, resulting in cropped IMTM images. Finally, a ResNet50 model was employed to classify the IMTMs. VGG16 was chosen as the base model for its superior performance, achieving an accuracy of 87.65% and an F1 score of 89.23%. In comparison, alternative models including VGG19, Xception, ResNet50, ResNet101, ResNet152, and InceptionV3 demonstrated lower accuracy and F1 scores. Notably, Xception and InceptionV3 yielded the lowest accuracies, while InceptionV3 and ResNet101 showed lower F1 scores. For IMTM classification, ResNet50 was selected due to its high accuracy of 92.17% and an F1 score of 91.7%.
Few authors have explored the application of deep learning models for the detection and classification of IMTMs. Earlier, researchers used a U-Net model to detect IMTMs, achieving an average Dice coefficient of 93.6% and a Jaccard index of 88.1% [25]. The Dice coefficient quantifies the concordance between the manually annotated IMTMs, serving as the ground truth, and the segmentation outcomes generated by the deep learning algorithm, whereas the Jaccard index, or IoU, assesses the precision of bounding box predictions, expressed as the proportion of overlap between the ground truth region and the predicted bounding box area [6]. Notably, Vinayahalingam et al. [25] utilized only 81 panoramic radiographs in that study.
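For reference, the two overlap metrics mentioned above can be computed as below, for binary segmentation masks (Dice) and axis-aligned bounding boxes (Jaccard/IoU). This is an illustrative definition, not code from the cited studies.

```python
# Dice coefficient for binary masks and Jaccard index (IoU) for bounding boxes.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def iou(box_a, box_b) -> float:
    """Jaccard index (IoU) for two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```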
Alternative deep learning algorithms frequently applied for IMTM classification include VGG-16 [32,33], ResNet-34 [34] and YOLOv3 [6]. Yoo et al [34] reported an accuracy of 90.23% for Winter’s classification on PRs using a limited dataset of 600 PRs. In a separate study, Sukegawa et al. [33] achieved 86.63% accuracy using the VGG-16 model for Winter’s classification and recorded accuracies of 85.41% and 88.95% for the Pell & Gregory classification of Class and Position of IMTMs, respectively, on a dataset of 1,330 images. Maruta et al. [32] also employed VGG-16, reporting an accuracy of 79.59% for Winter’s classification, 86.09% for Pell & Gregory Class, and 84.32% for Position.
Our findings demonstrate high-performance metrics in IMTM detection using the VGG16 model, achieving an accuracy of 93.51%, precision of 94.64%, recall of 89.47%, and an F1 score of 91.97%. This robust performance surpasses comparable studies and highlights the model's reliability. Notably, we achieved a balanced precision-recall ratio, underscoring its efficacy in reducing both false positives and false negatives, an essential attribute for clinical applicability. Additionally, our framework incorporated Keras OCR for text detection and removal, as well as the Sobel operator for edge detection, improving preprocessing to yield cleaner and more accurate input images. Furthermore, data augmentation techniques such as flipping, rotation, and adjustments in brightness, sharpness, and contrast were employed, contributing to increased model accuracy.
Previous studies used deep learning models such as Faster R-CNN with ResNet50, AlexNet, or VGG16 backbones, and YOLOv3, for IMTM detection [6,33–35]. Our use of YOLOv7, however, marks a notable improvement, offering more precise localization and enhanced accuracy. As a single-stage detector, YOLOv7 directly predicts bounding boxes and labels in one step, eliminating the region proposal and refinement stages of two-stage detectors. This efficiency is crucial for identifying IMTMs, as it increases detection likelihood in a single pass. YOLOv7's architectural upgrades, advanced feature aggregation, and refined backbone enable better feature extraction, improved handling of small objects, and enhanced overall performance.
Sukegawa et al. [33] assessed the single-stage YOLOv3 model against the two-stage Faster R-CNN with ResNet50, AlexNet, and VGG16 backbones [6], finding that YOLOv3 achieved 86% accuracy for classifying IMTMs by Winter's classification. They also compared a single-task model, which classifies each IMTM individually, with a Multi-3Task model, which simultaneously predicts classifications by Winter's and Pell & Gregory's systems. Although the Multi-3Task model reduces computational cost, its accuracy was lower, likely due to differences between the classifications: Winter's classification focuses on inclination and angulation, whereas Pell & Gregory's emphasizes the mandibular area, leading to decreased performance when combined in a multitask model [6,33].
For classification of IMTMs, various architectures have been utilized, including ResNet-34 [34], VGG-16 [32,33], and YOLOv3 [6]. Yoo et al. [34] performed classifications by class, position, and Winter's classification for the IMTMs, achieving accuracies of 78.1%, 82.0%, and 90.2%, respectively. Deep learning models applied by Sukegawa et al. [33] reached accuracies of 86% for Winter's classification and 82.03% and 78.91% for Pell & Gregory classifications based on space and position. In our study, ResNet50 was employed for IMTM classification, achieving a testing accuracy of 92.17%, with a precision of 92.1%, a recall of 92.17%, and an AUC of 98.28%, outperforming prior studies.
AI-driven deep learning can assist dentists in making swift, accurate diagnoses, reduce diagnostic errors from high workloads, and accelerate interpretation. It will be particularly beneficial in areas with a shortage of radiologists. In the future, AI-based detection, classification, and evaluation of IMTMs from PRs could enhance diagnostic accuracy and support dental practice. However, diagnosis remains the responsibility of the attending dentist, and AI cannot fully replace dentists or radiologists due to the risk of misdiagnosis, including underdiagnosis or overdiagnosis [35].
The study's limitations include a modest dataset size and the exclusion of certain Winter's classification categories, specifically buccolingual and inverted IMTMs, due to their rarity and limited representation in the dataset. Their inclusion could have caused significant class imbalance, affecting the model's generalization. In Winter's classification, the second molar typically serves as a reference point for classifying impacted teeth; however, we did not include the second molar in the annotation process for third molar angulation assessment. Future deep learning models should incorporate odontogenic pathologies linked to impacted teeth, the incidence of distal caries on second molars, and the relationship between the roots of impacted teeth and the inferior alveolar canal, and should also consider including the second molar in annotations.
Conclusion
Our results, derived from the proposed framework utilizing VGG16, YOLOv7, and ResNet50, demonstrated exceptional performance in the detection and classification of IMTMs on PRs. This contributes significantly to the advancement of automated systems for detecting and classifying MTMs from individual PRs and offers potential clinical value.
Funding
The authors report no funding for this article.
Conflicts of interest
The authors declare no competing interests with regards to the authorship and/or publication of this article.
Data availability statement
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Author contribution statement
Conception and design: SKV. Analysis and interpretation of the data: SKV, SP. Drafting of the article: SKV, SV, SP, SY, KI. Critical revision of the article for important intellectual content: SKV, SP. Final approval of the article: SKV, SY, KI. Provision of study materials or patients: SKV, SV, SP. Statistical expertise: SKV. Collection and assembly of data: SKV, SP.
Ethics approval
This study was approved by the Ethical Review Board at SEGI University (Ethics Approval Number: SEGIEC/SR/FOD/13/2023-2024) and conducted in accordance with the ethical standards of the Helsinki Declaration.
References
- Veerabhadrappa SK, Hesarghatta Ramamurthy P, Yadav S, Bin Zamzuri AT. Analysis of clinical characteristics and management of ectopic third molars in the mandibular jaw: a systematic review of clinical cases. Acta Odontol Scand 2021; 79: 514–522.
- Hashemipour MA, Tahmasbi-Arashlow M, Fahimi-Hanzaei F. Incidence of impacted mandibular and maxillary third molars: a radiographic study in a Southeast Iran population. Med Oral Patol Oral Cir Bucal 2013; 18: e140–e145.
- Mahdey HM, Arora S, Wei M. Prevalence and difficulty index associated with the 3rd mandibular molar impaction among Malaysian ethnicities: a clinico-radiographic study. J Clin Diagn Res 2015; 9: ZC65–ZC68.
- Alfadil L, Almajed E. Prevalence of impacted third molars and the reason for extraction in Saudi Arabia. Saudi Dent J 2020; 32: 262–268.
- Swift JQ, Nelson WJ. The nature of third molars: are third molars different than other teeth? Atlas Oral Maxillofac Surg Clin North Am 2012; 20: 159–162.
- Celik ME. Deep learning based detection tool for impacted mandibular third molar teeth. Diagnostics (Basel) 2022; 12: 942.
- Fukuda M, Ariji Y, Kise Y, Nozawa M, Kuwada C, Funakoshi T, et al. Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 130: 336–343.
- Jing Q, Dai X, Wang Z, Zhou Y, Shi Y, Yang S, et al. Fully automated deep learning model for detecting proximity of mandibular third molar root to inferior alveolar canal using panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137: 671–678.
- Kwon D, Ahn J, Kim CS, et al. A deep learning model based on concatenation approach to predict the time to extract a mandibular third molar tooth. BMC Oral Health 2022; 22: 571.
- Ding H, Wu J, Zhao W, Matinlinna JP, Burrow MF, Tsoi JKH. Artificial intelligence in dentistry—a review. Front Dent Med 2023; 4: 1085251.
- Agrawal P, Nikhade P. Artificial intelligence in dentistry: past, present, and future. Cureus 2022; 14: e27405.
- Leite AF, Van Gerven A, Willems H, et al. Artificial intelligence driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin Oral Investig 2021; 25: 2257–2267.
- Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent 2019; 91: 103226.
- Mohammad-Rahimi H, Motamedian SR, Rohban MH, Krois J, Uribe SE, Mahmoudinia E, et al. Deep learning for caries detection: a systematic review. J Dent 2022; 122: 104115.
- Sivari E, Senirkentli GB, Bostanci E, Guzel MS, Acici K, Asuroglu T. Deep learning in diagnosis of dental anomalies and diseases: a systematic review. Diagnostics (Basel) 2023; 13: 2512.
- López Alcolea J, Fernández Alfonso A, Cano Alonso R, Álvarez Vázquez A, Díaz Moreno A, García Castellanos D, et al. Diagnostic performance of artificial intelligence in chest radiographs referred from the emergency department. Diagnostics 2024; 14: 2592.
- Nabulsi Z, Sellergren A, Jamshy S. Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19. Sci Rep 2021; 11: 15523.
- Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, Brogi E. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med 2019; 25: 1301–1309.
- Chen H, Zhang K, Lyu P, et al. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep 2019; 9: 1–11.
- Tuzoff DV, Tuzova LN, Bornstein MM, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019; 48: 20180051.
- Kim C, Kim D, Jeong H, et al. Automatic tooth detection and numbering using a combination of a CNN and heuristic algorithm. Appl Sci 2020; 10: 5624.
- Parvez MF, Kota M, Syoji K. Optimization technique combined with deep learning method for teeth recognition in dental panoramic radiographs. Sci Rep 2020; 10: 19261.
- Hiraiwa T, Ariji Y, Fukuda M, et al. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol 2019; 48: 20180218.
- Zheng Z, Yan H, Setzer FC, et al. Anatomically constrained deep learning for automating dental CBCT segmentation and lesion detection. IEEE Trans Autom Sci Eng 2020; 18: 603–614.
- Vinayahalingam S, Xi T, Bergé S, et al. Automated detection of third molars and mandibular nerve by deep learning. Sci Rep 2019; 9: 9007.
- Schwendicke F, Chaurasia A, Arsiwala L, Lee JH, Elhennawy K, Jost-Brinkmann PG. Deep learning for cephalometric landmark detection: systematic review and meta-analysis. Clin Oral Investig 2021; 25: 4299–4309.
- Chang HJ, Lee SJ, Yong TH, et al. Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep 2020; 10: 7531.
- Li X, Zhao D, Xie J. Deep learning for classifying the stages of periodontitis on dental images: a systematic review and meta-analysis. BMC Oral Health 2023; 23: 1017.
- Zhang W, Li J, Li ZB, Li Z. Predicting postoperative facial swelling following impacted mandibular third molars extraction by using artificial neural networks evaluation. Sci Rep 2018; 8: 12281.
- Vranckx M, Van Gerven A, Willems H, et al. Artificial intelligence (AI)-driven molar angulation measurements to predict third molar eruption on panoramic radiographs. Int J Environ Res Public Health 2020; 17: 3716.
- Lee J, Park J, Moon SY, Lee K. Automated prediction of extraction difficulty and inferior alveolar nerve injury for mandibular third molar using a deep neural network. Appl Sci 2022; 12: 475.
- Maruta N, Morita KI, Harazono Y, et al. Automatic machine learning-based classification of mandibular third molar impaction status. J Oral Maxillofac Surg Med Pathol 2023; 35: 327–334.
- Sukegawa S, Tanaka F, Hara T. Deep learning model for analyzing the relationship between mandibular third molar and inferior alveolar nerve in panoramic radiography. Sci Rep 2022; 12: 16925.
- Yoo JH, Yeom HG, Shin WS, et al. Deep learning based prediction of extraction difficulty for mandibular third molars. Sci Rep 2021; 11: 1954.
- Faadiya AN, Widyaningrum R, Arindra PK, Diba SF. The diagnostic performance of impacted third molars in the mandible: a review of deep learning on panoramic radiographs. Saudi Dent J 2024; 36: 404–412.
Cite this article as: Veerabhadrappa S. K, Vengusamy S, Padarh S, Iyer K, Yadav S. 2025. Fully automated deep learning framework for detection and classification of impacted mandibular third molars in panoramic radiographs. J Oral Med Oral Surg. 31, 7: https://doi.org/10.1051/mbcb/2025008