Review Article

Implantology. 30 September 2020. 148-181
https://doi.org/10.32542/implantology.202015

Ⅰ. Introduction

The Go match between Google DeepMind's artificial intelligence (AI) AlphaGo and the legendary Go player Lee Sedol, which AlphaGo won 4:1, plainly demonstrates the progress of AI. At present, AI is widely used in various fields, including spam mail filters, search and ranking algorithms of web search engines, facial recognition algorithms of social networking services, and personalized curation algorithms for content or products (Fig. 1).1 Alibaba achieved daily sales of $38 billion during Singles' Day in 2019, 26% higher than the previous year, after launching an AI fashion assistant trained on hundreds of millions of clothing items. Amazon is automating most of its logistics except packing and operates the "Amazon Go" chain of checkout-free convenience stores in the US. In an Amazon Go store, payment is processed automatically when customers exit with their products, based on real-time location tracking using multiple cameras, weight sensors, and deep learning algorithms.

Fig. 1.

Hype cycle for artificial intelligence 2019. Reprinted from “Gartner Hype Cycle for Artificial Intelligence 2019” by Kenneth Brant, Jim Hare, Svetlana Sicular, Copyright 2019 by Gartner, Inc. and/or its affiliates. https://www.gartner.com/smarterwithgartner/top-trends-on-the-gartner-hype-cycle-forartificial-intelligence-2019/

AI is expected to have a huge impact on the healthcare industry. Currently, more than 40 deep learning algorithms have been approved as medical devices by the US Food and Drug Administration (FDA) and more than 10 by the Ministry of Food and Drug Safety (MFDS) in South Korea. For example, the fourth-generation Apple Watch and the AliveCor KardiaMobile, which incorporate deep learning algorithms, have been approved by the US FDA as over-the-counter medical devices for detecting atrial fibrillation. After being trained on hundreds of thousands of data samples, such algorithms detect abnormal findings with an accuracy comparable to that of humans.

Dentistry is a field that requires a high level of accuracy; it is expected that AI and deep learning algorithms will be introduced in the near future and provide great assistance to clinical practice. In South Korea, an algorithm that estimates bone age from a hand-wrist radiograph has been approved by the MFDS; however, not many other cases have been reported yet. Therefore, this study aims to examine the global trends of deep learning technologies applied to dentistry and to forecast the future of dentistry.

Ⅱ. Materials and Methods

1. Literature Search

To select literature on the application of deep learning algorithms in dentistry, we searched the MEDLINE and IEEE Xplore databases for papers in all languages published before October 24, 2019. The search strategy combined free-text and entry terms related to deep learning, neural networks, and dentistry (Table 1).

Table 1.

Search strategy

MEDLINE
1. "Deep learning" [tiab] OR "Neural network" [tiab] OR "Neural networks" [tiab] OR "Neural Net" [tiab] OR "Neural Nets" [tiab]
2. "Neural Networks (computer)" [Mesh]
3. 1 OR 2
4. "Dental" [tiab] OR "Dentistry" [tiab]
5. "Dentistry" [mesh] OR "Radiography, Dental" [mesh] OR "Dental implants" [mesh] OR "Stomatognathic Diseases" [mesh]
NOT "Pharyngeal Diseases" [mesh]
6. 4 OR 5
7. 3 AND 6

IEEE Xplore database
1. "All Metadata" : Deep learning OR "All Metadata" : Neural Network OR "All Metadata" : Neural Networks OR
"All Metadata" : Neural Net OR "All Metadata" : Neural Nets
2. "Mesh_Terms" : Neural networks (computer)
3. 1 OR 2
4. "All Metadata" : Dental OR "All Metadata" : Dentistry
5. "Mesh_Terms" : Dentistry OR "Mesh_Terms" : Radiography, dental OR "Mesh_Terms" : Dental implants OR
"Mesh_Terms" : Stomatognathic Diseases NOT "Mesh_Terms" : Pharyngeal Diseases
6. 4 OR 5
7. 3 AND 6

2. Selection of Papers

Papers were selected in two steps: first, based on their titles and abstracts, and second, by evaluating their full texts. The selection criteria were as follows: (1) papers with a clinical purpose rather than data mining or statistical analysis, and (2) papers based on deep neural networks such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), or generative adversarial networks (GANs), rather than other machine learning approaches within the AI field.

3. Data Extraction

From the selected studies, we extracted the authors, publication year, deep learning architecture used, input data, output data, and performance metrics of each algorithm. These data are examined in detail below.

1) Deep learning architectures

(1) CNNs

CNNs attracted attention after winning the ImageNet Challenge every year from 2012 to 2017. The challenge, held annually since 2010, is a large-scale image recognition contest in which algorithms trained on 1.2 million images classify 50,000 high-resolution color images into 1,000 categories (Fig. 2). In 2012, AlexNet2 lowered the top-5 error rate by about 10 percentage points to 16.4%, and SENet achieved 2.3% in 2017.

Fig. 2.

Algorithms that won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2010–2017. The top-5 error refers to the probability that all five classifications proposed by the algorithm for an image are wrong. The algorithms shown in blue are convolutional neural networks. Although VGGNet took second place in 2014, it is widely used in studies because of its concise structure. Adapted from "A fully-automated deep learning pipeline for cervical cancer classification" by Alyafeai Z., Ghouti L., Expert Systems with Applications 2019;141:112951. Copyright 2019 by Elsevier Ltd.

The origin of the CNN is the Neocognitron model,3 which applied a neurophysiological theory to an artificial neural network based on the principle that only certain neurons in the visual cortex are activated according to the shape of the target object.4 CNNs largely comprise three types of layers: convolutional layers, pooling layers, and fully connected layers. The convolutional layer creates a feature map by arranging the outputs of the convolution operation at each position of a square filter as the filter slides over the input data. Compared with a fully connected neural network, which converts images into one-dimensional vectors, this preserves the horizontal and vertical relationships among pixels. The pooling layer downsamples the feature map and summarizes its important information; the classification value is then output through the fully connected layer. For example, LeNet, an early CNN that classified hand-written numbers with an error rate of 0.95%, comprises three convolutional layers, three pooling layers, and one fully connected layer (Fig. 3).5

Fig. 3.

Architecture of LeNet-5. Reprinted from “Gradient-based learning applied to document recognition” by LeCun Y., Bottou L, Bengio Y. et al., Proceedings of the IEEE 1998;86;2278–2323. Copyright 1998 by IEEE.
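For illustration, a minimal LeNet-style network can be written in a few lines. The sketch below (PyTorch code not taken from any of the cited studies; the layer sizes are arbitrary and not those of the original LeNet-5) shows the convolutional, pooling, and fully connected stages described above.

```python
import torch
import torch.nn as nn

class MiniLeNet(nn.Module):
    """Minimal LeNet-style CNN: convolution -> pooling -> fully connected."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # convolutional layer: 28x28 -> 24x24 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                  # pooling layer: downsample to 12x12
            nn.Conv2d(6, 16, kernel_size=5),  # 12x12 -> 8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 8x8 -> 4x4
        )
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)               # flatten feature maps for the fully connected layer
        return self.classifier(x)

# Example: classify a batch of four 28x28 grayscale images (e.g., hand-written digits).
logits = MiniLeNet()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```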

Meanwhile, U-Net, which is used for region segmentation of medical images, does not have a fully connected layer. It comprises an encoder part, which extracts a feature map by convolution and pooling, and a decoder part, which restores the segmented images from the feature map by “up-convolution” (Fig. 4).6

Fig. 4.

Architecture of U-Net. Reprinted from "U-net: Convolutional networks for biomedical image segmentation" by Ronneberger O., Fischer P., Brox T., Lecture Notes in Computer Science 2015;9351:234-241. Copyright 2015 by Springer.
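As a rough sketch of this encoder-decoder idea, the toy network below (hypothetical PyTorch code with arbitrary channel counts, not the architecture of the cited paper) extracts a feature map, restores the spatial resolution by up-convolution, and concatenates a skip connection from the encoder.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net-style network: encoder (conv + pool), decoder (up-convolution), one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)   # "up-convolution"
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1))                   # one-channel segmentation map

    def forward(self, x):
        e = self.enc(x)                        # encoder feature map
        b = self.bottleneck(self.pool(e))      # downsampled representation
        d = self.up(b)                         # restore spatial resolution
        d = torch.cat([d, e], dim=1)           # skip connection from encoder to decoder
        return self.dec(d)

mask_logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64]), same size as the input image
```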

When detecting multiple objects in a single image, a region-based CNN (R-CNN) is used, which includes a region proposal network for the recognition of objects and their positions (Fig. 5).7 The region proposal network suggests anchor boxes of various ratios and sizes for the input image, and those that have a high intersection-over-union (IOU) with the previously trained images are selected.

Fig. 5.

Multiple object recognition in region-based convolutional neural network. Reprinted from “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” by Ren S., He, K., Girshick, R., Sun, J., IEEE Transactions on Pattern Analysis and Machine intelligence 2017;39(6):1137– 1149.

(2) RNNs

RNNs can analyze time-series data arranged in chronological sequence, such as voice signals. Therefore, they are used to predict indices such as stock prices, and for voice recognition, text translation, image captioning, and image or music generation. A video of former US president Barack Obama appearing to give a speech has been published, in which the algorithm synthesized lip motion synchronized with his original voice.8 Unlike a feed-forward neural network, which only passes signals from the input layer to the output layer, an RNN receives input values not only from the previous layer (Xt) but also from the recurrent neurons of the previous time step, transforms them, and delivers them to the next layer and to the recurrent neurons of the next time step (Fig. 6).

Fig. 6.

Structure of a recurrent neural network. The right side illustrates the structure unfolded over time. Reprinted from http://colah.github.io/posts/2015-08-Understanding-LSTMs/
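The recurrence can be illustrated with a minimal example. The snippet below (illustrative only, using PyTorch's built-in vanilla RNN; the sizes are arbitrary) passes a short sequence through a recurrent layer, whose hidden state carries information from previous time steps.

```python
import torch
import torch.nn as nn

# A vanilla RNN: at each time step t the cell combines the input X_t with the
# hidden state from the previous time step and passes both onward.
rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)

x = torch.randn(1, 10, 4)          # one sequence of 10 time steps with 4 features each
output, h_n = rnn(x)               # output: hidden state at every step; h_n: final hidden state
print(output.shape, h_n.shape)     # torch.Size([1, 10, 8]) torch.Size([1, 1, 8])
```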

When a pure RNN (also referred to as a "vanilla RNN") with the above characteristics is configured with deep layers, problems such as vanishing/exploding gradients and long-term dependency arise. To solve these problems, changes in the connections among cells (the units of neural networks), such as skip connections or leaky units, and cells with internal gates, such as long short-term memory cells9 and gated recurrent unit cells,10 have been proposed.

(3) GANs

GANs are unsupervised learning algorithms11 in which a neural network that generates an answer (the generator) competes with a neural network that evaluates it (the discriminator). With the aid of feedback from the discriminator, the fake answers proposed by the generator gradually become similar to the ground truth.
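The adversarial training can be sketched as two small networks updated alternately. The toy example below (assumed PyTorch code on made-up two-dimensional data, not from any selected study) shows the generator being pushed toward the target distribution by the discriminator's feedback.

```python
import torch
import torch.nn as nn

# Generator maps random noise to a fake sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, 2) + 3.0                 # toy "ground truth" distribution
    fake = G(torch.randn(32, 8))                    # generator proposes fake answers

    # Discriminator step: label real samples 1 and generated samples 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```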

2) Output data of deep learning

The results of deep learning image analysis can be largely divided into five types (Fig. 7).
(1) Classification: the objects in an image are assigned to the most likely of several predetermined options; one example is LeNet-5, which classified hand-written numbers into 10 classes, from 0 to 9.
(2) Object localization: the locations of objects in an image are indicated by bounding boxes; when localization and classification are performed simultaneously, this is called object detection.
(3) Semantic segmentation: the whole image is segmented by pixel-level classification without recognizing individual objects.
(4) Instance segmentation: each object is recognized and its outline is delineated in the image.
(5) Image reconstruction: examples include image quality enhancement by super-resolution or artifact reduction. In addition, a class activation map overlays a heat map on the input image, with colors reflecting each region's contribution to the classification; this allows visual confirmation of which areas of the image the deep learning algorithm used for its decision.

Fig. 7.

Computer vision tasks. Adapted from “https://www.slideshare.net/darian_f/introduction-to-theartificial-intelligence-and-computer-vision-revolution.”

3) Performance metrics of deep learning algorithms

The representative performance metrics for classification algorithms are accuracy, precision, recall, the F1 score, and the area under the receiver operating characteristic curve (AUC). All metrics except the AUC can be calculated from the confusion matrix, which shows whether the predicted classification matches the ground truth (Fig. 8).

Fig. 8.

Confusion matrix used to calculate accuracy (A) and to calculate precision, recall, and F1 score (B).

For example, when we evaluate the accuracy of a deep learning model that classifies images into three types, we can calculate the accuracy simply by dividing the number of cases which classify A as A, B as B, or C as C by the total number of cases.

$$\mathrm{Accuracy}=\frac{TP_A+TP_B+TP_C}{\mathrm{Total}}$$

(TP=True Positive, FP=False Positive, TN=True Negative, FN=False Negative)

Furthermore, the F1 score can be calculated by determining the precisions (PrecisionA, PrecisionB, and PrecisionC) and recalls (RecallA, RecallB, and RecallC) for classifying A, B, and C, and calculating the mean precision (Precisionmean) and mean recall (Recallmean), and then calculating the harmonic mean of these two.

$$\mathrm{Precision}_A=\frac{TP_A}{TP_A+FP_A},\quad \mathrm{Recall}_A=\frac{TP_A}{TP_A+FN_A},\quad \mathrm{F1\ score}=\frac{2}{\frac{1}{\mathrm{Precision}_{mean}}+\frac{1}{\mathrm{Recall}_{mean}}}\ (\text{harmonic mean})$$
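To illustrate how these metrics follow from the confusion matrix, the short example below (with invented counts for a three-class problem A, B, C) computes accuracy, per-class precision and recall, and the F1 score as the harmonic mean of the mean precision and mean recall.

```python
import numpy as np

# Rows = ground truth (A, B, C), columns = predicted class; the counts are made up.
cm = np.array([[50, 3, 2],
               [4, 45, 6],
               [1, 5, 44]])

accuracy = np.trace(cm) / cm.sum()                     # (TP_A + TP_B + TP_C) / total
precision = np.diag(cm) / cm.sum(axis=0)               # TP / (TP + FP), per class
recall = np.diag(cm) / cm.sum(axis=1)                  # TP / (TP + FN), per class

precision_mean, recall_mean = precision.mean(), recall.mean()
f1 = 2 / (1 / precision_mean + 1 / recall_mean)        # harmonic mean of mean precision and recall

print(f"accuracy={accuracy:.3f}, F1={f1:.3f}")
```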

The evaluation indices for object localization and segmentation include the IOU and the dice similarity coefficient (DSC), in addition to the above-mentioned metrics (Fig. 9). The IOU, also called the Jaccard index, is calculated by dividing the area of overlap between the ground truth and the predicted region by the area of their union. The dice similarity coefficient is calculated by dividing twice the overlapping area by the sum of the two areas.

Fig. 9.

Evaluation of object localization (A) and object segmentation (B).

$$IOU=\frac{|A\cap B|}{|A\cup B|}=\frac{TP}{TP+FP+FN},\quad DSC=\frac{2|A\cap B|}{|A|+|B|}=\frac{2TP}{2TP+FP+FN}$$
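Both indices can be computed directly from binary masks, as in the following illustrative sketch (the masks and the helper function are hypothetical examples).

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray):
    """IOU (Jaccard index) and dice similarity coefficient for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()   # overlapping area (TP)
    union = np.logical_or(pred, truth).sum()           # TP + FP + FN
    iou = intersection / union
    dsc = 2 * intersection / (pred.sum() + truth.sum())
    return iou, dsc

truth = np.zeros((64, 64), dtype=int); truth[16:48, 16:48] = 1   # ground truth square
pred = np.zeros((64, 64), dtype=int); pred[20:52, 20:52] = 1     # slightly shifted prediction
print(iou_and_dice(pred, truth))
```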

Ⅲ. Results

1. Literature Search and Selection of Studies

We found 340 papers by searching MEDLINE and the IEEE Xplore database, excluding 7 duplicates. After evaluating the titles and abstracts, we excluded 272 papers and evaluated the full texts of the remaining 68 papers. A total of 62 papers were included in the study (Fig. 10). The excluded papers and the reasons for their exclusion are outlined in Suppl. 1.

Fig. 10.

Flow chart showing literature search and selection.

2. Data Extraction

The characteristics of the selected studies and the extracted data are listed in Table 2.

Table 2.

Characteristics of included studies

Author
Year
Architecture Input Output Performance
metrics
1. Detection and segmentation of tooth and oral anatomy
1.1 Tooth localization and numbering
Eun
201674
Sliding window
technique with
CNN
7,662 backgrounds,
651 single-root teeth,
and 484 multi-root
teeth images cropped
from 500 periapical
radiographs
Multiple object
localization
(tooth)
Mean average best overlap=0.714
Classification
(background,
single-root tooth,
multi-root tooth)
None
Oktay
201775
Sliding window
technique with
CNN(AlexNet)
100 panoramic
radiographs
Classification
(anterior, premolar,
molar)
Accuracy(anterior)=0.9247,
Accuracy
(premolar)
=0.9174,
Accuracy(molar)
=0.9432
Miki
201776
CNN
(AlexNet)
Cropped CBCT slices
from 52 participants
(227×227 pixel)
Classification
(central incisor,
lateral incisor, canine,
1st premolar, 2nd
premolar, 1st molar,
2nd molar)
- Dataset D
(rotation &
gamma
correction)
Accuracy=0.888
Zhang
201814
Cascade network
= 3
CNNs(ImageNet
pre-trained
VGG16) + 1 logic
refine module
1,000 periapical
radiographs
Multiple object
localization
(tooth)
Precision=0.980,
Recall=0.983,
F1 score=0.981
Classification
(tooth numbering)
Precision=0.958,
Recall=0.961,
F1 score=0.959
Chen
201912
Faster
R-CNN with
inception ResNet v2
1,250 periapical
radiographs
[(300-500)×(300-400)] pixel
Multiple object
localization
(tooth)
Precision=0.900,
Recall=0.985,
IOU=0.91
Classification
(tooth numbering)
Precision=0.715,
Recall=0.782
Tuzoff
201913
Faster R-CNN
with
VGG16
1,574 panoramic
radiographs
Multiple object
localization
(tooth)
Precision=0.9945,
Recall=0.9941
Classification
(tooth numbering)
Specificity
=0.9994,
Recall=0.9800
Expert Multiple object
localization
(tooth)
Precision=0.9998,
Recall=0.9980
Classification
(tooth numbering)
Specificity
=0.9997,
Recall=0.9893
Koch
201977
6 modifications of
U-Net
1,500 panoramic
radiographs
Classification(1-4:
32, teeth with/without
restoration and
with/without
orthodontic
appliance, 5: implant,
6: >32 teeth , 7-10:
<32 teeth
with/without
restoration and
with/without
orthodontic appliance)
Ensemble of
U-Net
modification
1 and 4
Accuracy=0.952,
Precision=0.933,
Recall=0.944,
Specificity=0.961,
DSC=0.936
Mask R-CNN Accuracy=0.98,
Precision=0.94,
Recall=0.84,
Specificity=0.99,
DSC=0.88
Hiraiwa
201978
CNN(AlexNet) 760 cropped images
of mandibular 1st
molar from 400
panoramic
radiographs
CBCT images from
400 participants
(ground truth)
Prediction(number of
distal root of
mandibular 1st molar)
Accuracy=0.874,
Recall=0.773,
Specificity=0.971,
Precision=0.963,
NPV=0.818,
AUC=0.87,
Training time
=51 minutes,
Testing time
=9 seconds
CNN(GoogleNet) Accuracy=0.853,
Recall=0.742,
Specificity=0.959,
Precision=0.947,
NPV=0.800, AUC
=0.85,
Training time
=3 hours,
Testing time
=11 seconds
Expert radiologist Accuracy=0.812,
Recall=0.802,
Specificity=0.820
Precision=0.787,
NPV=0.834,
AUC=0.74
1.2. Tooth segmentation
Jader
201815
Mask R-CNN
with ResNet101
1,500 panoramic
radiographs
Instance
segmentation
None
Classification(1-4:
32, teeth with/without
restoration and
with/without
orthodontic
appliance, 5: implant,
6: >32 teeth , 7-10
<32 teeth
with/without
restoration and
with/without
orthodontic
appliance)
Accuracy=0.98,
Precision=0.94,
Recall=0.84,
F1 score=0.88,
Specificity=0.99
Vinayahalingam
201979
CNN(U-Net) 81 panoramic
radiographs
Instance
segmentation

(3rd molar)
DSC=0.936,
IOU=0.881,
Recall=0.947,
Specificity=0.999
Segmentation
(mandibular canal)
DSC=0.805,
IOU=0.687,
Recall=0.847,
Specificity=0.967
De Tobel
201721
CNN(ImageNet
pre-trained
AlexNet)
20 cropped images of
lower left 3rd molar
(240×390 pixel) from
20 panoramic
radiographs
Classification
(modified Demirjian's
staging, 0-9)
Mean accuracy
=0.51,
Mean absolute
difference
=0.6 stages
Merdietio
201916
CNN(AlexNet) 400 panoramic
radiographs
Segmentation
(lower left 3rd molar)
Accuracy=0.61
Classification
(Modified
Demirjian's staging,
0-9)
Accuracy=0.61
Mean absolute
difference
=0.53 stages
Cohen's 𝜅 linear
=0.84
Tian
201917
CNN
(sparse octree
structure,
voxel-based)
3D scanned images of
600 dental models
Classification
(tooth numbering)
Recall=0.9800,
Specificity=0.9994
Instance segmentation(tooth)
Accuracy=0.8981
Expert Classification
(tooth numbering)
Recall=0.9893,
Specificity=0.9997
Xu
201918
CNN 3D scanned mesh
images from 1,200
dental models
Instance segmentation
(tooth-gingiva,
tooth-tooth)
Accuracy(maxilla)=0.9906
Accuracy
(mandible)
=0.9879
1.3. Bone segmentation
Duong
201920
CNN(U-Net) 50 intraoral ultrasonic
images on 8 lower
incisors from piglets
(128×128 pixel)
Segmentation
(alveolar bone)
DSC=75.0±12.7%,
Recall
=77.8±13.2%,
Specificity
=99.4±0.8%
Minnema
201919
CNN(MS-D Net) CBCT from 20
patients
Segmentation(bone) DSC=0.87±0.06,
Mean absolute
deviation
=0.44 mm
CNN(U-Net) DSC=0.87±0.07,
Mean absolute
deviation
=0.43 mm
CNN(ResNet) DSC=0.86±0.05,
Mean absolute
deviation
=0.40 mm
Snake evolution
algorithm
DSC=0.78±0.07,
Mean absolute
deviation
=0.57 mm
2. Image quality enhancement
Du
201822
CNN Center-cropped
images from 5,166
panoramic
radiographs
(256×256 or 384×384
pixel)
Image reconstruction
(compensating
blurring from the
positioning error)
Model 1
Mean standard
error=0.339,
Mean absolute
error=0.749,
Maximum
absolute
error=1.499
Liang
201823
Hanning + filtered
back projection
CBCT from 3,872
patients
Image reconstruction
(noise and artifact
reduction)
Root mean
square
error=0.1180,
SSI=0.9670
Non-local mean
weighted least
square iterative
reconstruction
Root mean
square error
=0.0862,
SSI=0.9839
Network
reconstruction
Root mean
square error
=0.1015,
SSI=0.9800
Hu
201924
GAN Low-dose CBCT
images from 44
patients
(180 180°-scanned images, 120 360°-scanned images)
Image reconstruction
(noise and artifact
reduction)
PSNR(360°)
=32.657,
SSI(360°)=0.925,
Noise suppression=5.52±0.25,
Artifact correction=6.98±0.35,
Detail restoration=5.56±0.31,
Comprehensive
quality
=6.52±0.34,
Training time per
batch=0.691,
Testing time per
batch=0.183
CNN PSNR(360°)
=34.402,
SSI(360°)=0.934,
Noise
suppression
=8.95±0.36,
Artifact correction
=7.20±0.23,
Detail restoration
=5.35±0.28,
Comprehensive
quality
=7.54±0.32,
Training time per
batch=0.726,
Testing time
per batch
=0.183
m-WGAN Normal dose CBCT
images
(ground truth)
PSNR(360°)
=33.824,
SSI(360°)=0.975,
Noise
suppression
=8.20±0.35,
Artifact correction
=7.46±0.27,
Detail restoration=8.98±0.20,
Comprehensive
quality
=8.25±0.21,
Training time per
batch=0.798,
Testing time
per batch
=0.184
Hegazy
201925
CNN(U-Net) 1,000 projection
images (0-180°)
from 5 patients who
had different kinds of
metal implants and
dental fillings at
different tooth
positions
Segmentation
(metal)
Mean IOU=0.94
Mean DSC=0.96
Image reconstruction
(metal artifact
reduction)
Mean relative
error
=94.25%
Mean normalized absolute
difference
=93.25%
Mean sum of
square
difference
=91.83%
Conventional
segmentation
method
Original CBCT
images
Segmentation
(metal)
Mean IOU=0.75
Mean DSC=0.86
Image reconstruction
(metal artifact
reduction)
Mean relative
error
=91.71%
Mean normalized absolute
difference
=95.06%
Mean sum of
square
difference
=93.64%
Dinkla
201926
CNN(U-Net) 3D patch (48×48×48
voxel) from 34 head
and neck T2-weighted
MRI
CT(ground truth)
Image reconstruction
(synthetic computed
tomography)
Comparing
synthetic CT and
conventional CT, Mean
DSC=0.98±0.01,
Mean absolute
error
=75±9
HU
Mean error
=9±11 HU,
Mean voxel-wise
dose
differences
=-0.07±0.22%,
Mean gamma
pass
rate=95.6±2.9%.
Mean dose
difference=0.0±0.6%(body
volume),
-0.36±2.3%
(high-dose
volume)
Hatvani
201927
CBCT(original) CBCT of 13 teeth
(linewidth
resolution=500μm,
voxel size
=80×80×80μm3),
Micro-CT of 13 teeth
(linewidth
resolution=50μm,
voxel
size=40×40×40μm3)
(ground truth)
Image reconstruction (super-resolution) DSC=0.88
Mean of
difference -
Feret=176,
Mean of
difference -
Area=0.1139
LRTV DSC=0.89,
Time=6988
(maxillary
anterior teeth),
9059(mandibular premolar),
10301 seconds
(mandibular
molar), Mean of
difference -
Feret=113, Mean of
difference - Area
=0.1395
TF-SISR DSC=0.90,
Time=71
(maxillary
anterior
teeth),
92(mandibular
premolar),
104 seconds
(mandibular
molar),
Mean of
difference - Feret
=95,
Mean of
difference - Area
=0.0987
Hatvani 201928 CBCT(original) 5,680 CBCT slices of
13 teeth
5,680 micro-CT slices
of 13 teeth
(ground truth)
Image reconstruction (super-resolution) DSC=0.8891,
Difference of the endodontic
volumes
(CBCT- μCT)
=12.39%
SRR:l2 DSC=0.8852
Difference of the endodontic
volumes
(SRR:l2- μCT)
=12.25%
SRR:TV DSC=0.8913
Difference of the endodontic
volumes
(SRR:l2- μCT)
=12.40%
CNN
(U-Net)
DSC=0.8998
Difference of the endodontic
volumes
(U-net- μCT)
=10.12%
CNN
(Subpixel
network)
DSC=0.9101
Difference of the endodontic
volumes
(Subpixel
network-μCT)
=6.07%
3. Disease detection
3.1. Detection of dental caries
Kumar
201829
CNN(U-Net) >6,000 bitewing
radiographs
Instance
segmentation

(dental caries)
Recall=0.70, Precision=0.53,
F1 score=0.603
CNN(U-Net) +
incremental
example mining
Recall=0.73, Precision=0.53,
F1 score=0.614
CNN
(U-Net) + hard
example mining
Recall=0.69, Precision=0.46,
F1 score=0.552
Lee
201930
CNN
(ImageNet
pre-trained
GoogleNet
Inception v3)
3,000 periapical
radiographs
Classification
(caries, non-caries)
- Accuracy=82.0,
Recall=81.0,
Specificity=83.0,
Precision=82.7,
NPV=81.4
in premolar and molar area
Casalegno
201931
CNN
(encoding path of
U-Net replaced
with
the ImageNet
Pre-trained
VGG16)
217 near-infrared
transillumination
images (256×320 pixel)
Semantic segmentation
(background, enamel,
dentin, proximal
caries, occlusal caries
Mean IOU=0.727,
IOU(proximal
caries)=0.495,
IOU(occlusal
caries)=0.490,
AUC(proximal
caries)=0.856,
AUC(occlusal
caries)=0.836
Moutselos
201932
Mask R-CNN with
ResNet101
79 clinical photos of
occlusal surface
Classification in superpixel level
(international caries detection and assessment system
0: sound tooth surface,
1: first visual change in enamel,
2: distinct visual change in enamel,
3: localized enamel breakdown,
4: underlying dark shadow from dentin,
5: distinct cavity with visible dentine,
6: extensive distinct cavity with visible dentin)
F1 score(mc)
=0.596,
F1 score(cpc)
=0.625,
F1 score(wc)
=0.684
3 indexes for the
reduction
back to
superpixels
(mc: most
common, cpc:
centroid pixel
class, wc:
worst class)
Classification in whole image level F1 score(mc)
=0.889,
F1 score (cpc)
=0.778,
F1 score (wc)
=0.667
Liu
201934
Mask R-CNN with
ResNet
12,600 clinical
photos(1 mega pixel)
Multiple object localization, classification
(dental caries, dental
fluorosis,
periodontitis
crack tooth, dental
plaque, dental
calculus, tooth loss)
Accuracy: 0.875
(tooth
fluorosis)-1
(tooth loss),
increase of the
number of
treated patients
by 18.4%,
mean diagnosis
time reduces
by 37.5% for
each patient
3.2. Detection of dental plaque and periodontal disease
Yauney
201780
CNN
(truncated
version
of VGG16)
47 (CD database) and 49 (RD database) pairs of white-light-mode and plaque-mode intraoral photos
(512×384 pixel)
Semantic segmentation
(plaque, non-plaque)
Accuracy=0.8718,
AUC=0.8720
Bezruk
201735
CNN Malondialdehyde
concentration,
Glutathione
concentration,
Sulcus bleeding index
Classification
(normal, gingivitis)
- Precision=0.80,
Recall=0.78,
F1 score=0.78
Aberin
201836
CNN(AlexNet) 1,000 grayscale
images with 600
magnification
(227×227 pixel)
Classification
(healthy, unhealthy)
Accuracy=0.76,
Mean square
error
=0.05,
Precision=0.68,
Recall=0.98
Joo
201937
CNN 1,843 clinical photos
of periodontal tissue
Classification
(healthy periodontal
status, mild
periodontitis, severe
periodontitis, not
periodontal image)
- Accuracy(healthy)=0.83,
Accuracy(mild
periodontitis)
=0.74,
Accuracy(severe
periodontitis)
=0.70,
Accuracy(not periodontal
image)=0.94
Krois
201938
CNN 2,001 cropped
panoramic
radiographs
Classification
(<20% bone loss,
≥20% bone loss)
- Accuracy=0.81,
Precision=0.76,
Recall=0.81,
Specificity=0.81,
NPV=0.85,
AUC=0.89,
F1 score=0.78
Dentist Accuracy=0.76,
Precision=0.68,
Recall=0.92,
Specificity=0.63,
NPV=0.90,
AUC=0.77,
F1 score=0.78
3.3. Detection of periapical diseases
Prajapati
201733
CNN
(2012 ImageNet
pre-trained
VGG16)
251 periapical
radiographs
(500×748 pixel)
Classification
(dental caries,
periapical infection,
periodontitis)
- Accuracy=0.8846
Ekert
201944
CNN 1,331 cropped
panoramic
radiographs of 85
patients (64×64 pixel)
Classification
(no attachment loss,
widened periodontal
ligament, clearly
detectable lesion)
Recall
=0.74±0.19,
Specificity
=0.94±0.04
Precision
=0.67±0.14,
NPV=0.95±0.04
AUC=0.95±0.02
In all teeth,
majority(6)
reference
test condition
Yang
201882
CNN(GoogLeNet
Inception v3)
196 pairs of periapical
radiograph before and
after the treatments
(96×192 pixel)
Classification
(getting better,
getting worse, have
no explicit change)
Precision=0.537,
Recall=0.490, F1
score=0.517
3.4. Detection of (pre)cancerous lesion
Uthoff
200839
CNN 170 pairs of
Autofluorescence
image and white light
image
Classification
(cancerous and
pre-cancerous lesion,
not suspicious)
Precision
=0.8767,
Recall
=0.8500,
Specificity
=0.8875,
NPV=0.8549,
AUC=0.908;
Remote specialist Precision
=0.9494,
Recall
=0.9259,
Specificity
=0.8667,
NPV=0.8125
Aubreville
201740
CNN
+Probability
fusion
165,774 patches
extracted from 7,894
grayscale confocal
laser endomicroscopy
video frames of the
inner lower labium,
the upper alveolar
ridge, the hard palate
(80×80 pixel)
Classification
(normal, cancerous)
Accuracy=0.883,
Recall=0.866,
Specificity=0.900,
AUC=0.955
Forslid
201741
CNN
(VGG16,
ResNet18)
Oral dataset 1
(15 microscopic cell
images taken at ×20
magnification)
Classification
(healthy, tumor)
VGG16 ResNet18
Accuracy=80.66±3.00,
Precision=75.04±7.68,
Recall=80.68±3.05,
F1 score=77.68±5.28
Accuracy
=78.34±2.37,
Precision
=72.48±4.46,
Recall
=79.00±3.37,
F1 score
75.51±3.17
Oral dataset 2
(15 microscopic cell
images taken at ×20
magnification)
Accuracy=80.83±2.55,
Precision=82.41±2.55,
Recall=79.79±3.75,
F1 score=81.07±3.17
Accuracy
=82.39±2.05,
Precision
=82.45±2.38,
Recall
=82.58±1.92,
F1 score
82.51±2.15
CerviSCAN dataset
(12,043 microscopic
cell images taken at
×40 magnification)
Accuracy=84.20±0.86,
Precision=84.35±0.97,
Recall=84.20±0.86,
F1 score=84.28±0.91
Accuracy
=84.45±0.46,
Precision
=84.64±0.38,
Recall
=84.45±0.47,
F1 score
84.28±0.91
Herlev dataset
(917 microscopic cell
images)
Accuracy=86.56±3.18,
Precision=85.94±6.98,
Recall=79.04±3.81,
F1 score=82.16±3.85
Accuracy
=86.45±3.81,
Precision
=82.45±5.11,
Recall
=84.45±2.16,
F1 score
83.36±3.65
Das
201842
CNN 1,000,000 patches
from 80 microscopic
images taken at ×50
magnification
(2,048×1,536 pixel)
Semantic segmentation
(keratin, epithelial,
subepithelial,
background)
Accuracy
(epithelial)
=0.984,
Recall
(epithelial)
=0.978,
IOU(epithelial)
=0.906,
DSC(epithelial)
=0.950,
Accuracy(keratin)
=0.981,
IOU(keratin)
=0.780,
DSC(keratin)
=0.752
Multiple object localization
(keratin pearl)
Accuracy=0.969
Jeyaraj
201943
SVM 1,300 image patches
from 3 databases
(BioGPS data
portal=100, TCIA
Archive=500, GDC
Dataset=700)
Classification
(normal, benign
tumor, cancerous
malignant)
- Accuracy=0.82,
Specificity=0.86,
Recall=0.76,
AUC=0.725
DBN Accuracy=0.85,
Specificity=0.89,
Recall=0.82,
AUC=0.85
CNN Accuracy=0.91,
Specificity=0.94,
Recall=0.91,
AUC=0.965
Song
201983
Central attention
residual
network in
CNN
48 tissue microarray
core images
(3,300×3,300 pixel)
Classification
(tumor, stroma)
F1 score(RGB)
=86.31%
DSC(RGB)
=82.16%
3.5. Detection of other disease
Murata
201945
CNN(AlexNet) 800 cropped
panoramic
radiographs of
maxillary sinus
(200×200 pixel)
Classification
(healthy, inflamed)
Accuracy=0.875,
Recall=0.867,
Specificity=0.883,
Precision=0.881,
NPV=0.869,
AUC=0.875
Radiologist Accuracy=0.896,
Recall=0.900,
Specificity=0.892,
Precision=0.893,
NPV=0.899,
AUC=0.896
Dental residents Accuracy=0.767,
Recall=0.783,
Specificity=0.750,
Precision=0.758,
NPV=0.776,
AUC=0.767
De Dumast
201846
NN 293 condyle images
from reconstructed
CBCT
Classification(close
to normal[control],
close to normal
[osteoarthritis],
degeneration
1,2,3,4-5)
In confusion
matrix,
Accuracy=0.441,
Accuracy
(including adjacent
1 cell around
true positive
cell)=0.912
Kise
201947
CNN(AlexNet) 500 cropped CT
images of parotid
gland in 25 patients
Classification
(normal, Sjögren
syndrome)
Accuracy=0.96,
Recall=1,
Specificity=0.92,
AUC=0.960
Experienced
radiologist
Accuracy=0.983,
Recall=0.993,
Specificity=0.779,
AUC=0.996
Inexperienced
radiologist
Accuracy=0.835,
Recall=0.779,
Specificity=0.892,
AUC=0.997
Chu
201848
Octuplet Siamese network with
2-stage fine
tuning
864 cropped
panoramic
radiographs
(50×50 pixel)
Classification
(normal, osteoporosis)
Accuracy=0.898
Kats
201949
Faster R-CNN
(ResNet101)
65 panoramic
radiographs
Multiple object localization
(atherosclerotic
carotid plaques)
Accuracy=0.83,
Recall=0.75
Specificity=0.80,
AUC=0.83
4. Evaluation of facial esthetics, detection of cephalometric landmarks
Murata
201750
CNN
(ImageNet
pre-trained
VGG19) + LSTM
352 patients' images
(304×224 pixel)
Classification
(mouth, jaw, face; example outputs: mouth/left, jaw/left, face/remarkable distortion)
Accuracy=0.648
Multiple CNNs Accuracy=0.630
Patcas
201951
CNN
(Internet Movie
database-
Wikipedia
pre-trained,
APPA-REAL and
Chicago Face
Dataset
fine-tuned
VGG16)
2,164 facial images of
pre-/post-operation
(Le Fort I osteotomy,
sagittal split ramus
osteotomy of the
mandible, chin
osteotomy, other
osteotomies, 600 dpi)
Prediction apparent age,
facial attractiveness
(0-100, 0: extremely
unattractive, 100:
extremely attractive)
Mean difference
between
apparent age and actual age
(pre-operation)
=1.75 year
Mean difference
between
apparent age and actual age
(post-operation)
=0.82 years
Facial
attractiveness
was
increased at
74.7% of
patients
Leonardi
201052
Cellular NN
(unsupervised
learning)
40 lateral
cephalometric
radiographs,
22 landmarks
Image reconstruction
(emboss
enhancement)
Euclidean
distance mean
errors: higher for the
embossed images than for the
unfiltered radiographs
Accuracy of the
cephalometric
landmark
detection
improved on the
embossed
radiograph but
only for a few
points,
without
statistical
significance
Qian
201953
Faster R-CNN
with
VOC 2012
pre-trained
ResNet50
400 lateral
cephalometric
radiographs
Multiple object localization
(19 landmarks)
Accuracy(test 1)
=0.825,
Accuracy(test 2)
=0.724
A landmark was classified as "accurate" only if the distance between the detected landmark and its ground truth was less than 2 mm.
Torosdagli
201954
CNN
(U-Net, deep
geodesic
learning),
LSTM
25,600 slices from 50
3D reconstructed
CBCT
Segmentation
(mandible)
DSC=0.9386,
Hausdorff
distance=5.47
Multiple object localization
(landmarks on
mandible)
Mean error(mm):
coronoid process(left)=0,
coronoid
process(right)
=0.45,
condyle(left)
=0.33,
condyle(right)
=0.07,
menton=0.03,
gnathion=0.49,
pogonion=1.54,
B-point=0.33,
infradentale=0.52
5. Fabrication of prosthesis
Shen
201984
CNN(U-Net) 77 single crown
models
Prediction
(deformation,
translation, scaling
down, rotation)
F1 score
(deformation)
=0.9614,
F1 score
(scaling down)
=0.9408,
F1 score
(translation)
=0.9386,
F1 score
(rotation)
=0.9387
Image reconstruction
(compensating
deformation,
translation, scaling
down, rotation)
F1 score
(deformation)
=0.9699,
F1 score
(scaling down)
=0.9488,
F1 score
(translation)
=0.9517,
F1 score
(rotation)
=0.9417
Shen
201955
CNN(U-Net) 28,433 slices from 71
3D single crown
models
(256×256 pixel)
Prediction
(deformation, scaling
down, rotation)
Recall
(deformation)
=1.000,
Recall
(scaling down)
=0.984,
Recall
(rotation)
=0.982,
Precision
(deformation)
=1.000,
Precision
(scaling down)
=0.993,
Precision
(rotation)
=0.978,
F1 score
(deformation)
=1.000,
F1 score
(scaling down)
=0.989,
F1 score
(rotation)
=0.980
Image reconstruction
(compensating
deformation, scaling
down, rotation)
Recall
(deformation)
=1.000,
Recall
(scaling down)
=0.993,
Recall
(rotation)
=0.983,
Precision
(deformation)
=1.000,
Precision
(scaling down)
=0.991,
Precision
(rotation)
=0.980,
F1 score
(deformation)
=1.000,
F1 score
(scaling down)
=0.992,
F1 score
(rotation)
=0.982
Yamaguchi
201956
CNN 8,640 3D scan images
of study model
(100×100 pixel)
Classification
(trouble-free,
debonding)
Accuracy=0.985,
Precision=0.970,
Recall=1,
F1 score=0.985,
AUC=0.098
Zhang
201957
CNN
(sparse octree
structure,
voxel-based)
From 380 preparation
models:
dataset A: no rotation
Segmentation (preparation line) Accuracy=0.9062,
Recall=0.9318,
Specificity=0.9458
dataset B: 60°×5
rotations
Accuracy=0.9558,
Recall=0.9590,
Specificity=0.9521
dataset C: 30°×12
rotations
Accuracy=0.9743,
Recall=0.9759,
Specificity=0.9732
Zhao
201958
Convolutional
auto-encoder
39,424 models
augmented by rotating
77 dental crown
models
Prediction(nonlinear
deformation)
F1 score
(nonlinear
deformation,
resolution
64×64×64)
=0.9684
Image reconstruction
(compensation)
F1 score
(nonlinear
deformation,
resolution
64×64×64)
=0.9755
F1 score before
compensation
(0.7782) was
increased after
compensation
(0.9530)
6. Others
Milošević
201959
CNN(ImageNet
pre-trained
VGG16)
4,000 panoramic
radiographs(female=
58.8%, male=41.2%)
Classification
(female, male)
Accuracy=96.87±0.96%
(filter=256,
unit=128,
without attention
mechanism)
Ilić
201960
CNN(Pre-trained
VGG16)
4,155 panoramic
radiographs
(512×512 pixel)
Classification
(female, male)
Accuracy=94.3% (over 80
years=50%),
Testing time
=0.018 seconds
Alarifi
201885
Radial basis NN Patient self-behavior,
health conditions,
attitude information
Prediction
(implant success)
Recall=0.8478,
Specificity=0.8678
General regression
NN
Recall=0.9216,
Specificity=0.9351
Associative NN Recall=0.9417,
Specificity=0.9482
Memetic search
optimization along
with genetic scale
RNN
Recall=0.9763,
Specificity=0.9828
Ali
201961
R-CNN(SSD-Mo
bileNet)
631 instrument
images or video
frames
(16:9 ratio, 0.5 or 0.67
megapixel)
Multiple object localization,
classification

(dental instruments)
Accuracy=0.87,
Precision=0.99,
Recall=1,
Specificity=0.99
Luo
201962
k-nearest
neighbor
3D motion signals
caused by the hand
movement using
wearable devices in
10 participants
Classification (1-15) Accuracy=0.472
Support vector
machine
Accuracy=0.391
Decision tree Accuracy=0.394
RNN-based LSTM Accuracy=0.973

3D: three-dimensional; AUC: area under curve; CAD/CAM: computer-aided design/computer-aided manufacturing; CBCT: cone-beam computed tomography; CNN: convolutional neural network; CT: computed tomography; DSC: dice similarity coefficient; GAN: generative adversarial network; HU: Hounsfield unit; ICDAS: international caries detection and assessment system; IOU: intersection-over-union; LSTM: long short-term memory models; LRTV: low-rank and total variation regularizations; m-WGAN: modified-Wasserstein generative adversarial network; NN: neural network; NPV: negative predictive value; PSNR: peak signal-to-noise ratio; R-CNN: region-based convolutional neural network; SSI: structure similarity index; SRR: super-resolution method; TF-SISR: tensor factorization with 3D single image super-resolution.

*: refers to the data unanimously classified by 6 specialists

: sensitivity and positive predictive value were replaced by recall and precision, respectively.

1) Detection of teeth and adjacent anatomical structures

The selected studies in this category can be subdivided into three groups: tooth localization and numbering, tooth segmentation, and bone segmentation. For tooth localization and numbering, panoramic radiographs and cone-beam computed tomography (CBCT) images were used. For multiple object localization of teeth, the precision was reported to be in the range of 0.90012 to 0.99513 and the recall in the range of 0.98314 to 0.99413. The precision of tooth numbering was reported to be in the range of 0.71512 to 0.95814 and the recall in the range of 0.78212 to 0.98013.

Tooth segmentation has been attempted for teeth in panoramic radiographs,15 third molars,16 and dental models.17, 18 Algorithms for segmenting bone from CBCT19 and oral ultrasound images20 have also been proposed. Two studies that classified the developmental stages of third molars reported accuracies of 0.5121 and 0.61.16

2) Image quality enhancement

For image quality enhancement, studies on the reduction of blur, noise, and metal artifacts, as well as on super-resolution, have been conducted. Du et al. corrected blur in the center of images using an algorithm trained on 5,166 panoramic radiographs taken at positions within ±20 mm of the ideal position and labeled with the misalignment length (mm), and they reported a maximum absolute error below 1.5 mm.22 Liang et al. reconstructed computed tomography (CT) images using three algorithms and reported improved root mean squared error and structure similarity index compared with the values measured in the original CT images.23 Hu compared a GAN, a CNN, and a modified GAN using the Wasserstein distance, and reported that the last was the most effective for noise reduction in CBCT images.24 Hegazy reported that a modified U-Net algorithm improved the relative error by 5.7% and the normalized absolute difference by 8.2% compared with the conventional method.25 Dinkla et al. introduced a U-Net-based algorithm that synthesizes CT images with no metal artifacts from T2-weighted magnetic resonance imaging (MRI).26 Hatvani et al. reported a dice similarity coefficient of 0.90 and a mean difference of 9.87% when comparing the cross-sectional area of the root canal system in CBCT processed with a tensor factorization super-resolution algorithm against micro-CT.27 They also attempted super-resolution using a subpixel network, reporting a dice similarity coefficient of 0.91 and a mean root canal volume difference of 6.07% compared with micro-CT.28

3) Disease detection

Target diseases include dental caries,29, 30, 31, 32, 33, 34 periodontal disease,34, 35, 36, 37, 38 precancerous lesions,39, 40, 41, 42, 43 periapical diseases,33, 44 dental fluorosis,34 maxillary sinusitis,45 osteoarthritis,46 Sjögren's syndrome,47 and osteoporosis.48 An algorithm for detecting atherosclerotic carotid plaques in panoramic radiographs has also been suggested.49

4) Evaluation of facial esthetics and localization of cephalometric landmarks

Algorithms for evaluating various images, such as facial photographs, lateral cephalometric radiographs, and CBCT images, have been proposed. Murata et al. developed an algorithm for classifying asymmetry and/or discrepancy of the crow's feet, nose, lips, and chin from an input frontal face image and reported a mean accuracy of 64.8%.50 Patcas et al. trained a CNN-based algorithm to estimate apparent age and a facial attractiveness score (0–100 points) using various facial image datasets, and they reported an increased facial attractiveness score (mean difference = 1.22, 95% confidence interval: 0.81, 1.63) and a decreased apparent age (mean difference = –0.93, 95% confidence interval: –1.50, –0.36) after orthognathic surgery.51 Leonardi synthesized embossed images from lateral cephalometric radiographs for enhanced visibility but reported no significant improvement in reading accuracy.52 Qian et al. proposed a faster R-CNN method for cephalometric landmark detection; after training on 150 images with 19 types of cephalometric landmarks manually localized by an expert orthodontist, the method localized 72.4–82.5% of landmarks within 2 mm of the orthodontist's positions.53 The cephalometric landmarks on the mandible (menton, gnathion, pogonion, B-point, infradentale, coronoid process, condyle head) were localized after segmenting the mandible on 3D reconstructed CBCT, and the mean error was reported to be less than 1 mm except for the pogonion.54

5) Fabrication of prosthesis

Shen et al. proposed an algorithm for predicting and compensating for errors in the cross sections of single crowns additively manufactured with a 3D printer, and they reported improved F1 scores (translation: 0.6894 → 0.9995; scaling down: 0.7188 → 0.9893; rotation: 0.8906 → 0.9671).55 Yamaguchi developed an algorithm that evaluates the scan data of an abutment preparation model and classifies models as having a high possibility of debonding or a low possibility (trouble-free), with a reported accuracy of 98.5%.56 In addition, an algorithm that predicts the crown margin on a 3D-scanned abutment preparation model57 and an algorithm that predicts the nonlinear deformation of 3D-printed crowns from the scan data of an abutment preparation model58 have been suggested.

6) Others

Milošević and Ilić designed algorithms for determining sex from panoramic radiographs and reported accuracies of 96.87±0.96%59 and 94.3%,60 respectively. Ali et al. reported an algorithm for localizing and classifying dental instruments in images.61 Luo et al. collected 3D motion signals in daily life using wearable devices and tested four algorithms for classifying tooth brushing time and 15 tooth brushing motions; for classifying tooth brushing motions, they reported a mean classification accuracy of 97.3% with the RNN-based algorithm.62

Ⅳ. Discussion

1. Principle of Machine Learning

The basic approach of machine learning is to define a loss function for the difference between the predicted value (ŷ) and the ground truth (y) and to find the global minimum of the loss function, based on the fact that accuracy improves as the loss decreases (Fig. 11).63 One representative example is the least squares method, which squares the difference between each predicted value and the ground truth and minimizes the sum of these squared differences. The loss function is then the mean of the squared differences between the predicted values and the ground truth.

$$\mathrm{loss}=\frac{1}{m}\sum_{i=1}^{m}\left(y_i-\hat{y}_i\right)^2$$

A basic algorithm for finding the global minimum of the loss function is gradient descent: a smaller input value (x) is substituted if the gradient of the loss function is positive, and a larger input value is substituted if the gradient is negative, so that the value converges toward the minimum.

Fig. 11.

Schematic diagram showing the training process of machine learning. Adapted from “Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek” by Chollet F., Copyright 2018 by MITP-Verlags GmbH & Co. KG.
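As a toy illustration of this training process (not taken from the cited reference; the data and learning rate are made up), the sketch below fits a single slope parameter by gradient descent on the mean squared error loss defined above.

```python
import numpy as np

# Toy data generated from y = 2x; we recover the slope w by gradient descent.
x = np.linspace(0, 1, 50)
y = 2.0 * x

w, lr = 0.0, 0.5
for _ in range(200):
    y_hat = w * x
    loss = np.mean((y - y_hat) ** 2)          # loss function: mean squared error
    grad = np.mean(2 * (y_hat - y) * x)       # d(loss)/dw
    w -= lr * grad                            # move against the gradient toward the minimum

print(round(w, 3))  # approximately 2.0
```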

2. Development of Deep Learning

Deep learning is a field of machine learning (Fig. 12) and refers to deep artificial neural networks with two or more hidden layers in addition to the output layer. An artificial neural network is an interconnected group of artificial neurons that performs computations by imitating the structure of the brain. The field started with a simple neural network model based on propositional logic,64 after which artificial neurons called perceptrons65 were introduced. However, the exclusive OR operation could not be performed with a single-layer perceptron,66 and this problem was solved by the backpropagation algorithm for training multi-layer perceptrons.67, 68 The performance of current deep learning has been improved through new activation functions69 that mitigate the vanishing gradient problem in deep layers, optimized weight initialization,70, 71, 72 and dropout2 for preventing overfitting.

Fig. 12.

Diagram showing the relationship between artificial intelligence, machine learning, and deep learning. Adapted from “Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek” by Chollet F., Copyrights 2018 by MITP-Verlags GmbH & Co. KG.

3. Characteristics and Applicability of the Selected Studies

For the application of deep learning to dentistry, algorithms that detect multiple objects are required because, by the nature of dental anatomy, multiple teeth appear in a single image. For object localization, early research used the sliding window technique, which moves windows of various ratios and sizes across the image in small increments. Object localization later became faster with the introduction of the region proposal network embedded in the neural network. Reported tooth classification performance differs slightly across studies. To maintain high accuracy even in complex situations such as prostheses, tooth defects, or mixed dentition, data covering these situations need to be included in training in sufficient quantity.

If an automatic tooth numbering algorithm is combined with classification algorithms for dental caries, periodontal disease, and periapical disease, it could instantly provide useful clinical information by detecting abnormalities of each tooth. This can also be highly beneficial in forensic dentistry. Although sex determination using panoramic radiographs shows lower accuracy (94.3%,60 96.87%59) than using the total skeleton (100%), it has the advantage that people for whom only partial skeletal remains are available can be analyzed and their dental records compared. Analyzing the developmental stage of third molars can serve as one method of age estimation, alongside analysis of hand-wrist radiographs or of cervical vertebral maturation.

Studies on the image quality enhancement of dental images have mainly been conducted for CBCT, and development in three directions is anticipated. First, removing the noise caused by scattering of low-energy X-rays, which is the main problem of low-dose CT, could increase the number of allowable scans and lower the radiation exposure of patients while maintaining image quality similar to that of normal-dose CT. Second, reducing metal artifacts could be considerably helpful when reading the CT images of patients who have several implants or fixed metal prostheses. As an extreme example, Dinkla et al. proposed a CNN-based method that synthesizes CT-like images, free of radiation exposure and metal artifacts, from T2-weighted MRI. Third, high-resolution images can be obtained by applying super-resolution algorithms to conventional CBCT images. These algorithms were trained with micro-CT, which is used only experimentally because of its very high radiation exposure despite its micrometer-scale resolution. Super-resolution CBCT showed root canal volume and length errors similar to those of micro-CT, which is expected to greatly assist the diagnosis of teeth with complex root canal systems.

In the selected studies, deep learning has been used to detect various diseases and risk factors in dentistry, such as dental plaque, dental caries, periodontal disease, and periapical disease, as well as diseases of adjacent anatomical structures visible in dental images, such as maxillary sinusitis, osteoarthritis of the temporomandibular joint, Sjögren's syndrome of the parotid gland, and osteoporosis. Kats et al. reported 83% accuracy in detecting atherosclerotic carotid plaques in panoramic radiographs using an R-CNN.49 This algorithm is expected to help diagnose and treat ischemic brain disease, because such plaques are known to be significantly associated with strokes.73

In orthodontics, the automatic detection of cephalometric landmarks from radiographs or of facial asymmetry from photographs is expected to shorten the time dentists spend developing problem lists and establishing diagnoses. In prosthodontics, deep learning algorithms can improve the fit, retention, and longevity of prostheses by compensating for the deformation errors of 3D-printed crowns, detecting abutment margins, and predicting the possibility of debonding.

The study by Luo et al., which classified tooth-brushing motions by analyzing 3D motion signals collected from wearable devices with an RNN-based algorithm,62 shows the potential of personalized dentistry. Personalized feedback based on data obtained daily through wearable devices can help patients monitor and improve their own oral hygiene.
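As a rough illustration of this kind of sequence classification, the sketch below uses an LSTM to assign windows of wrist-worn inertial signals to brushing regions; the 6-axis input, 16 regions, and window length are assumptions for illustration rather than details of the study by Luo et al.

```python
import torch
import torch.nn as nn

class BrushingGestureClassifier(nn.Module):
    """Toy RNN classifier mapping windows of wrist-worn inertial signals
    (accelerometer + gyroscope) to brushing regions (illustrative sketch)."""
    def __init__(self, n_channels=6, hidden_size=64, n_regions=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, n_regions)

    def forward(self, x):
        # x: (batch, time steps, channels); classify each window from the
        # final hidden state of the top LSTM layer.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

model = BrushingGestureClassifier()
windows = torch.rand(4, 200, 6)        # 4 windows of 200 samples, 6-axis IMU
logits = model(windows)                # (4, 16) region scores
predicted_region = logits.argmax(dim=1)
```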

To summarize the above discussion, the use of deep learning algorithms is expected to shorten the working time of dentists and ultimately improve treatment outcomes by assisting dentists in almost all aspects of clinical practice, such as tooth numbering, disease detection and classification, image quality enhancement, detection of cephalometric landmarks, and evaluation of prostheses and reduction of their errors. Therefore, the interest and participation of many dental practitioners are necessary to successfully integrate these technologies into dentistry.

Supplement 1.

Excluded studies and reasons for exclusion

Author Year Reason for exclusion
Colchester 1992 No information on algorithm performance evaluation
Economopoulos 2008 The enhanced hexagonal centre-based inner search algorithm proposed in this paper can hardly be considered a deep learning method.
Han 2017 No information on algorithm performance evaluation
Hu 2019 A study to evaluate the brain's perception of cold stimulation
Mendoca 2004 No information on algorithm performance evaluation
Yoon 2018 Research using deep learning for data mining

Acknowledgements

This research was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI20C0129).

References

1
Hype cycle. Gartner. https://www.gartner.com/smarterwithgartner/top-trends-on-the-gartner-hypecycle-for-artificial-intelligence-2019/ (2019).
2
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012. p.1097-105.
3
Fukushima K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 1980;36:193-202.
10.1007/BF003442517370364
4
Hubel DH, Wiesel TN. Receptive fields of single neurones in the cat's striate cortex. J Physiol 1959;148:574-91.
10.1113/jphysiol.1959.sp00630814403679PMC1363130
5
LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998;86:2278-323.
10.1109/5.726791
6
Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer; 2015. p.234-41.
10.1007/978-3-319-24574-4_28
7
Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2017;39:1137-49.
10.1109/TPAMI.2016.257703127295650
8
Suwajanakorn S, Seitz SM, Kemelmacher-Shlizerman I. Synthesizing Obama: learning lip sync from audio. In: ACM Transactions on Graphics 2017;36:1-13.
10.1145/3072959.3073640
9
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997;9:1735-80.
10.1162/neco.1997.9.8.17359377276
10
Cho K, Van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference. 2014. p.1724-34.
10.3115/v1/D14-1179
11
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Advances in Neural Information Processing Systems. 2014. p.2672-80.
12
Chen H, Zhang K, Lyu P, Li H, Zhang L, Wu J, et al. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep 2019;9:1-11.
10.1038/s41598-019-40414-y30846758PMC6405755
13
Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019;48:20180051.
10.1259/dmfr.2018005130835551PMC6592580
14
Zhang K, Wu J, Chen H, Lyu P. An effective teeth recognition method using label tree with cascade network structure. Comput Med Imaging Graph 2018;68:61-70.
10.1016/j.compmedimag.2018.07.00130056291
15
Jader G, Fontineli J, Ruiz M, Abdalla K, Pithon M, Oliveira L. Deep instance segmentation of teeth in panoramic X-ray images. In: 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). 2018;400-7.
10.1109/SIBGRAPI.2018.00058
16
Merdietio Boedi R, Banar N, De Tobel J, Bertels J, Vandermeulen D, Thevissen PW. Effect of lower third molar segmentations on automated tooth development staging using a convolutional neural network. J Forensic Sci 2020;65:481-6.
10.1111/1556-4029.1418231487052
17
Tian S, Dai N, Zhang B, Yuan F, Yu Q, Cheng X. Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks. IEEE Access 2019;7:84817-28.
10.1109/ACCESS.2019.2924262
18
Xu X, Liu C, Zheng Y. 3D tooth segmentation and labeling using deep convolutional neural networks. IEEE Trans Vis Comput Graph 2019;25:2336-48.
10.1109/TVCG.2018.283968529994311
19
Minnema J, Eijnatten M, Hendriksen AA, Liberton N, Pelt DM, Batenburg KJ, et al. Segmentation of dental cone‐beam CT scans affected by metal artifacts using a mixed‐scale dense convolutional neural network. Med Phys 2019;46:5027-35.
10.1002/mp.1379331463937PMC6900023
20
Duong DQ, Nguyen KT, Kaipatur NR, Lou EHM, Noga M, Major PW, et al. Fully automated segmentation of alveolar bone using deep convolutional neural networks from intraoral ultrasound images. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2019. p.6632-5.
10.1109/EMBC.2019.885706031947362
21
De Tobel J, Radesh P, Vandermeulen D, Thevissen PW. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study. J Forensic Odontostomatol 2017;35:42-54.
22
Du X, Chen Y, Zhao J, Xi Y. A convolutional neural network based auto-positioning method for dental arch in rotational panoramic radiography. In: 2018 40th Annu Int Conf IEEE Eng Med Biol Soc (EMBC). IEEE, 2018. p.2615-8.
10.1109/EMBC.2018.851273230440944
23
Liang K, Zhang L, Yang Y, Yang H, Xing Y. A self-supervised deep learning network for low-dose CT reconstruction. In: 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC). 2018. p.1-4.
10.1109/NSSMIC.2018.8824600
24
Hu Z, Jiang C, Sun F, Zhang Q, Ge Y, Yang Y, et al. Artifact correction in low-dose dental CT imaging using Wasserstein generative adversarial networks. Med Phys 2019;46:1686-96.
10.1002/mp.1341530697765
25
Hegazy MAA, Cho MH, Cho MH, Lee SY. U-net based metal segmentation on projection domain for metal artifact reduction in dental CT. Biomed Eng Lett 2019;9:375-85.
10.1007/s13534-019-00110-231456897PMC6694350
26
Dinkla AM, Florkow MC, Maspero M, Savenije MHF, Zijlstra F, Doornaert PAH, et al. Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network. Med Phys 2019;46:4095-104.
10.1002/mp.1366331206701
27
Hatvani J, Basarab A, Tourneret J, Gyöngy M, Kouamé D. A tensor factorization method for 3-D super resolution with application to dental CT. IEEE Trans Med Imaging 2019;38:1524-31.
10.1109/TMI.2018.288351730507496
28
Hatvani J, Horváth A, Michetti J, Basarab A, Kouamé D, Gyöngy M. Deep learning-based super-resolution applied to dental computed tomography. IEEE Trans Radiat Plasma Med Sci 2019;3:120-8.
10.1109/TRPMS.2018.2827239
29
Kumar P, Srivastava MM. Example mining for incremental learning in medical imaging. In: 2018 IEEE Symposium Series on Computational Intelligence (SSCI). 2018. p.48-51.
10.1109/SSCI.2018.8628895
30
Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent 2018;77:106-11.
10.1016/j.jdent.2018.07.01530056118
31
Casalegno F, Newton T, Daher R, Abdelaziz M, Lodi-Rizzini A, Schürmann F, et al. Caries detection with near-infrared transillumination using deep learning. J Dent Res 2019;98:1227-33.
10.1177/002203451987188431449759PMC6761787
32
Moutselos K, Berdouses E, Oulis C, Maglogiannis I. Recognizing occlusal caries in dental intraoral images using deep learning. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2019. p.1617-20.
10.1109/EMBC.2019.885655331946206
33
Prajapati SA, Nagaraj R, Mitra S. Classification of dental diseases using CNN and transfer learning. 2017 5th Int Symp Comput Bus Intell. IEEE, 2017. p.70-4.
10.1109/ISCBI.2017.8053547
34
Liu L, Xu J, Huan Y, Zou Z, Yeh S-C, Zheng L-R. A smart dental health-IoT platform based on intelligent hardware, deep learning and mobile terminal. IEEE J Biomed Heal Informatics 2019;24:898-906.
10.1109/JBHI.2019.291991631180873
35
Bezruk V, Krivenko S, Kryvenko L. Salivary lipid peroxidation and periodontal status detection in Ukrainian atopic children with convolutional neural networks. In: 2017 4th International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&T). 2017. p.122-4.
10.1109/INFOCOMMST.2017.8246364
36
Aberin STA, De Goma JC. Detecting periodontal disease using convolutional neural networks. In: 2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM). 2018. p.1-6.
10.1109/HNICEM.2018.8666389
37
Joo J, Jeong S, Jin H, Lee U, Yoon JY, Kim SC. Periodontal disease detection using convolutional neural networks. In: 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). 2019. p.360-2.
10.1109/ICAIIC.2019.866902130782509
38
Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep 2019;9:1-6.
10.1038/s41598-019-44839-331186466PMC6560098
39
Uthoff RD, Song B, Sunny S, Patrick S, Suresh A, Kolur T, et al. Point-of-care, smartphone-based, dual-modality, dual-view, oral cancer screening device with neural network classification for low-resource communities. PLoS One 2018;13:e0207493.
10.1371/journal.pone.020749330517120PMC6281283
40
Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Sci Rep 2017;7:1-10.
10.1038/s41598-017-12320-828931888PMC5607286
41
Forslid G, Wieslander H, Bengtsson E, Wahlby C, Hirsch J-M, Stark CR, et al. Deep convolutional neural networks for detecting cellular changes due to malignancy. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). 2017. p.82-9.
10.1109/ICCVW.2017.18
42
Das DK, Bose S, Maiti AK, Mitra B, Mukherjee G, Dutta PK. Automatic identification of clinically relevant regions from oral tissue histological images for oral squamous cell carcinoma diagnosis. Tissue Cell 2018;53:111-9.
10.1016/j.tice.2018.06.00430060821
43
Jeyaraj PR, Samuel Nadar ER. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. J Cancer Res Clin Oncol 2019;145:829-37.
10.1007/s00432-018-02834-730603908
44
Ekert T, Krois J, Meinhold L, Elhennawy K, Emara R, Golla T, et al. Deep learning for the radiographic detection of apical lesions. J Endod 2019;45:917-22.e5.
10.1016/j.joen.2019.03.01631160078
45
Murata M, Ariji Y, Ohashi Y, Kawai T, Fukuda M, Funakoshi T, et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol 2019;35:301-7.
10.1007/s11282-018-0363-730539342
46
De Dumast P, Mirabel C, Cevidanes L, Ruellas A, Yatabe M, Ioshida M, et al. A web-based system for neural network based classification in temporomandibular joint osteoarthritis. Comput Med Imaging Graph 2018;67:45-54.
10.1016/j.compmedimag.2018.04.00929753964PMC5987251
47
Kise Y, Ikeda H, Fujii T, Fukuda M, Ariji Y, Fujita H, et al. Preliminary study on the application of deep learning system to diagnosis of Sjögren's syndrome on CT images. Dentomaxillofac Radiol 2019;48:20190019.
10.1259/dmfr.2019001931075042PMC6747436
48
Chu P, Bo C, Liang X, Yang J, Megalooikonomou V, Yang F, et al. Using octuplet siamese network for osteoporosis analysis on dental panoramic radiographs. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2018. p.2579-82.
10.1109/EMBC.2018.851275530440935
49
Kats L, Vered M, Zlotogorski-Hurvitz A, Harpaz I. Atherosclerotic carotid plaque on panoramic radiographs: neural network detection. Int J Comput Dent 2019;22:163-9.
50
Murata S, Lee C, Tanikawa C, Date S. Towards a fully automated diagnostic system for orthodontic treatment in dentistry. In: 2017 IEEE 13th International Conference on e-Science (e-Science). 2017. p.1-8.
10.1109/eScience.2017.12
51
Patcas R, Bernini DAJ, Volokitin A, Agustsson E, Rothe R, Timofte R. Applying artificial intelligence to assess the impact of orthognathic treatment on facial attractiveness and estimated age. Int J Oral Maxillofac Surg 2019;48:77-83.
10.1016/j.ijom.2018.07.01030087062
52
Leonardi RM, Giordano D, Maiorana F, Greco M. Accuracy of cephalometric landmarks on monitor-displayed radiographs with and without image emboss enhancement. Eur J Orthod 2010;32:242-7.
10.1093/ejo/cjp12220022892
53
Qian J, Cheng M, Tao Y, Lin J, Lin H. CephaNet: an improved faster R-CNN for cephalometric landmark detection. 2019 IEEE 16th Int Symp Biomed Imaging (ISBI 2019). 2019. p.868-71.
10.1109/ISBI.2019.875943729846702
54
Torosdagli N, Liberton DK, Verma P, Sincan M, Lee JS, Bagci U. Deep geodesic learning for segmentation and anatomical landmarking. IEEE Trans Med Imaging 2019;38:919-31.
10.1109/TMI.2018.287581430334750PMC6475529
55
Shen Z, Shang X, Li Y, Bao Y, Zhang Xj, Dong X, et al. PredNet and CompNet: prediction and high-precision compensation of in-plane shape deformation for additive manufacturing. In: 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE). 2019. p.462-7.
10.1109/COASE.2019.8842894PMC6803195
56
Yamaguchi S, Lee C, Karaer O, Ban S, Mine A, Imazato S. Predicting the debonding of CAD/CAM composite resin crowns with AI. J Dent Res 2019;98:1234-8.
10.1177/002203451986764131379234
57
Zhang B, Dai N, Tian S, Yuan F, Yu Q. The extraction method of tooth preparation margin line based on S‐Octree CNN. Int J Numer Method Biomed Eng 2019;35:e3241.
10.1002/cnm.3241
58
Zhao M, Xiong G, Shang X, Liu C, Shen Z, Wu H. Nonlinear deformation prediction and compensation for 3D printing based on CAE neural networks. In: 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE). 2019. p.667-72.
10.1109/COASE.2019.8843210
59
Milošević D, Vodanović M, Galić I, Subašić M. Estimating biological gender from panoramic dental X-ray images. In: 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). 2019. p.105-10.
10.1109/ISPA.2019.8868804
60
Ilić I, Vodanović M, Subašić M. Gender estimation from panoramic dental X-ray images using deep convolutional networks. In: IEEE EUROCON 2019 - 18th International Conference on Smart Technologies. 2019. p.1-5.
10.1109/EUROCON.2019.8861726
61
Ali H, Khursheed M, Fatima SK, Shuja SM, Noor S. Object recognition for dental instruments using SSD-MobileNet. In: 2019 International Conference on Information Science and Communication Technology (ICISCT). 2019. p.1-6.
10.1109/CISCT.2019.8777441
62
Luo C, Feng X, Chen J, Li J, Xu W, Li W, et al. Brush like a dentist: accurate monitoring of toothbrushing via wrist-worn gesture sensing. In: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. 2019. p.1234-42.
10.1109/INFOCOM.2019.8737513
63
Chollet F. Deep learning mit python und keras: das praxis-handbuch vom entwickler der kerasbibliothek. MITP-Verlags GmbH & Co. KG. 2018.
64
McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biol 1990;52:99-115.
10.1016/S0092-8240(05)80006-0
65
Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 1958;65:386-408.
10.1037/h004251913602029
66
Minsky M, Papert S. Perceptrons: an introduction to computational geometry. MIT press; 2017.
10.7551/mitpress/11301.001.0001
67
Werbos PJ. The roots of backpropagation: from ordered derivatives to neural networks and political forecasting. John Wiley & Sons; 1994.
68
Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323:533-6.
10.1038/323533a0
69
Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks. In: Proceedings of the fourteenth international conference on artificial intelligence and statistics. 2011. p.315-23.
70
Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput 2006;18:1527-54.
10.1162/neco.2006.18.7.152716764513
71
Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010. p.249-56.
72
He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In: Proceedings of the IEEE international conference on computer vision. 2015. p.1026-34.
10.1109/ICCV.2015.123
73
Takaya N, Yuan C, Chu B, Saam T, Underhill H, Cai J, et al. Association between carotid plaque characteristics and subsequent ischemic cerebrovascular events: a prospective assessment with MRI - initial results. Stroke 2006;37:818-23.
10.1161/01.STR.0000204638.91099.9116469957
74
Eun H, Kim C. Oriented tooth localization for periapical dental X-ray images via convolutional neural network. In: 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). 2016. p.1-7.
10.1109/APSIPA.2016.7820720
75
Oktay AB. Tooth detection with convolutional neural networks. 2017 Med Technol Natl Conf TIPTEKNO 2017. 2017. p.1-4.
76
Miki Y, Muramatsu C, Hayashi T, Zhou X, Hara T, Katsumata A, et al. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput Biol Med 2017;80:24-9.
10.1016/j.compbiomed.2016.11.00327889430
77
Koch TL, Perslev M, Igel C, Brandt SS. Accurate segmentation of dental panoramic radiographs with U-nets. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). 2019. p.15-9.
10.1109/ISBI.2019.8759563
78
Hiraiwa T, Ariji Y, Fukuda M, Kise K, Nakata K, Katsumata A, et al. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol 2019;48:20180218.
10.1259/dmfr.2018021830379570PMC6476355
79
Vinayahalingam S, Xi T, Berge S, Maal T, de Jong G. Automated detection of third molars and mandibular nerve by deep learning. Sci Rep 2019;9:1-7.
10.1038/s41598-019-45487-331227772PMC6588560
80
Yauney G, Angelino K, Edlund D, Shah P. Convolutional neural network for combined classification of fluorescent biomarkers and expert annotations using white light images. In: 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE). 2017. p.303-9.
10.1109/BIBE.2017.00-37
81
Rana A, Yauney G, Wong LC, Gupta O, Muftu A, Shah P. Automated segmentation of gingival diseases from oral images. In: 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT). 2017. p.144-7.
10.1109/HIC.2017.822760529234688PMC5717502
82
Yang J, Xie Y, Liu L, Xia B, Cao Z, Guo C. Automated dental image analysis by deep learning on small dataset. In: 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC). 2018. p.492-7.
10.1109/COMPSAC.2018.00076
83
Song T, Landini G, Fouad S, Mehanna H. Epithelial segmentation from in situ hybridisation histological samples using a deep central attention learning approach. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). 2019. p.1527-31.
10.1109/ISBI.2019.875938431535160PMC6768892
84
Shen Z, Shang X, Zhao M, Dong X, Xiong G, Wang F-Y. A learning-based framework for error compensation in 3d printing. IEEE Trans Cybern 2019;49:4042-50.
10.1109/TCYB.2019.289855330843813
85
Alarifi A, AlZubi AA. Memetic search optimization along with genetic scale recurrent neural network for predictive rate of implant treatment. J Med Syst 2018;42:202.
10.1007/s10916-018-1051-130225666