METHOD: In the present study, three separate cohorts comprising 1288 breast lesions from three countries (Malaysia, Iran, and Turkey) were used for ML model development and external validation. The model was trained on ultrasound images of 725 breast lesions; the remaining data were reserved for external validation. An expert radiologist and a radiology resident classified the lesions according to the BI-RADS lexicon. Thirteen morphometric features were extracted from the lesion contour and underwent a three-step feature selection process. Five features were retained and fed into the model both alone and combined with the imaging signs described in the BI-RADS reference guide. A support vector classifier was then trained and optimized.
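The pipeline described above (a small set of selected features feeding a tuned support vector classifier) can be sketched as follows. This is a minimal illustration, not the study's actual code: the data are synthetic, and the hyperparameter grid and cross-validation settings are assumptions.

```python
# Hypothetical sketch of the described pipeline: five selected
# morphometric features fed to a support vector classifier, with
# hyperparameters tuned by cross-validated grid search (assumed ranges).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(725, 5))     # 5 selected features (synthetic stand-in)
y = rng.integers(0, 2, size=725)  # benign (0) vs. malignant (1) labels

model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(probability=True)),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1]},
    cv=5,
    scoring="roc_auc",
)
model.fit(X, y)
print(model.best_params_)
```

Scaling before the SVC matters because support vector machines are sensitive to feature magnitudes; wrapping both steps in a pipeline keeps the scaler inside each cross-validation fold.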
RESULTS: The diagnostic profile of the model under various input data was compared with those of the expert radiologist and the radiology resident. The agreement of each approach with histopathologic specimens was also determined. Using BI-RADS and morphometric features, the model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.885, outperforming the expert radiologist (AUC 0.814) and the radiology resident (AUC 0.632) across all cohorts. DeLong's test showed that the AUC of the ML protocol differed significantly from that of the expert radiologist (ΔAUC = 0.071, 95% CI: 0.056-0.086, P = 0.005).
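The comparison above tests whether two AUCs measured on the same cases differ significantly. As a lightweight illustration of the idea (a paired bootstrap on the AUC difference, not DeLong's analytic variance used in the study), with entirely synthetic scores:

```python
# Paired bootstrap CI for the difference between two correlated AUCs.
# Synthetic data; this is NOT DeLong's test, just a resampling sketch
# of the same question: do two readers/models differ on the same cases?
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, size=n)                 # ground-truth labels
score_a = y + rng.normal(0, 0.8, size=n)       # synthetic model scores
score_b = y + rng.normal(0, 1.5, size=n)       # synthetic reader scores

deltas = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)           # resample cases in pairs
    if y[idx].min() == y[idx].max():
        continue                               # need both classes present
    deltas.append(roc_auc_score(y[idx], score_a[idx])
                  - roc_auc_score(y[idx], score_b[idx]))

lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"dAUC 95% CI: ({lo:.3f}, {hi:.3f})")
```

Resampling whole cases (rather than each score independently) preserves the correlation between the two sets of scores, which is the point of a paired comparison.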
CONCLUSIONS: These results support a possible role for morphometric features in enhancing already well-accepted classification schemes.
MATERIALS AND METHODS: An electronic search was carried out in the PubMed, Scopus, and Web of Science Core Collection databases. Two reviewers searched the databases independently and in parallel. The initial search was conducted on 6 July 2021. The publication period was unrestricted; however, the search was limited to articles involving human participants and published in English. Combinations of Medical Subject Headings (MeSH) terms and free-text terms were used as search keywords in each database. The following data were extracted from the methods and results sections of the selected papers: the size and conditional properties of the datasets used to train the intelligent systems, and the problems studied, including unilateral CLP, bilateral CLP, unilateral cleft lip and alveolus, unilateral cleft lip, hypernasality, dental characteristics, and sagittal jaw relationship in children with CLP.
RESULTS: Based on the predefined search strings and accompanying database keywords, a total of 44 articles were retrieved from Scopus, PubMed, and Web of Science. After full-text reading, 12 papers were included in the systematic analysis.
CONCLUSIONS: Artificial intelligence provides an advanced technology that can be employed in AI-enabled software for accurate landmark detection, rapid digital cephalometric analysis, clinical decision-making, and treatment prediction. In children with corrected unilateral cleft lip and palate, ML can help detect cephalometric predictors of a future need for orthognathic surgery.
METHODS: Frontal-view intraoral photographs fulfilling the selection criteria were collected. Along the gingival margin, the gingival condition of individual sites was labelled as healthy, diseased, or questionable. Photographs were randomly assigned to training or validation datasets. The training datasets were input into a novel artificial intelligence system, and its accuracy in detecting gingivitis (sensitivity, specificity, and mean intersection-over-union) was analysed on the validation dataset. Accuracy was reported according to the STARD 2015 statement.
RESULTS: A total of 567 intraoral photographs were collected and labelled, of which 80% were used for training and 20% for validation. The training datasets comprised 113,745,208 pixels in total, of which 9,270,413, 5,711,027, and 4,596,612 pixels were labelled as healthy, diseased, and questionable, respectively. The validation datasets comprised 28,319,607 pixels, of which 1,732,031, 1,866,104, and 1,116,493 pixels were labelled as healthy, diseased, and questionable, respectively. The AI correctly predicted 1,114,623 healthy and 1,183,718 diseased pixels, with a sensitivity of 0.92 and a specificity of 0.94. The mean intersection-over-union of the system was 0.60, above the commonly accepted threshold of 0.50.
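The three metrics reported above are all simple functions of pixel-level confusion counts. A minimal sketch, using illustrative counts rather than the study's actual confusion matrix (which the abstract does not fully report):

```python
# Pixel-level metrics from confusion counts (diseased = positive class).
# The counts below are illustrative, chosen only to show the formulas.
def pixel_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # fraction of diseased pixels found
    specificity = tn / (tn + fp)   # fraction of healthy pixels found
    iou = tp / (tp + fp + fn)      # intersection-over-union for "diseased"
    return sensitivity, specificity, iou

sens, spec, iou = pixel_metrics(tp=920, fp=60, tn=940, fn=80)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} IoU={iou:.2f}")
# -> sensitivity=0.92 specificity=0.94 IoU=0.87
```

Note that IoU is always at most the sensitivity for the same class, since it additionally penalizes false positives; this is why an IoU of 0.60 can accompany a sensitivity of 0.92.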
CONCLUSIONS: Artificial intelligence could identify specific sites with and without gingival inflammation, with high sensitivity and high specificity on par with visual examination by a human dentist. This system may be used to monitor the effectiveness of patients' plaque control.