Displaying publications 21 - 40 of 118 in total

  1. Acharya UR, Mookiah MRK, Koh JEW, Tan JH, Bhandary SV, Rao AK, et al.
    Comput Biol Med, 2017 May 01;84:59-68.
    PMID: 28343061 DOI: 10.1016/j.compbiomed.2017.03.016
    Diabetic macular edema (DME) is caused by prolonged and uncontrolled diabetes mellitus (DM) and affects the vision of diabetic subjects. DME is graded based on the location of exudates relative to the macula. It is clinically diagnosed from fundus images, which is tedious and time-consuming. Regular eye screening and subsequent treatment may prevent vision loss. Hence, in this work, a hybrid system based on the Radon transform (RT), discrete wavelet transform (DWT) and discrete cosine transform (DCT) is proposed for automated detection of DME. The fundus images are subjected to RT to obtain sinograms, and DWT is applied on these sinograms to extract wavelet coefficients (approximate, horizontal, vertical and diagonal). DCT is applied on the approximate coefficients to obtain 2D-DCT coefficients. Further, these coefficients are converted into a 1D vector by arranging them in a zig-zag manner. This 1D signal is subjected to locality sensitive discriminant analysis (LSDA). Finally, various supervised classifiers are used to classify the three classes using the significant features. The proposed technique yielded classification accuracies of 100% and 97.01% using two and seven significant features for the private and public (MESSIDOR) databases, respectively. Also, a maculopathy index is formulated with two significant parameters to discriminate the three groups distinctly using a single integer. Hence, the obtained results suggest that this system can be used as a DME screening tool for diabetic subjects.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
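    A minimal sketch of the front-end feature pipeline described above (Radon transform, 2D DWT, 2D DCT, zig-zag ordering), assuming scikit-image, PyWavelets and SciPy rather than the authors' implementation; the placeholder image, wavelet family, angle set and truncation length are illustrative guesses, and the LSDA and classification stages are omitted.
      import numpy as np
      import pywt
      from scipy.fft import dctn
      from skimage.data import camera          # placeholder image standing in for a fundus photograph
      from skimage.transform import radon

      def zigzag(block):
          """Flatten a 2D coefficient array in JPEG-style zig-zag order."""
          h, w = block.shape
          order = sorted(((i, j) for i in range(h) for j in range(w)),
                         key=lambda p: (p[0] + p[1],
                                        -p[1] if (p[0] + p[1]) % 2 else p[1]))
          return np.array([block[i, j] for i, j in order])

      img = camera().astype(float)
      sinogram = radon(img, theta=np.linspace(0.0, 180.0, 180), circle=False)  # Radon transform
      approx, (horiz, vert, diag) = pywt.dwt2(sinogram, 'db1')                 # one-level 2D DWT
      dct_coeffs = dctn(approx, norm='ortho')                                  # 2D DCT of the approximation band
      features = zigzag(dct_coeffs)[:200]      # 1D feature vector of low-frequency coefficients
      print(features.shape)                    # would next be passed to LSDA and a classifier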
  2. Acharya UR, Bhat S, Koh JEW, Bhandary SV, Adeli H
    Comput Biol Med, 2017 Sep 01;88:72-83.
    PMID: 28700902 DOI: 10.1016/j.compbiomed.2017.06.022
    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for cost-effective and accurate glaucoma screening. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images, followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The convolution process produces textons, the basic microstructures of typical images. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
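    A hedged sketch of the pre-processing stage described above: adaptive histogram equalization followed by convolution with a filter bank. Because the Leung-Malik, Schmid and MR banks are not available in common Python libraries, a small Gaussian-derivative bank is used purely as a stand-in, and texton clustering, LCP features and SFFS are not shown.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.color import rgb2gray
      from skimage.data import astronaut               # placeholder RGB image standing in for a fundus photograph
      from skimage.exposure import equalize_adapthist

      gray = equalize_adapthist(rgb2gray(astronaut())) # grayscale conversion + adaptive histogram equalization

      # Convolve with a small multi-scale derivative bank (stand-in for the LM/S/MR filter banks).
      responses = [gaussian_filter(gray, sigma=s, order=o)
                   for s in (1, 2, 4)
                   for o in ((0, 1), (1, 0), (0, 2), (2, 0))]
      stack = np.stack(responses, axis=-1)             # per-pixel filter responses
      print(stack.shape)                               # textons would be clustered from this stack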
  3. Maheshwari S, Pachori RB, Kanhangad V, Bhandary SV, Acharya UR
    Comput Biol Med, 2017 Sep 01;88:142-149.
    PMID: 28728059 DOI: 10.1016/j.compbiomed.2017.06.017
    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time-consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for the automated diagnosis of glaucoma using digital fundus images. The variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features, namely Kapur entropy, Renyi entropy, Yager entropy, and fractal dimensions, are extracted from the VMD components. The ReliefF algorithm is used to select the discriminatory features, and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid ophthalmologists in confirming their manual reading of classes (glaucoma or normal) from fundus images.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
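    An illustrative sketch of two of the entropy features listed above (Renyi and Kapur), computed here from a normalized grey-level histogram of a placeholder image; the 2D variational mode decomposition, Yager entropy, fractal dimensions, ReliefF selection and LS-SVM stages are not reproduced because no standard Python implementation is assumed for them.
      import numpy as np
      from skimage.data import camera                 # placeholder standing in for a VMD component

      def grey_level_probs(img, bins=256):
          """Normalized grey-level histogram with empty bins dropped."""
          counts, _ = np.histogram(img.ravel(), bins=bins)
          p = counts / counts.sum()
          return p[p > 0]

      def renyi_entropy(p, alpha=2.0):
          return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

      def kapur_entropy(p, alpha=0.5, beta=2.0):
          # Kapur entropy of order alpha and type beta; reduces to Renyi entropy when beta = 1.
          return np.log(np.sum(p ** alpha) / np.sum(p ** beta)) / (beta - alpha)

      p = grey_level_probs(camera())
      print(renyi_entropy(p), kapur_entropy(p))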
  4. Koh JEW, Acharya UR, Hagiwara Y, Raghavendra U, Tan JH, Sree SV, et al.
    Comput Biol Med, 2017 May 01;84:89-97.
    PMID: 28351716 DOI: 10.1016/j.compbiomed.2017.03.008
    Vision is paramount to humans to lead an active personal and professional life. The prevalence of ocular diseases is rising, and diseases such as glaucoma, Diabetic Retinopathy (DR) and Age-related Macular Degeneration (AMD) are the leading causes of blindness in developed countries. Identifying these diseases in mass screening programmes is time-consuming and labor-intensive, and the diagnosis can be subjective. The use of an automated computer-aided diagnosis system will reduce the time taken for analysis and will also reduce the inter-observer subjective variability in image interpretation. In this work, we propose one such system for the automatic classification of normal from abnormal (DR, AMD, glaucoma) images. We had a total of 404 normal and 1082 abnormal fundus images in our database. As the first step, 2D-Continuous Wavelet Transform (CWT) decomposition was performed on the fundus images of the two classes. Subsequently, energy features and various entropies, namely Yager, Renyi, Kapur, Shannon, and Fuzzy, were extracted from the decomposed images. Then, an adaptive synthetic sampling approach was applied to balance the normal and abnormal datasets. Next, the extracted features were ranked according to their significance using Particle Swarm Optimization (PSO). Thereupon, the ranked and selected features were used to train the random forest classifier using stratified 10-fold cross-validation. Overall, the proposed system presented a performance rate of 92.48%, and a sensitivity and specificity of 89.37% and 95.58%, respectively, using 15 features. This novel system shows promise in detecting abnormal fundus images and hence could be a valuable adjunct eye health screening tool that could be employed in polyclinics, thereby reducing the workload of specialists at hospitals.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
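    A brief sketch of the class-balancing and classification stages mentioned above, assuming the imbalanced-learn and scikit-learn libraries and random placeholder feature vectors; the 2D-CWT/entropy feature extraction and the PSO-based ranking are outside its scope.
      import numpy as np
      from imblearn.over_sampling import ADASYN
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1486, 15))                        # placeholder for 15 ranked wavelet/entropy features
      y = np.r_[np.zeros(404), np.ones(1082)].astype(int)    # 404 normal vs 1082 abnormal, as in the abstract

      X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, y)   # adaptive synthetic over-sampling
      scores = cross_val_score(RandomForestClassifier(random_state=0), X_bal, y_bal,
                               cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
      print(scores.mean())                                   # stratified 10-fold cross-validated accuracy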
  5. Acharya UR, Ng WL, Rahmat K, Sudarshan VK, Koh JEW, Tan JH, et al.
    Comput Biol Med, 2017 Dec 01;91:13-20.
    PMID: 29031099 DOI: 10.1016/j.compbiomed.2017.10.001
    Shear wave elastography (SWE) examination using ultrasound elastography (USE) is a popular imaging procedure for obtaining elasticity information of breast lesions. Elasticity parameters obtained through SWE can be used as biomarkers that can distinguish malignant breast lesions from benign ones. Furthermore, the elasticity parameters extracted from SWE can speed up the diagnosis and possibly reduce human errors. In this paper, the Shearlet transform and local binary pattern histograms (LBPH) are proposed as an original algorithm to differentiate malignant and benign breast lesions. First, the Shearlet transform is applied on the SWE images to acquire low-frequency, horizontal and vertical cone coefficients. Next, LBPH features are extracted from the Shearlet transform coefficients and subjected to dimensionality reduction using locality sensitive discriminant analysis (LSDA). The reduced LSDA components are ranked and then fed to several classifiers for the automated classification of breast lesions. A probabilistic neural network classifier trained with only the seven top-ranked features performed best, and achieved 98.08% accuracy, 98.63% sensitivity, and 97.59% specificity in distinguishing malignant from benign breast lesions. The high sensitivity and specificity of our system indicate that it can be employed as a primary screening tool for faster diagnosis of breast malignancies, thereby possibly reducing the mortality rate due to breast cancer.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
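    A small sketch of the local binary pattern histogram step named above, assuming scikit-image; the Shearlet decomposition and LSDA reduction that precede and follow it in the paper are not available in common libraries and are left out.
      import numpy as np
      from skimage.data import camera                     # placeholder for a Shearlet coefficient map
      from skimage.feature import local_binary_pattern

      P, R = 8, 1                                         # neighbours and radius (illustrative values)
      lbp = local_binary_pattern(camera(), P, R, method='uniform')
      hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)   # P + 2 uniform-LBP bins
      print(hist)                                         # LBPH feature vector for one coefficient map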
  6. Siddiqui MF, Reza AW, Shafique A, Omer H, Kanesan J
    Magn Reson Imaging, 2017 Dec;44:82-91.
    PMID: 28855113 DOI: 10.1016/j.mri.2017.08.005
    Sensitivity Encoding (SENSE) is a widely used technique in Parallel Magnetic Resonance Imaging (MRI) to reduce scan time. A reconfigurable hardware-based architecture for SENSE can potentially provide image reconstruction with much less computation time. An application-specific hardware platform for SENSE may dramatically increase the power efficiency of the system and can decrease the execution time to obtain MR images. A new implementation of SENSE on a Field Programmable Gate Array (FPGA) is presented in this study, which provides real-time SENSE reconstruction right on the receiver coil data acquisition system with no need to transfer the raw data to the MRI server, thereby minimizing the transmission noise and memory usage. The proposed SENSE architecture can reconstruct MR images using receiver coil sensitivity maps obtained using pre-scan and eigenvector (E-maps) methods. The results show that the proposed system consumes remarkably less computation time for SENSE reconstruction, i.e., 0.164 ms at 200 MHz, while maintaining the quality of the reconstructed images with good mean SNR (29+ dB), low RMSE (<5×10⁻²) and artefact power (<9×10⁻⁴) comparable to conventional SENSE reconstruction. A comparison of the center line profiles of the reconstructed and reference images also indicates a good quality of the reconstructed images. Furthermore, the results indicate that the proposed architectural design can prove to be a significant tool for SENSE reconstruction in modern MRI scanners, and its low power consumption makes it well suited to portable MRI scanners.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
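    A toy numpy sketch of the SENSE unfolding that the hardware described above accelerates: with an acceleration factor of R = 2, each aliased pixel is the coil-weighted sum of two true pixels, and the true values are recovered by a per-pixel least-squares solve using the coil sensitivity maps. The array sizes, sensitivities and test object are synthetic assumptions, and the FPGA pipeline itself is not reproduced.
      import numpy as np

      rng = np.random.default_rng(1)
      N, n_coils, R = 64, 4, 2                       # column length, receiver coils, acceleration factor
      sens = rng.normal(size=(n_coils, N)) + 1j * rng.normal(size=(n_coils, N))   # synthetic coil maps
      truth = np.abs(rng.normal(size=N))             # synthetic 1D object (one image column)

      # Undersampling by R folds pixel x onto pixel x + N/R: simulate the aliased coil signals.
      aliased = np.array([s[:N // R] * truth[:N // R] + s[N // R:] * truth[N // R:] for s in sens])

      # SENSE unfolding: solve a tiny least-squares system for every aliased pixel.
      recon = np.zeros(N)
      for x in range(N // R):
          C = sens[:, [x, x + N // R]]               # n_coils x R sensitivity matrix for this pixel pair
          v, *_ = np.linalg.lstsq(C, aliased[:, x], rcond=None)
          recon[x], recon[x + N // R] = np.abs(v)
      print(np.allclose(recon, truth))               # True: exact recovery in this noiseless toy case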
  7. Gandhamal A, Talbar S, Gajre S, Hani AF, Kumar D
    Comput Biol Med, 2017 Apr 01;83:120-133.
    PMID: 28279861 DOI: 10.1016/j.compbiomed.2017.03.001
    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in the classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation technique locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between the minimum and maximum gray values and the image gradient locally, thereby strengthening the edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
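    A compact sketch of a global gray-level S-curve mapping of the kind discussed above, using a logistic function centred on the image mean; the gain and midpoint are arbitrary assumptions, and the paper's local (window-wise) application and its evaluation metrics are not shown. scikit-image's exposure.adjust_sigmoid offers an equivalent ready-made mapping.
      import numpy as np
      from skimage.data import camera
      from skimage.util import img_as_float

      def s_curve(img, gain=8.0, midpoint=None):
          """Sigmoid (S-curve) gray-level transformation of a float image in [0, 1]."""
          x = img_as_float(img)
          m = x.mean() if midpoint is None else midpoint   # centre the curve on the mean grey level
          y = 1.0 / (1.0 + np.exp(-gain * (x - m)))        # logistic stretch of the grey values
          return (y - y.min()) / (y.max() - y.min())       # rescale back to [0, 1]

      enhanced = s_curve(camera())
      print(enhanced.min(), enhanced.max())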
  8. Suriani NS, Hussain A, Zulkifley MA
    Sensors (Basel), 2013 Aug 05;13(8):9966-98.
    PMID: 23921828 DOI: 10.3390/s130809966
    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
  9. Liu F, Wang H, Liang SN, Jin Z, Wei S, Li X, et al.
    Comput Biol Med, 2023 May;157:106790.
    PMID: 36958239 DOI: 10.1016/j.compbiomed.2023.106790
    Structural magnetic resonance imaging (sMRI) is a popular technique that is widely applied in Alzheimer's disease (AD) diagnosis. However, only a few structural atrophy areas in sMRI scans are highly associated with AD. The degree of atrophy in patients' brain tissues and the distribution of lesion areas differ among patients. Therefore, a key challenge in sMRI-based AD diagnosis is identifying discriminating atrophy features. Hence, we propose a multiplane and multiscale feature-level fusion attention (MPS-FFA) model. The model has three components. (1) A feature encoder uses a multiscale feature extractor with hybrid attention layers to simultaneously capture and fuse multiple pathological features in the sagittal, coronal, and axial planes. (2) A global attention classifier combines clinical scores and two global attention layers to evaluate the feature impact scores and balance the relative contributions of different feature blocks. (3) A feature similarity discriminator minimizes the feature similarities among heterogeneous labels to enhance the ability of the network to discriminate atrophy features. The MPS-FFA model provides improved interpretability for identifying discriminating features using feature visualization. The experimental results on the baseline sMRI scans from two databases confirm the effectiveness (e.g., accuracy and generalizability) of our method in locating pathological regions. The source code is available at https://github.com/LiuFei-AHU/MPSFFA.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
  10. Dey A, Chattopadhyay S, Singh PK, Ahmadian A, Ferrara M, Senu N, et al.
    Sci Rep, 2021 Dec 15;11(1):24065.
    PMID: 34911977 DOI: 10.1038/s41598-021-02731-z
    COVID-19 is a respiratory disease that causes infection in both the lungs and the upper respiratory tract. The World Health Organization (WHO) has declared it a global pandemic because of its rapid spread across the globe. The most common method for COVID-19 diagnosis is real-time reverse transcription-polymerase chain reaction (RT-PCR), which takes a significant amount of time to produce a result. Computer-based medical image analysis is more beneficial for the diagnosis of such a disease as it can give better results in less time. Computed Tomography (CT) scans are used to monitor lung diseases including COVID-19. In this work, a hybrid model for COVID-19 detection has been developed, which has two key stages. In the first stage, we have fine-tuned the parameters of the pre-trained convolutional neural networks (CNNs) to extract some features from the COVID-19 affected lungs. As pre-trained CNNs, we have used two standard CNNs, namely GoogleNet and ResNet18. Then, we have proposed a hybrid meta-heuristic feature selection (FS) algorithm, named Manta Ray Foraging based Golden Ratio Optimizer (MRFGRO), to select the most significant feature subset. The proposed model is implemented over three publicly available datasets, namely the COVID-CT dataset, the SARS-COV-2 dataset, and the MOSMED dataset, and attains state-of-the-art classification accuracies of 99.15%, 99.42% and 95.57%, respectively. The obtained results confirm that the proposed approach is quite efficient when compared to the local texture descriptors used for COVID-19 detection from chest CT-scan images.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted/methods*
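    An illustrative PyTorch sketch of the first stage described above: adapting an ImageNet-pre-trained ResNet18 so it can be fine-tuned on COVID-19 CT slices and used as a deep feature extractor. The batch, transforms and hyper-parameters are placeholders, and the MRFGRO feature-selection stage is not shown.
      import torch
      import torch.nn as nn
      from torchvision import models

      # Load a pre-trained ResNet18 and replace the classifier head for two classes (COVID vs non-COVID);
      # this is the conventional fine-tuning setup, not the authors' exact code.
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      model.fc = nn.Linear(model.fc.in_features, 2)

      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate is an assumption
      criterion = nn.CrossEntropyLoss()

      x = torch.randn(4, 3, 224, 224)                # placeholder batch standing in for CT slices
      y = torch.randint(0, 2, (4,))
      loss = criterion(model(x), y)                  # one fine-tuning step on the placeholder batch
      loss.backward()
      optimizer.step()

      # Penultimate-layer activations can then serve as the deep features from which a subset
      # would be selected (MRFGRO in the paper).
      features = nn.Sequential(*list(model.children())[:-1])(x).flatten(1)
      print(features.shape)                          # torch.Size([4, 512])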
  11. Ali HH, Sunar MS, Kolivand H
    PLoS One, 2017;12(6):e0178415.
    PMID: 28632740 DOI: 10.1371/journal.pone.0178415
    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving crisp boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling when calculating ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along the view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points used to evaluate light scattering and then introducing bilateral interpolation to improve the volumetric shadows. This is achieved by significantly reducing the inherent deficiencies of shadow maps. The technique produces soft, high-quality volumetric shadows with good performance, which shows its potential for interactive applications.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
  12. Rasel MA, Abdul Kareem S, Kwan Z, Yong SS, Obaidellah U
    Comput Biol Med, 2024 Aug;178:108758.
    PMID: 38905895 DOI: 10.1016/j.compbiomed.2024.108758
    Melanoma, one of the deadliest types of skin cancer, accounts for thousands of fatalities globally. The bluish, blue-whitish, or blue-white veil (BWV) is a critical feature for diagnosing melanoma, yet research into detecting BWV in dermatological images is limited. This study utilizes a non-annotated skin lesion dataset, which is converted into an annotated dataset using a proposed imaging algorithm (color threshold techniques) on lesion patches based on color palettes. A Deep Convolutional Neural Network (DCNN) is designed and trained separately on three individual and combined dermoscopic datasets, using custom layers instead of standard activation function layers. The model is developed to categorize skin lesions based on the presence of BWV. The proposed DCNN demonstrates superior performance compared to conventional BWV detection models across different datasets. The model achieves a testing accuracy of 85.71% on the augmented PH2 dataset, 95.00% on the augmented ISIC archive dataset, 95.05% on the combined augmented (PH2+ISIC archive) dataset, and 90.00% on the Derm7pt dataset. An explainable artificial intelligence (XAI) algorithm is subsequently applied to interpret the DCNN's decision-making process for BWV detection. The proposed approach, coupled with XAI, significantly improves the detection of BWV in skin lesions, outperforming existing models and providing a robust tool for early melanoma diagnosis.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
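    A hedged sketch of the kind of color-threshold rule the imaging algorithm above could apply to lesion patches: pixels whose hue, saturation and value fall inside a blue-gray range are flagged as candidate blue-white veil. The numeric thresholds and the 5% coverage rule are illustrative guesses, not the paper's calibrated color palettes.
      import numpy as np
      from skimage.color import rgb2hsv
      from skimage.data import astronaut             # placeholder RGB image standing in for a lesion patch

      def bwv_mask(rgb, hue_range=(0.50, 0.72), min_sat=0.15, max_val=0.85):
          """Boolean mask of pixels falling in an (assumed) blue-white-veil color range."""
          hsv = rgb2hsv(rgb)
          h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
          return (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= min_sat) & (v <= max_val)

      mask = bwv_mask(astronaut())
      label = int(mask.mean() > 0.05)                # patch labelled BWV-present if enough pixels match
      print(mask.mean(), label)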
  13. Rasel MA, Kareem SA, Obaidellah U
    Comput Biol Med, 2024 Dec;183:109250.
    PMID: 39395346 DOI: 10.1016/j.compbiomed.2024.109250
    The color of skin lesions is a crucial diagnostic feature for identifying malignant melanoma and other skin diseases. Typical colors associated with melanocytic lesions include tan, brown, black, red, white, and blue-gray. This study introduces a novel feature: the number of colors present in lesions, which can indicate the severity of skin diseases and help distinguish melanomas from benign lesions. We propose a color histogram analysis, a traditional image processing technique, to analyze the pixels of skin lesions from three publicly available datasets: PH2, ISIC2016, and Med-Node, which include dermoscopic and non-dermoscopic images. While the PH2 dataset contains ground truth about skin lesion colors, the ISIC2016 and Med-Node datasets lack such annotations; our algorithm establishes this ground truth using the color histogram analysis based on the PH2 dataset. We then design and train a 19-layer Convolutional Neural Network (CNN) with different skip connections of residual blocks to classify lesions into three categories based on the number of colors present. The DeepDream algorithm is utilized to visualize the learned features of different layers, and multiple configurations of the proposed CNN are tested, achieving the highest weighted F1-score of 75.00 % on the test set. LIME is subsequently applied to identify the most important features influencing the model's decision-making. The findings demonstrate that the number of colors in lesions is a significant feature for describing skin conditions. The proposed CNN, particularly with three skip connections, shows strong potential for clinical application in diagnosing melanoma, supporting its use alongside traditional diagnostic methods.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
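    A hedged sketch of the color-counting idea above: each lesion pixel is assigned to the nearest of six reference melanoma colors, and a color counts as present when it covers a minimum fraction of the lesion. The reference RGB values and the 5% coverage threshold are assumptions for illustration, not the paper's calibrated histogram analysis.
      import numpy as np

      # Approximate reference colors commonly listed for melanocytic lesions (assumed RGB values).
      REFERENCE = {
          'white': (255, 255, 255),
          'red': (200, 60, 60),
          'light-brown': (205, 133, 63),
          'dark-brown': (101, 67, 33),
          'blue-gray': (100, 120, 150),
          'black': (20, 20, 20),
      }

      def count_colors(lesion_rgb, min_fraction=0.05):
          """Number of reference colors covering at least min_fraction of the lesion pixels."""
          pixels = lesion_rgb.reshape(-1, 3).astype(float)
          refs = np.array(list(REFERENCE.values()), dtype=float)
          nearest = np.argmin(np.linalg.norm(pixels[:, None, :] - refs[None, :, :], axis=2), axis=1)
          fractions = np.bincount(nearest, minlength=len(refs)) / len(pixels)
          return int((fractions >= min_fraction).sum())

      lesion = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3))   # synthetic lesion patch
      print(count_colors(lesion))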
  14. Lim WX, Chen Z
    Med Biol Eng Comput, 2024 Aug;62(8):2571-2583.
    PMID: 38649629 DOI: 10.1007/s11517-024-03093-0
    Diabetic retinopathy produces lesions (e.g., exudates, hemorrhages, and microaneurysms) that are minute to the naked eye. Determining the lesions at pixel level poses a challenge, as an individual pixel does not reflect any semantic entity. Furthermore, the computational cost of inspecting each pixel is expensive because the number of pixels is high even at low resolution. In this work, we propose a hybrid image processing method, Simple Linear Iterative Clustering with Gaussian Filter (SLIC-G), for the purpose of overcoming these pixel constraints. The SLIC-G image processing method is divided into two stages: (1) simple linear iterative clustering superpixel segmentation and (2) a Gaussian smoothing operation. In this way, a large number of new transformed datasets are generated and then used for model training. Finally, two performance evaluation metrics that are suitable for imbalanced diabetic retinopathy datasets were used to validate the effectiveness of the proposed SLIC-G. The results indicate that, in comparison to the results of prior published works, the proposed SLIC-G shows better performance on image classification of class-imbalanced diabetic retinopathy datasets. This research reveals the importance of image processing and how it influences the performance of deep learning networks. The proposed SLIC-G enhances pre-trained network performance by eliminating the local redundancy of an image, which preserves local structures but avoids over-segmented, noisy clips. It closes the research gap by introducing superpixel segmentation and a Gaussian smoothing operation as image processing methods in diabetic retinopathy-related tasks.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
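    A short sketch of the two SLIC-G stages named above using scikit-image: SLIC superpixel segmentation with each superpixel painted with its mean color, followed by a Gaussian smoothing operation. The segment count, compactness, sigma and placeholder image are illustrative values, not the paper's settings.
      from skimage.color import label2rgb
      from skimage.data import astronaut              # placeholder RGB image standing in for a fundus photograph
      from skimage.filters import gaussian
      from skimage.segmentation import slic
      from skimage.util import img_as_float

      img = img_as_float(astronaut())

      # Stage 1: SLIC superpixels, then replace each superpixel with its average color.
      segments = slic(img, n_segments=400, compactness=10, start_label=1)
      superpixel_img = label2rgb(segments, image=img, kind='avg', bg_label=0)

      # Stage 2: Gaussian smoothing of the superpixel image.
      smoothed = gaussian(superpixel_img, sigma=1.0, channel_axis=-1)
      print(smoothed.shape)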
  15. Abdullah KA, McEntee MF, Reed W, Kench PL
    J Med Imaging Radiat Oncol, 2016 Aug;60(4):459-68.
    PMID: 27241506 DOI: 10.1111/1754-9485.12473
    The aim of this systematic review is to evaluate the radiation dose reduction achieved using iterative reconstruction (IR) compared to filtered back projection (FBP) in coronary CT angiography (CCTA) and assess the impact on diagnostic image quality. A systematic search of seven electronic databases was performed to identify all studies using a developed keywords strategy. A total of 14 studies met the criteria and were included in the review analysis. The results showed that there was a significant reduction in radiation dose when using IR compared to FBP (P < 0.05). The mean ± SD differences in image noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were 1.05 ± 1.29 HU, 0.88 ± 0.56 and 0.63 ± 1.83, respectively. The mean ± SD percentages of overall image quality scores were 71.79 ± 12.29% (FBP) and 67.31 ± 22.96% (IR). The mean ± SD percentages of coronary segment analysis were 95.43 ± 2.57% (FBP) and 97.19 ± 2.62% (IR). In conclusion, this review analysis shows that CCTA with the use of IR leads to a significant reduction in radiation dose as compared to the use of FBP. Diagnostic image quality of IR at reduced dose (30-41%) is comparable to FBP at standard dose in the diagnosis of coronary artery disease (CAD).
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted/methods*
  16. Ng KH, Lau S
    Med Phys, 2015 Dec;42(12):7059-77.
    PMID: 26632060 DOI: 10.1118/1.4935141
    Breast density is a strong predictor of the failure of mammography screening to detect breast cancer and is a strong predictor of the risk of developing breast cancer. The many imaging options that are now available for imaging dense breasts show great promise, but there is still the question of determining which women are "dense" and what imaging modality is suitable for individual women. To date, mammographic breast density has been classified according to the Breast Imaging-Reporting and Data System (BI-RADS) categories from visual assessment, but this is known to be very subjective. Despite many research reports, the authors believe there has been a lack of physics-led and evidence-based arguments about what breast density actually is, how it should be measured, and how it should be used. In this paper, the authors attempt to start correcting this situation by reviewing the history of breast density research and the debates generated by the advocacy movement. The authors review the development of breast density estimation from pattern analysis to area-based analysis, and the current automated volumetric breast density (VBD) analysis. This is followed by a discussion on seeking the ground truth of VBD and mapping volumetric methods to BI-RADS density categories. The authors expect great improvement in VBD measurements that will satisfy the needs of radiologists, epidemiologists, surgeons, and physicists. The authors believe that they are now witnessing a paradigm shift toward personalized breast screening, which is going to see many more cancers being detected early, with the use of automated density measurement tools as an important component.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted/methods*
  17. Yazdani S, Yusof R, Riazi A, Karimian A
    Diagn Pathol, 2014;9:207.
    PMID: 25540017 DOI: 10.1186/s13000-014-0207-7
    Brain segmentation in magnetic resonance images (MRI) is an important stage in clinical studies for different purposes such as diagnosis, analysis, and 3-D visualization for treatment and surgical planning. MR image segmentation remains a challenging problem owing to different existing artifacts such as noise, bias field and partial volume effects, as well as the complexity of the images. Some of the automatic brain segmentation techniques are complex, and some of them are not sufficiently accurate for certain applications. The goal of this paper is to propose an algorithm that is more accurate and less complex.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
  18. Noor NM, Than JC, Rijal OM, Kassim RM, Yunus A, Zeki AA, et al.
    J Med Syst, 2015 Mar;39(3):22.
    PMID: 25666926 DOI: 10.1007/s10916-015-0214-6
    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding- and morphology-based segmentation coupled with feedback that detects large deviations with a corrective segmentation. This feedback is analogous to a control system, which allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. This feedback system encompasses a texture paradigm. The study included 48 male and 48 female patients, consisting of 15 normal and 81 abnormal patients. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by showing the comparison of the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). For the left lung, segmentation performance was 96.52% for the Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for the Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. For the right lung, segmentation performance was 97.24% for the Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted/methods*
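    A minimal sketch of the overlap metrics reported above (Jaccard Index and Dice Similarity) for a predicted versus ground-truth binary lung mask; the masks are tiny synthetic arrays, and the thresholding/morphology segmentation with texture-based feedback is not reproduced.
      import numpy as np

      def jaccard(pred, truth):
          pred, truth = pred.astype(bool), truth.astype(bool)
          return (pred & truth).sum() / (pred | truth).sum()

      def dice(pred, truth):
          pred, truth = pred.astype(bool), truth.astype(bool)
          return 2 * (pred & truth).sum() / (pred.sum() + truth.sum())

      truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True     # toy ground-truth lung mask
      pred = np.zeros((10, 10), dtype=bool);  pred[3:9, 2:8] = True      # toy automated segmentation
      print(jaccard(pred, truth), dice(pred, truth))                     # ~0.714 and ~0.833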
  19. Imran M, Hashim R, Noor Elaiza AK, Irtaza A
    ScientificWorldJournal, 2014;2014:752090.
    PMID: 25121136 DOI: 10.1155/2014/752090
    One of the major challenges in content-based image retrieval (CBIR) is to bridge the gap between low-level features and high-level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with a support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM-based RF and also to minimize the user's interaction with the system by reducing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10,908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique achieves a high accuracy rate within a small number of iterations.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
  20. Mookiah MR, Acharya UR, Koh JE, Chandran V, Chua CK, Tan JH, et al.
    Comput Biol Med, 2014 Oct;53:55-64.
    PMID: 25127409 DOI: 10.1016/j.compbiomed.2014.07.015
    Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently, there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet AMD. People with macular degeneration are mostly affected by dry AMD. Early symptoms of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists. This is a time-consuming, tiresome process, and hence an automated AMD screening tool can aid clinicians in their diagnosis significantly. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractional Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using the t-test, Kullback-Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristics (ROC) curve-based and Wilcoxon ranking methods in order to select optimum features, and classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimal ranked features using the SVM classifier for the private, ARIA and STARE datasets, respectively. This automated AMD detection system can be used for mass fundus image screening and can aid clinicians by making better use of their expertise on selected images that require further examination.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
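    A condensed sketch of the feature-ranking and classification back end described above, assuming scikit-learn and SciPy with random placeholder feature vectors: candidate features are ranked by a two-sample t-test and the top-ranked subset is fed to an SVM. The entropy/HOS/Gabor feature extraction and the other ranking schemes (KLD, CBBD, ROC, Wilcoxon) are not shown.
      import numpy as np
      from scipy.stats import ttest_ind
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 100))                 # placeholder: 200 images x 100 candidate features
      y = rng.integers(0, 2, size=200)                # 0 = normal, 1 = AMD (synthetic labels)

      # Rank features by the absolute two-sample t statistic between the classes.
      t_stats, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
      top = np.argsort(-np.abs(t_stats))[:42]         # e.g. 42 features, as for the private dataset

      scores = cross_val_score(SVC(kernel='rbf'), X[:, top], y, cv=10)
      print(scores.mean())                            # cross-validated accuracy on the toy data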