Displaying all 2 publications

  1. Kulathilake KASH, Abdullah NA, Sabri AQM, Lai KW
    Complex Intell Systems, 2023;9(3):2713-2745.
    PMID: 34777967 DOI: 10.1007/s40747-021-00405-x
    Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of the fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans with minimal X-ray flux to prevent patients from being exposed to high radiation. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the resulting images, introducing noise and artifacts across the image space. Various restoration methods have therefore been published over the past three decades to produce high-quality CT images from LDCT acquisitions. More recently, Deep Learning (DL)-based LDCT restoration approaches have become common alternatives to conventional methods owing to their data-driven nature, high performance, and fast execution. This study therefore elaborates on the role of DL techniques in LDCT restoration and critically reviews the applications of DL-based approaches to it. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, no previous review specifically addresses this topic.
  2. Kulathilake KASH, Abdullah NA, Bandara AMRR, Lai KW
    J Healthc Eng, 2021;2021:9975762.
    PMID: 34552709 DOI: 10.1155/2021/9975762
    Low-dose Computed Tomography (LDCT) has gained a great deal of attention in clinical procedures due to its ability to reduce the patient's exposure to X-ray radiation. However, reducing the X-ray dose increases quantum noise and artifacts in the acquired LDCT images. The result is visually low-quality LDCT images that adversely affect disease diagnosis and treatment planning in clinical procedures. Deep Learning (DL) has recently become the cutting-edge technology for LDCT denoising due to its high performance and data-driven execution compared with conventional denoising approaches. Although DL-based models perform fairly well in LDCT noise reduction, some noise components are still retained in the denoised images. One reason for this retention is the direct transmission of feature maps through the skip connections of contracting- and expanding-path-based DL models. Therefore, in this study, we propose a Generative Adversarial Network with Inception network modules (InNetGAN) as a solution for filtering the noise transmitted through skip connections while preserving the texture and fine structure of LDCT images. The proposed Generator is modeled on the U-net architecture, whose skip connections are modified with three different inception network modules that filter out the noise in the feature maps passing through them. Quantitative and qualitative experimental results show that the InNetGAN model reduces noise and preserves subtle structures and texture details better than other state-of-the-art denoising algorithms.
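    The second abstract's core idea is to filter encoder feature maps with parallel inception-style branches before they cross the skip connection to the decoder, rather than passing them through unchanged. A minimal NumPy sketch of that idea follows; the fixed 1x1 identity, 3x3 mean, and 5x5 mean kernels stand in for the learned convolution weights of the paper's inception modules, and all function names and kernel choices here are illustrative assumptions, not taken from the published InNetGAN code.

    ```python
    import numpy as np

    def conv2d_same(x, k):
        """Zero-padded 'same' 2D convolution of a single-channel map x with kernel k."""
        kh, kw = k.shape
        ph, pw = kh // 2, kw // 2
        xp = np.pad(x, ((ph, ph), (pw, pw)))
        out = np.zeros(x.shape, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
        return out

    def inception_skip(feature_map):
        """Filter an encoder feature map through three parallel branches
        (1x1 pass-through, 3x3 mean, 5x5 mean) and stack the results,
        mimicking an inception-module skip connection. Returns (3, H, W)."""
        b1 = feature_map.astype(float)                       # 1x1 branch: identity
        b3 = conv2d_same(feature_map, np.full((3, 3), 1 / 9))   # 3x3 smoothing branch
        b5 = conv2d_same(feature_map, np.full((5, 5), 1 / 25))  # 5x5 smoothing branch
        return np.stack([b1, b3, b5], axis=0)
    ```

    In the actual architecture, the stacked branch outputs would be concatenated with the decoder's upsampled features at the matching resolution, letting the network suppress noise in the skipped features instead of injecting it directly into the reconstruction.
    
    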