Displaying all 5 publications

  1. Ahmad Fauzi MF, Khansa I, Catignani K, Gordillo G, Sen CK, Gurcan MN
    Comput Biol Med, 2015 May;60:74-85.
    PMID: 25756704 DOI: 10.1016/j.compbiomed.2015.02.015
    An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than US$25 billion and countless hours spent annually on all aspects of chronic wound care. There is a need for an intelligent software tool to analyze wound images, characterize wound tissue composition, measure wound size, and monitor changes in the wound between visits. Performed manually, this process is very time-consuming and subject to intra- and inter-reader variability. In this work, our objective is to develop methods to segment, measure, and characterize clinically presented chronic wounds from photographic images. The first step of our method is to generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the segmentation process using either optimal thresholding or region growing. The red, yellow, and black probability maps are designed to handle the granulation, slough, and eschar tissues, respectively, while the white probability map is designed to detect the white label card used for measurement calibration. The innovative aspects of this work include defining a four-dimensional probability map specific to wound characteristics, a computationally efficient method to segment wound images utilizing the probability map, and auto-calibration of wound measurements using the content of the image. These methods were applied to 80 wound images captured in a clinical setting at the Ohio State University Comprehensive Wound Center, with the ground truth independently generated by the consensus of at least two clinicians. While the mean inter-reader agreement varied between 67.4% and 84.3%, the computer achieved an average accuracy of 75.1%.
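    As an illustration of the RYKW idea, the sketch below computes a simple four-channel probability map from per-pixel distances to four reference colors and then segments by thresholding the most probable class. The reference colors, the Gaussian kernel, and the threshold are illustrative assumptions, not the paper's actual model.

        import numpy as np

        # Illustrative reference colors (assumed RGB values, not the paper's).
        REF_COLORS = {
            "red":    (180, 40, 40),     # granulation tissue
            "yellow": (200, 180, 60),    # slough
            "black":  (40, 30, 30),      # eschar
            "white":  (240, 240, 240),   # calibration label card
        }

        def rykw_probability_map(image, sigma=60.0):
            """Four-channel probability map for an H x W x 3 RGB image
            (float, 0-255). A Gaussian kernel over RGB distance stands
            in for the paper's actual probability model."""
            h, w, _ = image.shape
            probs = np.zeros((h, w, len(REF_COLORS)))
            for k, ref in enumerate(REF_COLORS.values()):
                dist2 = np.sum((image - np.array(ref, float)) ** 2, axis=2)
                probs[:, :, k] = np.exp(-dist2 / (2 * sigma ** 2))
            # Normalize so the four probabilities sum to 1 at each pixel.
            return probs / probs.sum(axis=2, keepdims=True)

        def segment_by_threshold(probs, threshold=0.5):
            """Label each pixel with its most probable class, leaving
            low-confidence pixels unlabeled (-1)."""
            labels = probs.argmax(axis=2)
            labels[probs.max(axis=2) < threshold] = -1
            return labels
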
  2. Wan Ahmad WS, Zaki WM, Ahmad Fauzi MF
    Biomed Eng Online, 2015;14:20.
    PMID: 25889188 DOI: 10.1186/s12938-015-0014-8
    Unsupervised lung segmentation is a mandatory step in developing a Content-Based Medical Image Retrieval System (CBMIRS) for chest radiographs (CXR). The purpose of this study is to present a robust, fully automated unsupervised method for lung segmentation of standard and mobile chest radiographs.
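    The abstract does not detail the algorithm itself; as a rough, clearly hypothetical stand-in, the sketch below shows a classical unsupervised baseline (Otsu thresholding followed by morphological cleanup and selection of the two largest components) for isolating the lung fields in a grayscale radiograph. All parameters are assumptions.

        import numpy as np
        from skimage import filters, measure, morphology

        def segment_lungs_unsupervised(cxr):
            """Illustrative unsupervised lung segmentation for a
            grayscale CXR (H x W float array in [0, 1]); a classical
            baseline, not the paper's actual pipeline."""
            # Lungs are radiolucent (dark), so keep pixels below the
            # automatically chosen Otsu threshold.
            mask = cxr < filters.threshold_otsu(cxr)
            # Remove speckles and fill holes inside the lung fields.
            mask = morphology.remove_small_objects(mask, min_size=1000)
            mask = morphology.remove_small_holes(mask, area_threshold=1000)
            # Keep the two largest components (left and right lungs).
            labeled = measure.label(mask)
            regions = sorted(measure.regionprops(labeled),
                             key=lambda r: r.area, reverse=True)[:2]
            return np.isin(labeled, [r.label for r in regions])
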
  3. Mohamad Sehmi MN, Ahmad Fauzi MF, Wan Ahmad WSHM, Wan Ling Chan E
    F1000Res, 2021;10:1057.
    PMID: 37767358 DOI: 10.12688/f1000research.73161.2
    Background: Pancreatic cancer is one of the deadliest forms of cancer. The cancer grade indicates how aggressively the cancer will spread and guides doctors in making a proper prognosis and choosing treatment. The current method of grading pancreatic cancer, manual examination of the cancerous tissue following a biopsy, is time-consuming and often results in misdiagnosis and thus incorrect treatment. This paper presents an automated grading system for pancreatic cancer from pathology images, developed by comparing deep learning models on two different pathological stains. Methods: A transfer-learning technique was adopted, testing 14 different ImageNet pre-trained models, which were fine-tuned on our dataset. Results: From the experiment, DenseNet models appeared to be the best at classifying the validation set, with up to 95.61% accuracy in grading pancreatic cancer despite the small sample set. Conclusions: To the best of our knowledge, this is the first work on grading pancreatic cancer based on pathology images. Previous works have either focused only on detection (benign or malignant) or on radiology images (computerized tomography [CT], magnetic resonance imaging [MRI], etc.). The proposed system can be very useful to pathologists in facilitating an automated or semi-automated cancer grading system, which can address the problems found in manual grading.
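    A minimal sketch of the transfer-learning setup described above, assuming PyTorch/torchvision: an ImageNet-pretrained DenseNet has its classifier head replaced and is fine-tuned end-to-end on stained-tissue patches. The number of grades, the DenseNet-201 variant, and the learning rate are assumptions, not the paper's reported configuration.

        import torch
        import torchvision

        NUM_GRADES = 4  # assumed number of grade classes

        # Load an ImageNet-pretrained backbone and replace its head.
        model = torchvision.models.densenet201(
            weights=torchvision.models.DenseNet201_Weights.IMAGENET1K_V1)
        model.classifier = torch.nn.Linear(
            model.classifier.in_features, NUM_GRADES)

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = torch.nn.CrossEntropyLoss()

        def train_step(images, labels):
            """One fine-tuning step on a batch of pathology patches."""
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()
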
  4. Ahmad Fauzi MF, Wan Ahmad WSHM, Jamaluddin MF, Lee JTH, Khor SY, Looi LM, et al.
    Diagnostics (Basel), 2022 Dec 08;12(12).
    PMID: 36553102 DOI: 10.3390/diagnostics12123093
    Hormone receptor status is determined primarily to identify breast cancer patients who may benefit from hormonal therapy. Current clinical practice for this testing, using either the Allred score or the H-score, is still based on laborious manual counting and estimation of the amount and intensity of positively stained cancer cells in immunohistochemistry (IHC)-stained slides. This work integrates a cell detection and classification workflow for breast carcinoma estrogen receptor (ER)-IHC-stained images and presents an automated evaluation system. The system first detects all cells within the specific regions and classifies them as negatively, weakly, moderately, or strongly stained, followed by Allred scoring for ER status evaluation. The generated Allred score relies heavily on accurate cell detection and classification and is compared against pathologists' manual estimation. Experiments on 40 whole-slide images show 82.5% agreement on hormonal treatment recommendation, which we believe could be further improved with an advanced learning model and enhancements to address cases with 0% ER status. This promising system can automate an exhaustive exercise, providing fast and reliable assistance to pathologists and medical personnel. It has the potential to improve the overall standard of prognostic reporting for cancer patients, benefiting pathologists, patients, and the public at large.
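    The Allred score adds a proportion score (0-5) to an intensity score (0-3). The sketch below computes it from the four cell counts produced by a detection/classification stage like the one above; the proportion cut-offs follow the standard published scheme, while rounding the mean intensity of the positive cells is an assumption about this system's implementation.

        def proportion_score(fraction_positive):
            """Standard Allred proportion-score (0-5) cut-offs."""
            if fraction_positive == 0:    return 0
            if fraction_positive <= 0.01: return 1  # <= 1%
            if fraction_positive <= 0.10: return 2  # 1-10%
            if fraction_positive <= 1/3:  return 3  # 10-33%
            if fraction_positive <= 2/3:  return 4  # 33-66%
            return 5                                # > 66%

        def allred_score(n_neg, n_weak, n_mod, n_strong):
            """Combine per-class cell counts into an Allred score
            (proportion score + intensity score, 0-8 total)."""
            total = n_neg + n_weak + n_mod + n_strong
            positive = n_weak + n_mod + n_strong
            ps = proportion_score(positive / total) if total else 0
            # Intensity score: mean intensity of the positive cells
            # (1 = weak, 2 = moderate, 3 = strong; 0 if none).
            iscore = (round((n_weak + 2 * n_mod + 3 * n_strong) / positive)
                      if positive else 0)
            return ps + iscore

    A total score of 3 or more is commonly treated as ER-positive, which is what would drive the hormonal-treatment recommendation mentioned above.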
  5. Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, et al.
    Cancers (Basel), 2024 Nov 11;16(22).
    PMID: 39594748 DOI: 10.3390/cancers16223794
    Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) serves as an automated alternative, employing permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between "Amplified" and "Non-Amplified" regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze than hematoxylin and eosin (H&E)-stained slides. Our proposed approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for regional identification and localization. Subsequently, pseudo-color maps representing each class are overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. The robustness of the model was further evaluated through k-fold cross-validation, yielding an average accuracy of 98%, with metrics reported alongside 95% confidence intervals to ensure statistical reliability. This method shows significant promise for clinical applications, particularly in assessing HER2 expression status in HER2-SISH histopathology images. It provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thus enhancing diagnostic outcomes for breast cancer treatment.
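    A minimal sketch of the two-stage process, assuming torchvision's ImageNet-pretrained ViT-B/16: the classification head is replaced for the three region classes (stage one), and classified patches can then be mapped to pseudo-colors for the WSI overlay (stage two). The patch size, class ordering, and training configuration are assumptions.

        import torch
        import torchvision

        CLASSES = ["Normal", "Amplified", "Non-Amplified"]

        # Stage 1: fine-tune an ImageNet-pretrained ViT on annotated regions.
        model = torchvision.models.vit_b_16(
            weights=torchvision.models.ViT_B_16_Weights.IMAGENET1K_V1)
        model.heads.head = torch.nn.Linear(
            model.heads.head.in_features, len(CLASSES))

        @torch.no_grad()
        def classify_patches(model, patches):
            """Stage 2: classify a batch of 3 x 224 x 224 patches from
            the WSI; the returned class indices can be mapped to
            pseudo-colors and overlaid on the reconstructed slide."""
            model.eval()
            return model(patches).argmax(dim=1)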