Displaying publications 21 - 40 of 255 in total

  1. Atee HA, Ahmad R, Noor NM, Rahma AM, Aljeroudi Y
    PLoS One, 2017;12(2):e0170329.
    PMID: 28196080 DOI: 10.1371/journal.pone.0170329
    In image steganography, determining the optimum location for embedding the secret message precisely with minimum distortion of the host medium remains a challenging issue. Yet, an effective approach for selecting the best embedding location with the least deformation is far from being achieved. To attain this goal, we propose a novel high-performance approach for image steganography, where the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. This ELM is first trained on part of an image or any host medium before being tested in the regression mode. This allowed us to choose the optimal location for embedding the message with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting during training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, fusion matrices, and mean square error (MSE). The modified ELM is found to outperform existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to existing state-of-the-art methods.
    Matched MeSH terms: Image Processing, Computer-Assisted*
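The distortion metrics named in the abstract above (MSE, and the PSNR commonly derived from it) are standard and easy to sketch; this minimal pure-Python illustration is generic, not the authors' implementation:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)

cover = [120, 121, 119, 120]          # cover-image pixels
stego = [120, 122, 119, 121]          # two pixels shifted by 1 after embedding
print(mse(cover, stego))              # 0.5
print(round(psnr(cover, stego), 2))   # 51.14
```

A PSNR above roughly 40 dB is usually taken to mean the embedding is imperceptible, which is why steganography papers report it alongside SSIM.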
  2. Chong JWR, Khoo KS, Chew KW, Vo DN, Balakrishnan D, Banat F, et al.
    Bioresour Technol, 2023 Feb;369:128418.
    PMID: 36470491 DOI: 10.1016/j.biortech.2022.128418
    The identification of microalgae species is an important tool in scientific research and commercial applications, both to prevent harmful algae blooms (HABs) and to recognize potential microalgae strains for the bioaccumulation of valuable bioactive ingredients. The aim of this study is to incorporate rapid, high-accuracy, reliable, low-cost, simple, and state-of-the-art identification methods, thereby increasing the possibility of developing recognition applications that could identify toxin-producing and valuable microalgae strains. Recently, deep learning (DL) has brought the study of microalgae species identification to a much higher level of efficiency and accuracy. Accordingly, this review paper emphasizes the significance of microalgae identification and surveys various machine learning algorithms for image classification, followed by image pre-processing techniques, feature extraction, and selection for further classification accuracy. Future prospects regarding the challenges and improvements of potential DL classification model development, application in microalgae recognition, and image capturing technologies are discussed accordingly.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  3. Chong JWR, Khoo KS, Chew KW, Ting HY, Show PL
    Biotechnol Adv, 2023;63:108095.
    PMID: 36608745 DOI: 10.1016/j.biotechadv.2023.108095
    Identification of microalgae species is of importance due to the rise of harmful algae blooms affecting both the aquatic habitat and human health. Despite this occurrence, microalgae have been identified as a green biomass and alternative source due to their promising accumulation of bioactive compounds that play a significant role in many industrial applications. Recently, microalgae species identification has been conducted through DNA analysis and various microscopy techniques such as light, scanning electron, transmission electron, and atomic force microscopy. The aforementioned procedures have encouraged researchers to consider alternative ways due to limitations such as costly validation, the need for skilled taxonomists, prolonged analysis, and low accuracy. This review highlights the potential innovations in digital microscopy, incorporating both hardware and software, that can produce reliable recognition, detection, enumeration, and real-time acquisition of microalgae species. Several steps such as image acquisition, processing, feature extraction, and selection are discussed, for the purpose of generating high image quality by removing unwanted artifacts and noise from the background. Identification of microalgae species is then performed by reliable image classification through machine learning as well as deep learning algorithms such as artificial neural networks, support vector machines, and convolutional neural networks. Overall, this review provides comprehensive insights into numerous possibilities for microalgae image identification, image pre-processing, and machine learning techniques to address the challenges in developing a robust digital classification tool for the future.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  4. Abisha S, Mutawa AM, Murugappan M, Krishnan S
    PLoS One, 2023;18(4):e0284021.
    PMID: 37018344 DOI: 10.1371/journal.pone.0284021
    Different diseases are observed in vegetables, fruits, cereals, and commercial crops by farmers and agricultural experts. Nonetheless, this evaluation process is time-consuming, and initial symptoms are primarily visible at microscopic levels, limiting the possibility of an accurate diagnosis. This paper proposes an innovative method for identifying and classifying infected brinjal leaves using Deep Convolutional Neural Networks (DCNN) and Radial Basis Feed Forward Neural Networks (RBFNN). We collected 1100 images of brinjal leaf disease caused by five different pathogens (Pseudomonas solanacearum, Cercospora solani, Alternaria melongenea, Pythium aphanidermatum, and Tobacco Mosaic Virus) and 400 images of healthy leaves from an agricultural farm in India. First, the original plant leaf image is preprocessed by a Gaussian filter to reduce the noise and improve the quality of the image through image enhancement. A segmentation method based on expectation-maximization (EM) is then utilized to segment the leaf's diseased regions. Next, the discrete Shearlet transform is used to extract the main features of the images, such as texture, color, and structure, which are then merged to produce vectors. Lastly, DCNN and RBFNN are used to classify brinjal leaves based on their disease types. The DCNN achieved a mean accuracy of 93.30% (with fusion) and 76.70% (without fusion) compared to the RBFNN (82% without fusion, 87% with fusion) in classifying leaf diseases.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
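The Gaussian pre-filtering step in the pipeline above can be sketched in one dimension; the 3-tap binomial kernel [1, 2, 1]/4 used here is a common small-Gaussian approximation and is an assumption, not the kernel from the paper:

```python
def gaussian_blur_1d(signal, passes=1):
    """Smooth with the 3-tap binomial kernel [1, 2, 1]/4, a small Gaussian
    approximation; edges are handled by replicating the border samples."""
    out = list(signal)
    for _ in range(passes):
        padded = [out[0]] + out + [out[-1]]
        out = [0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
               for i in range(1, len(padded) - 1)]
    return out

row = [10, 10, 10, 90, 10, 10, 10]   # one noisy spike in a flat scan line
print(gaussian_blur_1d(row))         # [10.0, 10.0, 30.0, 50.0, 30.0, 10.0, 10.0]
```

The spike is spread out and attenuated, which is exactly why such a filter precedes segmentation: isolated noise pixels no longer dominate local statistics. A 2-D blur applies the same kernel along rows and then columns.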
  5. Al-Masni MA, Lee S, Al-Shamiri AK, Gho SM, Choi YH, Kim DH
    Comput Biol Med, 2023 Feb;153:106553.
    PMID: 36641933 DOI: 10.1016/j.compbiomed.2023.106553
    Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period, where the earliest echoes show less contrast between various tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge with unified training parameters, thereby simultaneously reducing the motion artifacts of all echoes. This is accomplished by developing a new scheme that boosts a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD allows the network to share information and gain an understanding of the correlations between the multiple echoes. The main purpose of this work is to correct the motion artifacts and maintain the image quality and structural details of all motion-corrupted echoes towards generating high-resolution susceptibility-enhanced contrast images, i.e., SWI, using a weighted average of multi-echo motion-corrected acquisitions. We also compare various potential strategies that might be used to address the problem of reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, reducing the severity of motion artifacts and improving the overall clinical image quality of all echoes with their associated SWI maps. Significant improvement of image quality is observed using both motion-simulated test data and actual volunteer data with various motion severity strengths. Eventually, by enhancing the overall image quality, the proposed network can increase the effectiveness of the physicians' capability to evaluate and correctly diagnose brain MR images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  6. Aly CA, Abas FS, Ann GH
    Sci Prog, 2021;104(2):368504211005480.
    PMID: 33913378 DOI: 10.1177/00368504211005480
    INTRODUCTION: Action recognition is a challenging time series classification task that has received much attention in the recent past due to its importance in critical applications, such as surveillance, visual behavior study, topic discovery, security, and content retrieval.

    OBJECTIVES: The main objective of the research is to develop robust, high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, with the most effective features analyzed to reach the objective, followed by simple, high-performance machine learning algorithms.

    METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.

    RESULTS: The proposed schemes are tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes produced highly accurate video analysis results compared to those of other works based on four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann.

    CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared to other state-of-the-art methods. The Second Scheme, in particular, gives results comparable to other benchmarked approaches.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  7. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded interesting performance, capable of correctly detecting abnormalities in medical digital images, making them surpass human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by a class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets for training deep learning models limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to mediate this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, the architectural distortion and breast asymmetry in digital mammograms are sparsely distributed across most publicly available datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. 
The proposed GAN model was applied to MIAS datasets, and the performance evaluation yielded a competitive accuracy for the synthesized samples. In addition, the quality of the images generated was also evaluated using PSNR, SSIM, FSIM, BRISQUE, PQUE, NIQUE, FID, and geometry scores. The results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  8. Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B
    J Healthc Eng, 2022;2022:6566982.
    PMID: 35422980 DOI: 10.1155/2022/6566982
    The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide tested positive for COVID-19, and 5.48 million people died due to COVID-19 up to 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach to identifying ground glass opacity (GGO), the region of interest in CT images caused by the coronavirus, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The problem with segmentation is that the GGO often appears indistinguishable from a healthy lung in the initial stages of COVID-19. To cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added to establish the connection between the encoder and decoder pipeline. This gives the model a strong capacity to segment the GGO in COVID-19 cases; the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results obtained for accuracy, recall, precision, dice-coefficient, F1-score, and IOU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than that obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
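The dice-coefficient and IOU figures reported above are standard overlap measures between a predicted binary mask and the ground truth; a minimal generic sketch:

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for two binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    psum, tsum = sum(pred), sum(truth)
    union = psum + tsum - inter
    dice = 2 * inter / (psum + tsum) if psum + tsum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]   # predicted lesion pixels
truth = [0, 1, 1, 1, 0, 0]   # annotated lesion pixels
dice, iou = dice_iou(pred, truth)
print(round(dice, 3), round(iou, 3))   # 0.667 0.5
```

Dice is always at least as large as IoU on the same masks (Dice = 2·IoU/(1+IoU)), which is worth remembering when comparing papers that report different overlap metrics.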
  9. Liu H, Huang J, Li Q, Guan X, Tseng M
    Artif Intell Med, 2024 Feb;148:102776.
    PMID: 38325925 DOI: 10.1016/j.artmed.2024.102776
    This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing the manual segmentation method that is both time-consuming and labor-intensive. There are many challenges in automatically and finely segmenting sub-regions from multi-sequence magnetic resonance images because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and sub-region size. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism to the automatic segmentation algorithm, which focuses on multi-scale spatial details and context information. The proposed method has been tested on the public benchmark BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 datasets. The Dice scores on the enhanced tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on BraTS 2018; 77.14%, 89.58%, and 83.33% on BraTS 2019; 77.80%, 90.04%, and 83.18% on BraTS 2020; and 83.48%, 90.70%, and 88.94% on BraTS 2021, offering performance on par with that of state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduces the requirements for experimental equipment, and the average time taken to segment one case is only 1.48 s; these two benefits render the proposed network highly competitive for clinical practice.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  10. Mustafa WA, Yazid H, Alquran H, Al-Issa Y, Junaini S
    PLoS One, 2024;19(6):e0306010.
    PMID: 38941319 DOI: 10.1371/journal.pone.0306010
    Weld defect inspection is an essential aspect of testing in industry. From a human viewpoint, manual inspection can make appropriate justification more difficult and lead to incorrect identification during weld defect detection. Weld defect inspection uses X-radiography testing, which is now mostly outdated. Recently, numerous researchers have utilized X-radiography digital images to inspect defects. As a result, for error-free inspection, an autonomous weld detection and classification system is required. One of the most difficult issues in the field of image processing, particularly for enhancing image quality, is contrast variation and luminosity. Enhancement is carried out by adjusting the brightness of dark or bright intensities to boost segmentation performance and image quality. Many different approaches have recently been put forth to equalize contrast variation and luminosity. In this research, a novel approach called Hybrid Statistical Enhancement (HSE), based on a direct strategy using statistical data, is proposed. The HSE method divides each pixel into three groups, the foreground, border, and problematic region, using the mean and standard deviation of a global and local neighborhood (luminosity and contrast). To illustrate the impact of the HSE method on the segmentation or detection stage, weld defect image datasets were used. Bernsen's and Otsu's methods are the two segmentation techniques utilized. The findings from the objective and visual evaluations demonstrated that the HSE approach can automatically improve segmentation output while effectively enhancing contrast variation and normalizing luminosity. In comparison to the Homomorphic Filter (HF) and Difference of Gaussian (DoG) approaches, the segmentation results for HSE images had the lowest Misclassification Error (ME). Every quantitative result showed an increase after the HSE method was applied in the segmentation stage; for example, accuracy increased from 64.171 to 84.964. In summary, the HSE method has produced an effective and efficient outcome for background correction as well as improving the quality of images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
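The HSE idea of grouping pixels by mean and standard deviation can be illustrated roughly as follows; the abstract does not give the exact grouping rules, so the three-way split and the threshold factor k below are purely hypothetical:

```python
import statistics

def classify_pixels(pixels, k=1.0):
    """Label each pixel against the global mean +/- k*std: 'bright'
    (candidate foreground), 'dark' (background), or 'mid' (borderline).
    The actual HSE grouping rules are not published in the abstract;
    this three-way mean/std split is only illustrative."""
    mu = statistics.mean(pixels)
    sigma = statistics.pstdev(pixels)
    labels = []
    for p in pixels:
        if p > mu + k * sigma:
            labels.append("bright")
        elif p < mu - k * sigma:
            labels.append("dark")
        else:
            labels.append("mid")
    return labels

print(classify_pixels([10, 12, 11, 200, 9, 13]))
# ['mid', 'mid', 'mid', 'bright', 'mid', 'mid']
```

In a full HSE-style method the same statistics would also be computed over a local neighborhood, so a pixel's label depends on both global luminosity and local contrast.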
  11. Sharma N, Gupta S, Gupta D, Gupta P, Juneja S, Shah A, et al.
    PLoS One, 2024;19(5):e0302880.
    PMID: 38718092 DOI: 10.1371/journal.pone.0302880
    Gastrointestinal (GI) cancer is the leading tumour of the gastrointestinal tract and the fourth most significant cause of cancer death in men and women. The common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumor while avoiding healthy organs. Delivering high doses of X-rays requires a system for accurately segmenting the GI tract organs. The study presents a UMobileNetV2 model for semantic segmentation of the small and large intestine and stomach in MRI images of the GI tract. The model uses MobileNetV2 as an encoder in the contraction path and UNet layers as a decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to enhance the pace of cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models: Xception, ResNet 101, and NASNet mobile, which are used as encoders in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models. It obtains a dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  12. Ametefe DS, Sarnin SS, Ali DM, Ametefe GD, John D, Aliu AA, et al.
    Int J Lab Hematol, 2024 Oct;46(5):837-849.
    PMID: 38726705 DOI: 10.1111/ijlh.14305
    INTRODUCTION: Acute lymphoblastic leukemia (ALL) presents a formidable challenge in hematological malignancies, necessitating swift and precise diagnostic techniques for effective intervention. The conventional manual microscopy of blood smears, although widely practiced, suffers from significant limitations including labor-intensity and susceptibility to human error, particularly in distinguishing the subtle differences between normal and leukemic cells.

    METHODS: To overcome these limitations, our research introduces the ALLDet classifier, an innovative tool employing deep transfer learning for the automated analysis and categorization of ALL from White Blood Cell (WBC) nuclei images. Our investigation encompassed the evaluation of nine state-of-the-art pre-trained convolutional neural network (CNN) models, namely VGG16, VGG19, ResNet50, ResNet101, DenseNet121, DenseNet201, Xception, MobileNet, and EfficientNetB3. We augmented this approach by incorporating a sophisticated contour-based segmentation technique, derived from the Chan-Vese model, aimed at the meticulous segmentation of blast cell nuclei in blood smear images, thereby enhancing the accuracy of our analysis.

    RESULTS: The empirical assessment of these methodologies underscored the superior performance of the EfficientNetB3 model, which demonstrated exceptional metrics: a recall of 98.5%, precision of 95.86%, F1-score of 97.16%, and an overall accuracy rate of 97.13%. The Chan-Vese model's adaptability to the irregular shapes of blast cells and its noise-resistant segmentation capability were key to capturing the complex morphological changes essential for accurate segmentation.

    CONCLUSION: The combined application of the ALLDet classifier, powered by EfficientNetB3, with our advanced segmentation approach, emerges as a formidable advancement in the early detection and accurate diagnosis of ALL. This breakthrough not only signifies a pivotal leap in leukemia diagnostic methodologies but also holds the promise of significantly elevating the standards of patient care through the provision of timely and precise diagnoses. The implications of this study extend beyond immediate clinical utility, paving the way for future research to further refine and enhance the capabilities of artificial intelligence in medical diagnostics.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  13. Sirimewan D, Kunananthaseelan N, Raman S, Garcia R, Arashpour M
    Waste Manag, 2024 Dec 15;190:149-160.
    PMID: 39321600 DOI: 10.1016/j.wasman.2024.09.018
    Optimized and automated methods for handling construction and demolition waste (CDW) are crucial for improving the resource recovery process in waste management. Automated waste recognition is a critical step in this process, and it relies on robust image segmentation techniques. Prompt-guided segmentation methods provide promising results for specific user needs in image recognition. However, the current state-of-the-art segmentation methods trained on generic images perform unsatisfactorily on CDW recognition tasks, indicating a domain gap. To address this gap, a user-guided segmentation pipeline is developed in this study that leverages prompts such as bounding boxes, points, and text to segment CDW in cluttered environments. The adopted approach achieves a class-wise performance of around 70% in several waste categories, surpassing the state-of-the-art algorithms by 9% on average. This method allows users to create accurate segmentations by drawing a bounding box, clicking, or providing a text prompt, minimizing the time spent on detailed annotations. Integrating this human-machine system as a user-friendly interface into material recovery facilities enhances the monitoring and processing of waste, leading to better resource recovery outcomes in waste management.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  14. Dai L, Md Johar MG, Alkawaz MH
    Sci Rep, 2024 Nov 21;14(1):28885.
    PMID: 39572780 DOI: 10.1038/s41598-024-80441-y
    This work investigates the diagnostic value of a deep learning-based magnetic resonance imaging (MRI) image segmentation (IS) technique for shoulder joint injuries (SJIs) in swimmers. A novel multi-scale feature fusion network (MSFFN) is developed by optimizing and integrating the AlexNet and U-Net algorithms for the segmentation of MRI images of the shoulder joint. The model is evaluated using metrics such as the Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity (SE). A cohort of 52 swimmers with SJIs from Guangzhou Hospital serves as the subject group for this study, wherein the accuracy of the developed shoulder joint MRI IS model in diagnosing swimmers' SJIs is analyzed. The results reveal that the DSC for segmenting joint bones in MRI images based on the MSFFN algorithm is 92.65%, with a PPV of 95.83% and an SE of 96.30%. Similarly, the DSC for segmenting humerus bones in MRI images is 92.93%, with a PPV of 95.56% and an SE of 92.78%. The MRI IS algorithm exhibits an accuracy of 86.54% in diagnosing types of SJIs in swimmers, surpassing the conventional diagnostic accuracy of 71.15%. The consistency between the diagnostic results for complete tear, superior surface tear, inferior surface tear, and intratendinous tear of SJIs in swimmers and the arthroscopic diagnostic results yields a Kappa value of 0.785 and an accuracy of 87.89%. These findings underscore the significant diagnostic value and potential of the MRI IS technique based on the MSFFN algorithm in diagnosing SJIs in swimmers.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  15. Yeap ZX, Sim KS, Tso CP
    Microsc Res Tech, 2019 Apr;82(4):402-414.
    PMID: 30575192 DOI: 10.1002/jemt.23181
    Image processing is introduced to remove or reduce the noise and unwanted signal that deteriorate the quality of an image. Here, a single level two-dimensional wavelet transform is applied to the image in order to obtain the wavelet transform sub-band signal of an image. An estimation technique to predict the noise variance in an image is proposed, which is then fed into a Wiener filter to filter away the noise from the sub-band of the image. The proposed filter is called adaptive tuning piecewise cubic Hermite interpolation with Wiener filter in the wavelet domain. The performance of this filter is compared with four existing filters: median filter, Gaussian smoothing filter, two level wavelet transform with Wiener filter and adaptive noise Wiener filter. Based on the results, the adaptive tuning piecewise cubic Hermite interpolation with Wiener filter in wavelet domain has better performance than the other four methods.
    Matched MeSH terms: Image Processing, Computer-Assisted
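The wavelet-domain noise-variance estimate that feeds a Wiener filter, as in the entry above, can be sketched with a single-level Haar transform and the robust median estimator; the paper's actual wavelet choice and its adaptive piecewise cubic Hermite tuning are not reproduced here, so this is only a generic illustration:

```python
import math
import statistics

def haar_1d(signal):
    """Single-level orthonormal Haar transform: pairwise averages (approximation)
    and pairwise differences (detail). Assumes an even-length signal."""
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def estimate_noise_sigma(detail):
    """Robust noise estimate from the detail sub-band:
    sigma ~ median(|detail|) / 0.6745 (the classic MAD estimator)."""
    return statistics.median(abs(d) for d in detail) / 0.6745

def wiener_shrink(detail, sigma):
    """Empirical Wiener shrinkage: scale coefficients by (var - sigma^2)/var,
    clipped at zero, so sub-bands dominated by noise are suppressed."""
    sig2 = sigma ** 2
    var = sum(d * d for d in detail) / len(detail)
    gain = max(var - sig2, 0.0) / var if var else 0.0
    return [gain * d for d in detail]

noisy = [10, 12, 10, 8, 11, 9, 10, 12]   # flat signal of ~10 plus noise
approx, detail = haar_1d(noisy)
sigma = estimate_noise_sigma(detail)
print(round(sigma, 3))                   # 2.097
print(wiener_shrink(detail, sigma))      # pure-noise details shrink to zero
```

Because the detail band here contains only noise-scale fluctuations, the estimated variance swamps the signal variance and the Wiener gain collapses to zero, which is the intended behavior of the filter on homogeneous regions.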
  16. Muhammad Naim Mazani, Shuzlina Abdul-Rahman, Sofianita Mutalib
    ESTEEM Academic Journal, 2020;16(1):74-85.
    MyJurnal
    This study presents pre-processing methods for lane detection using camera and Light Detection and Ranging (LiDAR) sensor technologies. Standard image processing methods are not suitable for complicated roads with various signs on the ground. Thus, determining the right techniques for pre-processing such data is a challenge. The objectives of this study are to pre-process the scanned images and apply an image recognition algorithm for lane detection. The study employed the Canny Edge Detection and Hough Transform algorithms on several sets of images. Different regions of interest were experimented with to find the optimal one. The experimental results showed that the proposed algorithms can be practical in effectively detecting road lines and generating lane detections.
    Matched MeSH terms: Image Processing, Computer-Assisted
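The Hough Transform voting step used for lane detection above can be sketched directly: edge points (as produced by Canny Edge Detection) vote in (theta, rho) space, and the strongest accumulator bin gives the dominant line. The coarse angle step and integer rho quantization below are simplifications for illustration:

```python
import math

def hough_lines(points, thetas_deg=range(0, 180, 15)):
    """Vote in (theta, rho) space: each edge point (x, y) votes for every
    line rho = x*cos(theta) + y*sin(theta) that could pass through it;
    the bin with the most votes is the strongest line."""
    acc = {}
    for x, y in points:
        for t in thetas_deg:
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)

# Edge pixels of a vertical lane mark at x = 5, as from a Canny edge map
edge_points = [(5, y) for y in range(8)]
print(hough_lines(edge_points))   # (0, 5): theta = 0 degrees, rho = 5
```

All eight points fall on x = 5, so they all vote for the same (theta=0, rho=5) bin, while their votes at other angles scatter across many bins; that concentration is what makes the transform robust to gaps and noise in the edge map.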
  17. Nur Aizzah Hanan Kamarul Zaman, Mohd Zulfaezal Che Azemin
    MyJurnal
    Previous work employed digital image analysis using fully-automated computer software to quantify changes in meibomian gland (MG) loss. However, semi-automated software is more favorable for clinical applications, as it allows clinicians to manually delete undesired noise or artifacts.
    Matched MeSH terms: Image Processing, Computer-Assisted
  18. Li M, Mathai A, Lau SLH, Yam JW, Xu X, Wang X
    Sensors (Basel), 2021 Jan 05;21(1).
    PMID: 33466530 DOI: 10.3390/s21010313
    Due to medium scattering, absorption, and complex light interactions, capturing objects from the underwater environment has always been a difficult task. Single-pixel imaging (SPI) is an efficient imaging approach that can obtain spatial object information under low-light conditions. In this paper, we propose a single-pixel object inspection system for the underwater environment based on a compressive sensing super-resolution convolutional neural network (CS-SRCNN). With the CS-SRCNN algorithm, image reconstruction can be achieved with 30% of the total pixels in the image. We also investigate the impact of compression ratios on underwater object SPI reconstruction performance. In addition, we analyzed the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) to determine the quality of the reconstructed image. Our work is compared to the SPI system and SRCNN method to demonstrate its efficiency in capturing object results from an underwater environment. The PSNR and SSIM of the proposed method have increased to 35.44% and 73.07%, respectively. This work provides new insight into SPI applications and creates a better alternative for underwater optical object imaging to achieve good quality.
    Matched MeSH terms: Image Processing, Computer-Assisted
  19. Sophia Jamila Zahra, Riza Sulaiman, Anton Satria Prabuwono, Seyed Mostafa Mousavi Kahaki
    MyJurnal
    Feature descriptors for image retrieval have emerged as an important part of computer vision and image analysis applications. In recent decades, researchers have used algorithms to generate effective, efficient, and stable methods in image processing, particularly shape representation, matching, and leaf retrieval. Existing leaf retrieval methods are insufficient to achieve an adequate retrieval rate due to the inherent difficulties of the available shape descriptors for different leaf images. Shape analysis and comparison for plant leaf retrieval are investigated in this study. Different image features may result in different significance interpretations of images, even when they come from almost identically shaped images. A new image transform, known as the harmonic mean projection transform (HMPT), is proposed in this study as a feature descriptor method to extract leaf features. By using the harmonic mean function, the part of the signal carrying information of greater importance is emphasized in signal acquisition. Features are extracted from the whole image region, with all pixels considered. Results indicate better classification rates when compared with other classification methods.
    Matched MeSH terms: Image Processing, Computer-Assisted
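A projection-style descriptor built on the harmonic mean can be sketched as follows; the exact form of the paper's HMPT is not given in the summary above, so the row-wise profile below is only a hypothetical reading of the idea:

```python
def harmonic_mean_projection(image):
    """Harmonic mean of the non-zero pixels in each row of a grayscale
    image, reducing a 2-D shape to a 1-D feature vector. This is a
    hypothetical reading of HMPT; the paper's transform may differ."""
    profile = []
    for row in image:
        vals = [p for p in row if p > 0]
        profile.append(len(vals) / sum(1.0 / p for p in vals) if vals else 0.0)
    return profile

leaf = [               # tiny binary-ish "leaf" silhouette with a darker stem row
    [0, 4, 4, 0],
    [2, 4, 4, 2],
    [0, 4, 4, 0],
]
print([round(v, 2) for v in harmonic_mean_projection(leaf)])   # [4.0, 2.67, 4.0]
```

The harmonic mean is dominated by the smallest values in each row, so darker (low-intensity) pixels pull the profile down strongly; that weighting toward "information of greater importance" is the property the abstract appeals to.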
  20. Al-Ameen Z, Sulong G
    Scanning, 2015 Mar-Apr;37(2):116-25.
    PMID: 25663630 DOI: 10.1002/sca.21187
    Contrast is a distinctive visual attribute that indicates the quality of an image. Computed Tomography (CT) images are often characterized as poor quality due to their low-contrast nature. Although many innovative ideas have been proposed to overcome this problem, the outcomes, especially in terms of accuracy, visual quality and speed, are falling short and there remains considerable room for improvement. Therefore, an improved version of the single-scale Retinex algorithm is proposed to enhance the contrast while preserving the standard brightness and natural appearance, with low implementation time and without accentuating the noise for CT images. The novelties of the proposed algorithm consist of tuning the standard single-scale Retinex, adding a normalized-ameliorated Sigmoid function and adapting some parameters to improve its enhancement ability. The proposed algorithm is tested with synthetically and naturally degraded low-contrast CT images, and its performance is also verified with contemporary enhancement techniques using two prevalent quality evaluation metrics-SSIM and UIQI. The results obtained from intensive experiments exhibited significant improvement not only in enhancing the contrast but also in increasing the visual quality of the processed images. Finally, the proposed low-complexity algorithm provided satisfactory results with no apparent errors and outperformed all the comparative methods.
    Matched MeSH terms: Image Processing, Computer-Assisted
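The tuned single-scale Retinex with a sigmoid remap described above can be sketched in one dimension. Here the repeated binomial smoothing stands in for the Gaussian surround, and the sigmoid gain is an assumed parameter, since the paper's exact "normalized-ameliorated Sigmoid function" is not specified in the abstract:

```python
import math

def retinex_1d(signal, passes=8):
    """Single-scale Retinex on one scan line: log(I) - log(I * G), with the
    Gaussian surround approximated by repeated [1, 2, 1]/4 smoothing."""
    blurred = list(signal)
    for _ in range(passes):
        padded = [blurred[0]] + blurred + [blurred[-1]]
        blurred = [0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
                   for i in range(1, len(padded) - 1)]
    return [math.log(s + 1.0) - math.log(b + 1.0) for s, b in zip(signal, blurred)]

def sigmoid_stretch(values, gain=5.0):
    """Sigmoid remap of Retinex output into (0, 1); the gain is an assumed
    stand-in for the paper's normalized-ameliorated sigmoid."""
    return [1.0 / (1.0 + math.exp(-gain * v)) for v in values]

line = [30, 30, 30, 120, 120, 120, 30, 30]   # low-contrast CT-like scan line
out = sigmoid_stretch(retinex_1d(line))
print([round(v, 2) for v in out])
```

Pixels brighter than their smoothed surround map above 0.5 and darker ones below it, so local contrast is expanded without shifting the overall brightness far from mid-gray, which is the behavior the abstract claims for the tuned Retinex.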