Displaying publications 1 - 20 of 89 in total

  1. Ong P
    ScientificWorldJournal, 2014;2014:943403.
    PMID: 25298971 DOI: 10.1155/2014/943403
    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step-size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
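The adaptive step-size idea behind this modification can be illustrated in a few lines of NumPy. The sketch below is a generic cuckoo-style search with a geometrically decaying, heavy-tailed step scale; the sphere objective, population size, and decay constants are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def cuckoo_search(f, dim=5, n_nests=15, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([f(n) for n in nests])
    for t in range(n_iter):
        # Adaptive step size: geometrically shrink the flight scale,
        # trading early exploration for late exploitation.
        alpha = 0.5 * (0.01 / 0.5) ** (t / n_iter)
        for i in range(n_nests):
            candidate = nests[i] + alpha * rng.standard_cauchy(dim)
            fc = f(candidate)
            if fc < fitness[i]:  # greedy replacement (intensification)
                nests[i], fitness[i] = candidate, fc
        # Abandon the worst quarter of nests (diversification).
        worst = fitness.argsort()[-(n_nests // 4):]
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fitness[worst] = [f(n) for n in nests[worst]]
    best = nests[fitness.argmin()].copy()
    return best, float(fitness.min())

best, val = cuckoo_search(sphere)
```

Because replacements are greedy and the best nest is never abandoned, the best fitness is monotonically non-increasing across generations.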
  2. Ahmed MA, Zaidan BB, Zaidan AA, Salih MM, Lakulu MMB
    Sensors (Basel), 2018 Jul 09;18(7).
    PMID: 29987266 DOI: 10.3390/s18072208
    Loss of the ability to speak or hear exerts psychological and social impacts on the affected persons due to the lack of proper communication. Multiple and systematic scholarly interventions that vary according to context have been implemented to overcome disability-related difficulties. Sign language recognition (SLR) systems based on sensory gloves are significant innovations that aim to procure data on the shape or movement of the human hand. Innovative technology in this area remains limited and dispersed. The available trends and gaps should be explored in this research approach to provide valuable insights into technological environments. Thus, a review is conducted to create a coherent taxonomy that describes the latest research, divided into four main categories: development, framework, other hand gesture recognition, and reviews and surveys. We then analyze the characteristics of glove-based SLR systems, develop a roadmap for technology evolution, and discuss the limitations of the field. This will help researchers to understand the current options and gaps in this area, thus contributing to this line of research.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  3. Ewe ELR, Lee CP, Lim KM, Kwek LC, Alqahtani A
    PLoS One, 2024;19(4):e0298699.
    PMID: 38574042 DOI: 10.1371/journal.pone.0298699
    Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity to capture fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, the model performance is further optimized through hyperparameter optimization, utilizing Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  4. Esmaeilpour M, Naderifar V, Shukur Z
    PLoS One, 2014;9(9):e106313.
    PMID: 25243670 DOI: 10.1371/journal.pone.0106313
    Over the last decade, design patterns have been used extensively to generate reusable solutions to frequently encountered problems in software engineering and object oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  5. Ng H, Tan WH, Abdullah J, Tong HL
    ScientificWorldJournal, 2014;2014:376569.
    PMID: 25143972 DOI: 10.1155/2014/376569
    This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers. The extracted features are normalized with linear scaling, followed by feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on the SOTON Small DB in most cases.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
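The Gaussian smoothing and linear-scaling steps described here are standard preprocessing operations and easy to sketch. Below, a noisy joint-angle trajectory is smoothed with a truncated Gaussian kernel and min-max scaled to [0, 1]; the sigma, signal model, and cycle length are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_smooth(signal, sigma=2.0):
    """Smooth a 1-D feature trajectory with a truncated Gaussian kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # 'same' mode keeps the trajectory length; edges are slightly damped.
    return np.convolve(signal, kernel, mode="same")

def linear_scale(features, lo=0.0, hi=1.0):
    """Min-max normalize each feature column to [lo, hi]."""
    mn, mx = features.min(axis=0), features.max(axis=0)
    return lo + (features - mn) * (hi - lo) / (mx - mn)

# Hypothetical noisy joint-angle trajectory (degrees) over one gait cycle.
t = np.linspace(0, 2 * np.pi, 100)
angle = 30 * np.sin(t) + np.random.default_rng(0).normal(0, 3, 100)
smooth = gaussian_smooth(angle)
scaled = linear_scale(smooth.reshape(-1, 1))
```

Smoothing first and scaling second matters: outlier spikes would otherwise dominate the min-max range and compress the useful part of the signal.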
  6. Al-Dabbagh MM, Salim N, Rehman A, Alkawaz MH, Saba T, Al-Rodhaan M, et al.
    ScientificWorldJournal, 2014;2014:612787.
    PMID: 25309952 DOI: 10.1155/2014/612787
    This paper presents a novel feature-mining approach for documents that cannot be mined via optical character recognition (OCR). By identifying the intimate relationship between the text and graphical components, the proposed technique pulls out the Start, End, and Exact values for each bar. Furthermore, the word 2-gram and Euclidean distance methods are used to accurately detect and determine plagiarism in bar charts.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
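The word 2-gram and Euclidean-distance checks mentioned above are straightforward to sketch. The snippet below compares chart labels by Jaccard overlap of their word 2-gram sets and compares extracted (Start, End, Exact) bar values by Euclidean distance; the example labels and values are hypothetical, and this is not the authors' full pipeline.

```python
import numpy as np

def word_2grams(text):
    """Set of consecutive word pairs (word 2-grams) in a text label."""
    words = text.lower().split()
    return {(a, b) for a, b in zip(words, words[1:])}

def twogram_overlap(t1, t2):
    """Jaccard overlap of 2-gram sets: a simple plagiarism signal."""
    g1, g2 = word_2grams(t1), word_2grams(t2)
    return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 0.0

def bar_distance(bars1, bars2):
    """Euclidean distance between extracted (Start, End, Exact) values."""
    return float(np.linalg.norm(np.asarray(bars1) - np.asarray(bars2)))

# Hypothetical chart captions differing in a single word.
a = "annual revenue by region in 2013"
b = "annual revenue by region in 2014"
sim = twogram_overlap(a, b)
```

A high 2-gram overlap combined with a small bar-value distance would flag a chart pair as a plagiarism candidate.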
  7. Samimi P, Ravana SD
    ScientificWorldJournal, 2014;2014:135641.
    PMID: 24977172 DOI: 10.1155/2014/135641
    A test collection is used to evaluate information retrieval systems in laboratory-based evaluation experiments. In a classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable and low-cost evaluation of retrieval systems. Crowdsourcing, as a novel method of data acquisition, is broadly used in many research fields. It has been proven that crowdsourcing is an inexpensive and quick solution as well as a reliable alternative for creating relevance judgments. One of the crowdsourcing applications in IR is judging the relevance of query-document pairs. In order to have a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely, with an emphasis on quality control. This paper explores different factors that influence the accuracy of relevance judgments accomplished by workers and how to improve the reliability of judgments in a crowdsourcing experiment.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  8. Taha AM, Mustapha A, Chen SD
    ScientificWorldJournal, 2013;2013:325973.
    PMID: 24396295 DOI: 10.1155/2013/325973
    When the amount of data and information is said to double every 20 months or so, feature selection becomes highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, and signal processing. This work presents a bio-inspired method, the Bat Algorithm hybridized with a Naive Bayes classifier (BANB). The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. The discussion focused on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, hence removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved more stable than the other methods and is capable of producing more general feature subsets.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
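The core of a BANB-style method is a wrapper loop: candidate feature subsets are scored by the held-out accuracy of a Naive Bayes classifier. The sketch below implements that wrapper with a from-scratch Gaussian Naive Bayes and a random candidate search standing in for the Bat Algorithm's velocity and loudness updates, which are omitted; the synthetic data and subset-sampling rate are illustrative assumptions.

```python
import numpy as np

def gnb_accuracy(X, y, mask, train, test):
    """Held-out accuracy of Gaussian Naive Bayes on a feature subset."""
    Xs = X[:, mask]
    classes = np.unique(y)
    scores = np.zeros((len(test), len(classes)))
    for k, c in enumerate(classes):
        Xc = Xs[train][y[train] == c]
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
        log_prior = np.log(len(Xc) / len(train))
        ll = -0.5 * (np.log(2 * np.pi * var) + (Xs[test] - mu) ** 2 / var)
        scores[:, k] = log_prior + ll.sum(axis=1)
    return (classes[scores.argmax(axis=1)] == y[test]).mean()

def select_features(X, y, n_candidates=200, seed=0):
    """Wrapper selection: score random feature subsets by accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, test = idx[: 2 * len(y) // 3], idx[2 * len(y) // 3:]
    best_mask, best_acc = None, -1.0
    for _ in range(n_candidates):
        mask = rng.random(X.shape[1]) < 0.5  # candidate subset
        if not mask.any():
            continue
        acc = gnb_accuracy(X, y, mask, train, test)
        # Prefer higher accuracy; break ties with fewer features.
        if acc > best_acc or (acc == best_acc and mask.sum() < best_mask.sum()):
            best_mask, best_acc = mask, acc
    return best_mask, best_acc

# Synthetic data: features 0 and 1 are informative, the rest are noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.normal(0.0, 1.0, (300, 10))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y
mask, acc = select_features(X, y)
```

The tie-break toward smaller subsets mirrors the paper's emphasis on selecting fewer features without sacrificing accuracy.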
  9. Oong TH, Isa NA
    IEEE Trans Neural Netw, 2011 Nov;22(11):1823-36.
    PMID: 21968733 DOI: 10.1109/TNN.2011.2169426
    This paper presents a new evolutionary approach called the hybrid evolutionary artificial neural network (HEANN) for simultaneously evolving the topology and weights of artificial neural networks (ANNs). Evolutionary algorithms (EAs) with strong global search capabilities are likely to locate the most promising region. However, they are less efficient in fine-tuning the search space locally. HEANN emphasizes balancing global search and local search in the evolutionary process by adapting the mutation probability and the step size of the weight perturbation. This distinguishes it from most previous studies, which incorporate an EA to search for network topology and gradient learning for weight updating. Four benchmark functions were used to test the evolutionary framework of HEANN. In addition, HEANN was tested on seven classification benchmark problems from the UCI machine learning repository. Experimental results show the superior performance of HEANN in fine-tuning the network complexity within a small number of generations while preserving the generalization capability compared with other algorithms.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  10. Yap KS, Lim CP, Abidin IZ
    IEEE Trans Neural Netw, 2008 Sep;19(9):1641-6.
    PMID: 18779094 DOI: 10.1109/TNN.2008.2000992
    In this brief, a new neural network model called generalized adaptive resonance theory (GART) is introduced. GART is a hybrid model that comprises a modified Gaussian adaptive resonance theory (MGA) and the generalized regression neural network (GRNN). It is an enhanced version of the GRNN, which preserves the online learning properties of adaptive resonance theory (ART). A series of empirical studies to assess the effectiveness of GART in classification, regression, and time series prediction tasks is conducted. The results demonstrate that GART is able to produce good performances as compared with those of other methods, including the online sequential extreme learning machine (OSELM) and sequential learning radial basis function (RBF) neural network models.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  11. Tabatabaey-Mashadi N, Sudirman R, Khalid PI, Lange-Küttner C
    Percept Mot Skills, 2015 Jun;120(3):865-94.
    PMID: 26029964
    Sequential strategies of digitized tablet drawings by 6-7-yr.-old children (N = 203) of average and below-average handwriting ability were analyzed. A Beery Visual Motor Integration (BVMI) and a Bender-Gestalt (BG) pattern, each composed of two tangential shapes, were predefined into area sectors for automatic analysis and adaptive mapping of the drawings. Girls more often began on the left side and used more strokes than boys. The below-average handwriting group showed more directional diversity and idiosyncratic strategies.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  12. Abbasi A, Woo CS, Ibrahim RW, Islam S
    PLoS One, 2015;10(4):e0123427.
    PMID: 25884854 DOI: 10.1371/journal.pone.0123427
    Digital image watermarking is an important technique for the authentication of multimedia content and copyright protection. Conventional digital image watermarking techniques are often vulnerable to geometric distortions such as Rotation, Scaling, and Translation (RST). These distortions desynchronize the watermark information embedded in an image and thus disable watermark detection. To solve this problem, we propose an RST-invariant domain watermarking technique based on fractional calculus. We have constructed a domain using the Heaviside function of order alpha (HFOA). The HFOA models the signal as a polynomial for watermark embedding. The watermark is embedded in all the coefficients of the image. We have also constructed a fractional variance formula using a fractional Gaussian field. A cross-correlation method based on the fractional Gaussian field is used for watermark detection. Furthermore, the proposed method enables blind watermark detection, where the original image is not required during detection, thereby making it more practical than non-blind watermarking techniques. Experimental results confirmed that the proposed technique has a high level of robustness.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  13. Vijayasarveswari V, Andrew AM, Jusoh M, Sabapathy T, Raof RAA, Yasin MNM, et al.
    PLoS One, 2020;15(8):e0229367.
    PMID: 32790672 DOI: 10.1371/journal.pone.0229367
    Breast cancer is the most common cancer among women and one of the main causes of death for women worldwide. To attain optimum medical treatment for breast cancer, early detection is crucial. This paper proposes a multi-stage feature selection method that extracts statistically significant features for breast cancer size detection using proposed data normalization techniques. Ultra-wideband (UWB) signals, controlled using a microcontroller, are transmitted via an antenna from one end of the breast phantom and received on the other end. These ultra-wideband analogue signals are represented in both the time and frequency domains. The preprocessed digital data are passed to the proposed multi-stage feature selection algorithm. This algorithm has four selection stages. It comprises data normalization methods, feature extraction, data dimensionality reduction, and feature fusion. The output data are fused together to form the proposed datasets, namely, the 8-HybridFeature, 9-HybridFeature and 10-HybridFeature datasets. The classification performance of these datasets is tested using the Support Vector Machine, Probabilistic Neural Network and Naïve Bayes classifiers for breast cancer size classification. The research findings indicate that the 8-HybridFeature dataset performs better than the other two datasets. For the 8-HybridFeature dataset, the Naïve Bayes classifier (91.98%) outperformed the Support Vector Machine (90.44%) and Probabilistic Neural Network (80.05%) classifiers in terms of classification accuracy. The finalized method is tested and visualized in a MATLAB-based 2D and 3D environment.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  14. Acharya UR, Bhat S, Koh JEW, Bhandary SV, Adeli H
    Comput Biol Med, 2017 Sep 01;88:72-83.
    PMID: 28700902 DOI: 10.1016/j.compbiomed.2017.06.022
    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  15. Mostafa SA, Mustapha A, Mohammed MA, Ahmad MS, Mahmoud MA
    Int J Med Inform, 2018 04;112:173-184.
    PMID: 29500017 DOI: 10.1016/j.ijmedinf.2018.02.001
    Autonomous agents are being widely used in many systems, such as ambient assisted-living systems, to perform tasks on behalf of humans. However, these systems usually operate in complex environments that entail uncertain, highly dynamic, or irregular workload. In such environments, autonomous agents tend to make decisions that lead to undesirable outcomes. In this paper, we propose a fuzzy-logic-based adjustable autonomy (FLAA) model to manage the autonomy of multi-agent systems that are operating in complex environments. This model aims to facilitate the autonomy management of agents and help them make competent autonomous decisions. The FLAA model employs fuzzy logic to quantitatively measure and distribute autonomy among several agents based on their performance. We implement and test this model in the Automated Elderly Movements Monitoring (AEMM-Care) system, which uses agents to monitor the daily movement activities of elderly users and perform fall detection and prevention tasks in a complex environment. The test results show that the FLAA model improves the accuracy and performance of these agents in detecting and preventing falls.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  16. Gandhamal A, Talbar S, Gajre S, Hani AF, Kumar D
    Comput Biol Med, 2017 04 01;83:120-133.
    PMID: 28279861 DOI: 10.1016/j.compbiomed.2017.03.001
    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in the classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation technique locally to medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely, edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
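The S-curve (sigmoid) gray-level transformation at the core of this method is simple to sketch in its global form; the paper applies it locally in windows. The gain, cutoff, and synthetic low-contrast image below are illustrative assumptions.

```python
import numpy as np

def s_curve_enhance(img, gain=8.0, cutoff=0.5):
    """Sigmoid (S-curve) gray-level transformation on a [0, 1] image.

    Stretches the distance between dark and bright values around
    `cutoff`, strengthening edges between adjacent regions.
    """
    out = 1.0 / (1.0 + np.exp(-gain * (img - cutoff)))
    # Rescale so the output still spans the full [0, 1] range.
    lo = 1.0 / (1.0 + np.exp(gain * cutoff))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - cutoff)))
    return (out - lo) / (hi - lo)

# Low-contrast synthetic "image": gray values cluster around mid-gray.
img = np.clip(np.random.default_rng(0).normal(0.5, 0.08, (64, 64)), 0, 1)
enh = s_curve_enhance(img)
```

Because the sigmoid's slope around the cutoff exceeds 1 after rescaling, mid-gray differences (and hence local gradients) are amplified, which is the edge-strengthening effect the abstract describes.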
  17. Suriani NS, Hussain A, Zulkifley MA
    Sensors (Basel), 2013 Aug 05;13(8):9966-98.
    PMID: 23921828 DOI: 10.3390/s130809966
    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  18. Ibitoye MO, Hamzaid NA, Zuniga JM, Hasnan N, Wahab AK
    Sensors (Basel), 2014;14(12):22940-70.
    PMID: 25479326 DOI: 10.3390/s141222940
    The research conducted in the last three decades has collectively demonstrated that the skeletal muscle performance can be alternatively assessed by mechanomyographic signal (MMG) parameters. Indices of muscle performance, not limited to force, power, work, endurance and the related physiological processes underlying muscle activities during contraction have been evaluated in the light of the signal features. As a non-stationary signal that reflects several distinctive patterns of muscle actions, the illustrations obtained from the literature support the reliability of MMG in the analysis of muscles under voluntary and stimulus evoked contractions. An appraisal of the standard practice including the measurement theories of the methods used to extract parameters of the signal is vital to the application of the signal during experimental and clinical practices, especially in areas where electromyograms are contraindicated or have limited application. As we highlight the underpinning technical guidelines and domains where each method is well-suited, the limitations of the methods are also presented to position the state of the art in MMG parameters extraction, thus providing the theoretical framework for improvement on the current practices to widen the opportunity for new insights and discoveries. Since the signal modality has not been widely deployed due partly to the limited information extractable from the signals when compared with other classical techniques used to assess muscle performance, this survey is particularly relevant to the projected future of MMG applications in the realm of musculoskeletal assessments and in the real time detection of muscle activity.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  19. Noor NM, Than JC, Rijal OM, Kassim RM, Yunus A, Zeki AA, et al.
    J Med Syst, 2015 Mar;39(3):22.
    PMID: 25666926 DOI: 10.1007/s10916-015-0214-6
    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding- and morphology-based segmentation coupled with feedback that detects large deviations and applies a corrective segmentation. This feedback is analogous to a control system: it allows detection of abnormal or severe lung disease and feeds back into an online segmentation, improving the overall performance of the system. This feedback system encompasses a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by comparing the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). The left lung's segmentation performance was 96.52% for the Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for the Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. The right lung's segmentation performance was 97.24% for the Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
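The Jaccard and Dice figures reported above are direct set-overlap computations on binary masks, related by Dice = 2J/(1+J). A minimal sketch with toy masks (not the study's data):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice(a, b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

# Toy "lung" masks: ground truth vs. an automated segmentation
# that misses a thin strip at the upper boundary.
gt = np.zeros((100, 100), bool)
gt[20:80, 30:70] = True    # 60 x 40 region
seg = np.zeros((100, 100), bool)
seg[22:80, 30:70] = True   # 58 x 40 region (2-row under-segmentation)
```

Dice is always at least as large as Jaccard for the same pair of masks, which is why the Dice percentages in the abstract exceed the Jaccard ones.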
  20. Moghaddasi Z, Jalab HA, Md Noor R, Aghabozorgi S
    ScientificWorldJournal, 2014;2014:606570.
    PMID: 25295304 DOI: 10.1155/2014/606570
    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling the high dimensionality and redundancy of the extracted features. Moreover, existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number algorithm (RLRN), by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
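Kernel PCA, the nonlinear dimension-reduction step found most effective here, can be sketched from scratch in NumPy: build an RBF kernel matrix, double-center it, and project onto its leading eigenvectors. This is a generic kernel PCA on toy ring data, not the paper's RLRN feature pipeline; gamma and the data are illustrative assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Project X onto its leading kernel principal components (RBF kernel)."""
    # Pairwise squared distances and RBF kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Double-center K so the implicit features have zero mean.
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Symmetric eigendecomposition; eigh returns ascending eigenvalues.
    vals, vecs = np.linalg.eigh(Kc)
    vals = vals[::-1][:n_components]
    vecs = vecs[:, ::-1][:, :n_components]
    # Scale eigenvectors by sqrt(eigenvalue) to get the projections.
    return vecs * np.sqrt(np.clip(vals, 1e-12, None))

# Two concentric rings: linearly inseparable in the 2-D input space.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.where(np.arange(200) < 100, 1.0, 3.0)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + rng.normal(0, 0.05, (200, 2))
Z = kernel_pca(X, n_components=2, gamma=1.0)
```

Unlike linear PCA, the RBF kernel lets the projection capture the radial structure of the rings, which is the kind of nonlinear feature the abstract credits for kernel PCA's advantage.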