Displaying publications 1 - 20 of 66 in total

  1. Ibrahim NA, Suliadi S
    Comput Methods Programs Biomed, 2011 Dec;104(3):e122-32.
    PMID: 21764167 DOI: 10.1016/j.cmpb.2011.06.003
    Correlated ordinal data are common in many areas of research. The data may arise from longitudinal studies in biological, medical, or clinical fields. The prominent characteristic of these data is that within-subject observations are correlated, whereas between-subject observations are independent. Many methods have been proposed to analyze correlated ordinal data. One way to evaluate the performance of a proposed model on small or moderate sized data sets is through simulation studies. It is thus important to provide a tool for generating correlated ordinal data for use in simulation studies. In this paper, we describe a macro program for generating correlated ordinal data based on the R language and SAS IML.
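As an aside, the core idea (independent of the authors' R/SAS IML macro, which is not reproduced here) can be sketched with a Gaussian copula: draw latent multivariate-normal vectors with an exchangeable within-subject correlation, then threshold them into ordinal categories. The function name, cutpoints, and correlation value below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def correlated_ordinal(n_subjects, n_times, rho, cutpoints, seed=0):
    """Draw correlated ordinal responses via a Gaussian copula:
    within-subject latent variables share an exchangeable correlation rho,
    and each latent value is thresholded into an ordinal category."""
    rng = np.random.default_rng(seed)
    # exchangeable (compound-symmetric) latent correlation matrix
    corr = np.full((n_times, n_times), rho)
    np.fill_diagonal(corr, 1.0)
    latent = rng.multivariate_normal(np.zeros(n_times), corr, size=n_subjects)
    # category k+1 if the latent value falls between cutpoints[k-1] and cutpoints[k]
    return np.searchsorted(cutpoints, latent) + 1  # categories 1..K

# 100 subjects, 4 repeated measures, 3 ordinal categories
y = correlated_ordinal(100, 4, rho=0.5, cutpoints=[-0.5, 0.8])
```

Between-subject independence holds because each subject's latent vector is drawn separately; within-subject correlation comes entirely from the shared latent correlation matrix.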
    Matched MeSH terms: Data Interpretation, Statistical*
  2. Mendoza Beltran A, Prado V, Font Vivanco D, Henriksson PJG, Guinée JB, Heijungs R
    Environ Sci Technol, 2018 02 20;52(4):2152-2161.
    PMID: 29406730 DOI: 10.1021/acs.est.7b06365
    Interpretation of comparative Life Cycle Assessment (LCA) results can be challenging in the presence of uncertainty. To aid in interpreting such results under the goal of any comparative LCA, we aim to provide guidance to practitioners by gaining insights into uncertainty-statistics methods (USMs). We review five USMs: discernibility analysis, impact category relevance, overlap area of probability distributions, null hypothesis significance testing (NHST), and modified NHST. We provide a common notation, terminology, and calculation platform, and further cross-compare all USMs by applying them to a case study on electric cars. USMs belong to either a confirmatory or an exploratory branch of statistics, each serving different purposes for practitioners. Results highlight that common uncertainties and the magnitude of differences per impact are key in offering reliable insights. Common uncertainties are particularly important, as disregarding them can lead to incorrect recommendations. On the basis of these considerations, we recommend the modified NHST as a confirmatory USM. We also recommend discernibility analysis as an exploratory USM, along with recommendations for its improvement, as it disregards the magnitude of the differences. While further research is necessary to support our conclusions, the results and supporting material provided can help LCA practitioners deliver a more robust basis for decision-making.
    Matched MeSH terms: Data Interpretation, Statistical*
  3. Ser G, Keskin S, Can Yilmaz M
    Sains Malaysiana, 2016;45:1755-1761.
    Multiple imputation is a widely used method in missing data analysis. It consists of a three-stage process: imputation, analysis, and pooling. The number of imputations selected in the first stage is important. Hence, this study aimed to examine the performance of the multiple imputation method at different numbers of imputations. A monotone missing data pattern was created by deleting approximately 24% of the observations from the continuous outcome variable with complete data. In the first stage of the multiple imputation method, monotone regression imputation was performed at different numbers of imputations (m=3, 5, 10 and 50). In the second stage, parameter estimates and their standard errors were obtained by applying a general linear model to each of the complete data sets obtained. In the final stage, the results were pooled, and the effect of the number of imputations on the parameter estimates and their standard errors was evaluated on the basis of these results. In conclusion, the efficiency of the parameter estimates at m=50 was about 99%. Hence, at the given missing observation rate, the efficiency and performance of the multiple imputation method increased as the number of imputations increased.
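The pooling stage described above follows Rubin's rules: the pooled estimate is the mean of the m complete-data estimates, and the total variance combines within- and between-imputation variance. A minimal sketch (the variable names and toy numbers are hypothetical, not the study's data); the quoted ~99% efficiency at m=50 corresponds to the standard relative-efficiency formula (1 + gamma/m)^(-1), where gamma is the fraction of missing information:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m complete-data results with Rubin's rules:
    qbar = mean estimate, W = within-imputation variance,
    B = between-imputation variance, T = total variance."""
    m = len(estimates)
    qbar = np.mean(estimates)
    W = np.mean(variances)            # average squared standard error
    B = np.var(estimates, ddof=1)     # variability across imputations
    T = W + (1 + 1 / m) * B
    return qbar, T

# pooled results from m=5 hypothetical imputed-data analyses
est, tot_var = pool_rubin([2.1, 1.9, 2.0, 2.2, 1.8],
                          [0.04, 0.05, 0.04, 0.05, 0.04])
```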
    Matched MeSH terms: Data Interpretation, Statistical
  4. Liew KJ, Ramli A, Abd Majid A
    PLoS One, 2016;11(6):e0156724.
    PMID: 27315105 DOI: 10.1371/journal.pone.0156724
    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
    Matched MeSH terms: Data Interpretation, Statistical*
  5. Balamurugan S, Muthu BA, Peng SL, Wahab MHA
    Big Data, 2020 10;8(5):450-451.
    PMID: 33090023 DOI: 10.1089/big.2020.29038.cfp
    Matched MeSH terms: Data Interpretation, Statistical*
  6. Sim SZ, Gupta RC, Ong SH
    Int J Biostat, 2018 Jan 09;14(1).
    PMID: 29306919 DOI: 10.1515/ijb-2016-0070
    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit of ZICMP is comparable to or better than that of these models.
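For reference, the ZICMP probability mass function can be sketched as follows (a generic sketch of the distribution itself, not the authors' regression model; with nu=1 the CMP component reduces to the Poisson, so the pmf reduces to ZIP):

```python
import math

def cmp_norm(lam, nu, max_terms=200):
    """Normalizing constant Z(lam, nu) = sum_j lam^j / (j!)^nu,
    accumulated multiplicatively to avoid factorial overflow."""
    term, Z = 1.0, 1.0
    for j in range(1, max_terms):
        term *= lam / j**nu
        Z += term
    return Z

def zicmp_pmf(y, lam, nu, pi):
    """Zero-inflated Conway-Maxwell-Poisson pmf: a point mass pi
    inflates (or, if negative, deflates) zeros on top of CMP(lam, nu)."""
    cmp_y = (lam**y / math.factorial(y)**nu) / cmp_norm(lam, nu)
    return pi + (1 - pi) * cmp_y if y == 0 else (1 - pi) * cmp_y
```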
    Matched MeSH terms: Data Interpretation, Statistical*
  7. Ghanim F, Darus M
    ScientificWorldJournal, 2013;2013:475643.
    PMID: 24396297 DOI: 10.1155/2013/475643
    By using a linear operator, we obtain some new results for a normalized analytic function f defined by means of the Hadamard product of Hurwitz zeta function. A class related to this function will be introduced and the properties will be discussed.
    Matched MeSH terms: Data Interpretation, Statistical
  8. Soleimani Amiri M, Ramli R
    Sensors (Basel), 2021 May 03;21(9).
    PMID: 34063574 DOI: 10.3390/s21093171
    It is necessary to control the movement of a complex multi-joint structure such as a robotic arm in order to reach a target position accurately in various applications. In this paper, a hybrid optimal Genetic-Swarm approach to the Inverse Kinematics (IK) solution of a robotic arm is presented. Each joint is controlled by a Proportional-Integral-Derivative (PID) controller optimized with the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), together called Genetic-Swarm Optimization (GSO). GSO solves the IK of each joint while the dynamic model is determined by the Lagrangian. The tuning of the PID is defined as an optimization problem and is solved by PSO for the simulated model in a virtual environment. A Graphical User Interface has been developed as a front-end application. Based on the combination of hybrid optimal GSO and PID control, it is ascertained that the system works efficiently. Finally, we compare the hybrid optimal GSO with conventional optimization methods by statistical analysis.
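The tuning step described above amounts to minimizing a cost function over the PID gains, which an optimizer such as GA or PSO can then search. A toy sketch (the first-order plant, Euler integration, and integral-of-absolute-error cost are illustrative assumptions, not the paper's robotic-arm dynamics):

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500):
    """Discrete PID driving a toy first-order plant dy/dt = -y + u.
    Returns the integral of absolute error (IAE), a fitness that an
    optimizer (e.g. GA or PSO) could minimize to tune the gains."""
    y, integ, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        y += (-y + u) * dt                       # Euler step of the plant
        prev_err = err
        iae += abs(err) * dt
    return iae

# a reasonably tuned controller should score better (lower IAE)
# than a weak proportional-only one
iae_tuned = simulate_pid(5.0, 1.0, 0.1)
iae_weak = simulate_pid(0.1, 0.0, 0.0)
```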
    Matched MeSH terms: Data Interpretation, Statistical
  9. Lim TO
    Int J Qual Health Care, 2003 Feb;15(1):3-4.
    PMID: 12630794 DOI: 10.1093/intqhc/15.1.3
    Matched MeSH terms: Data Interpretation, Statistical*
  10. Al-Jumeily D, Ghazali R, Hussain A
    PLoS One, 2014;9(8):e105766.
    PMID: 25157950 DOI: 10.1371/journal.pone.0105766
    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from its wide range of applications, including control systems, engineering processes, environmental systems, and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.
    Matched MeSH terms: Data Interpretation, Statistical
  11. Chang Y, Yeong KY
    Curr Med Chem, 2021 Mar 29.
    PMID: 33781187 DOI: 10.2174/0929867328666210329124415
    There has been intense research interest in sirtuins since the establishment of their regulatory roles in a myriad of pathological processes. In the last two decades, much research effort has been dedicated to the development of sirtuin modulators. Although synthetic sirtuin modulators are the focus, natural modulators remain an integral part to be further explored in this area, as they are found to possess therapeutic potential in various diseases including cancers, neurodegenerative diseases, and metabolic disorders. Owing to the importance of this cluster of compounds, this review gives a current overview of the naturally occurring sirtuin modulators, their associated molecular mechanisms, and their therapeutic benefits. Furthermore, comprehensive data mining resulted in detailed statistical analyses pertaining to the development trend of sirtuin modulators from 2010 to 2020. Lastly, the challenges and future prospects of natural sirtuin modulators in drug discovery are also discussed.
    Matched MeSH terms: Data Interpretation, Statistical
  12. Mahamad Maifiah MH, Velkov T, Creek DJ, Li J
    Methods Mol Biol, 2019;1946:321-328.
    PMID: 30798566 DOI: 10.1007/978-1-4939-9118-1_28
    Acinetobacter baumannii is rapidly emerging as a multidrug-resistant pathogen responsible for nosocomial infections including pneumonia, bacteremia, wound infections, urinary tract infections, and meningitis. Metabolomics provides a powerful tool to gain a system-wide snapshot of cellular biochemical networks under defined conditions and has been increasingly applied to bacterial physiology and drug discovery. Here we describe an optimized sample preparation method for untargeted metabolomics studies in A. baumannii. Our method provides a significant recovery of intracellular metabolites to demonstrate substantial differences in global metabolic profiles among A. baumannii strains.
    Matched MeSH terms: Data Interpretation, Statistical
  13. Mehdizadeh S, Sanjari MA
    J Biomech, 2017 11 07;64:236-239.
    PMID: 28958634 DOI: 10.1016/j.jbiomech.2017.09.009
    This study aimed to determine the effect of added noise, filtering, and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR=55-25dB in 5dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3Hz to 19Hz in 2Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percentage error in LyE in the presence of noise. These observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, findings indicated that at least 50 walking strides are required to calculate LyE so as to account for the effect of noise. Finally, the observations support that conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE.
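Rosenstein's algorithm referenced above can be sketched compactly: embed the series, pair each point with its nearest neighbor in state space, track the mean log divergence of the pairs over time, and fit the initial slope. This is a simplified illustration (no Theiler window or mean-period normalization), demonstrated on a chaotic logistic map rather than walker data:

```python
import numpy as np

def rosenstein_lye(x, dim=3, tau=1, horizon=10):
    """Rosenstein-style largest Lyapunov exponent sketch."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)          # exclude self-matches
    nn = np.argmin(dist, axis=1)            # nearest neighbor of each point
    curve = []
    for k in range(horizon):
        d = [np.linalg.norm(emb[i + k] - emb[nn[i] + k])
             for i in range(n - horizon) if nn[i] < n - horizon]
        d = [s for s in d if s > 0]
        curve.append(np.mean(np.log(d)))    # mean log divergence at step k
    return np.polyfit(np.arange(horizon), curve, 1)[0]  # initial slope

# chaotic logistic map (r=4): true largest LyE is ln 2 ~ 0.69
x, xs = 0.2, []
for _ in range(400):
    x = 4.0 * x * (1.0 - x)
    xs.append(x)
lye = rosenstein_lye(np.array(xs))
```

With only 400 points and no saturation cutoff the slope is a rough estimate, which echoes the paper's point that the algorithm is sensitive to series length and noise.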
    Matched MeSH terms: Data Interpretation, Statistical
  14. Tan CS, Ting WS, Mohamad MS, Chan WH, Deris S, Shah ZA
    Biomed Res Int, 2014;2014:213656.
    PMID: 25250315 DOI: 10.1155/2014/213656
    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method.
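Of the methods reviewed, PCA is the simplest to sketch from scratch: center the data, take the SVD, and project the samples onto the leading right singular vectors. The toy "expression matrix" below is random, purely for illustration:

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components
    (directions of maximal variance) via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # reduced representation, shape (n, k)

# toy expression matrix: 50 samples x 200 genes reduced to 5 features
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
Z = pca_reduce(X, 5)
```

Downstream analyses then operate on the 50x5 matrix Z instead of the full 50x200 data, which is the feature-extraction step the review describes.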
    Matched MeSH terms: Data Interpretation, Statistical*
  15. Norsa'adah B
    Med J Malaysia, 2004 Dec;59(5):692; author reply 693-5.
    PMID: 15889579
    Matched MeSH terms: Data Interpretation, Statistical*
  16. Khairani AZ, Ahmad NS, Khairani MZ
    J Appl Meas, 2017;18(4):449-458.
    PMID: 29252212
    Adolescence is an important transitional phase in human development, in which adolescents experience physiological as well as psychological changes. Nevertheless, these changes are often misunderstood by teachers, parents, and even the adolescents themselves. Thus, conflicts arise, and adolescents are affected by the conflict physically and emotionally. An important emotional state that results from this conflict is anger. This article describes the development and validation of the 34-item Adolescent Anger Inventory (AAI) to measure types of anger among Malaysian adolescents. A sample of 2,834 secondary-school adolescents provided responses that were analyzed using the Rasch measurement model framework. The four response categories worked satisfactorily for the scale developed. A total of 11 items did not fit the model's expectations and were thus dropped from the final scale. The scale also demonstrated satisfactory reliability and separation evidence. In addition, items in the AAI showed no evidence of DIF between 14- and 16-year-old adolescents. Nevertheless, the AAI did not have sufficient items to target adolescents with a high level of physically aggressive anger.
    Matched MeSH terms: Data Interpretation, Statistical*
  17. Zabidin N, Mohamed AM, Zaharim A, Marizan Nor M, Rosli TI
    Int Orthod, 2018 03;16(1):133-143.
    PMID: 29478934 DOI: 10.1016/j.ortho.2018.01.009
    OBJECTIVES: To evaluate the relationship between human evaluation of the dental-arch form and mathematical analysis via two different methods of quantifying the arch form, and to establish agreement with the fourth-order polynomial equation.

    MATERIALS AND METHODS: This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arches with normal occlusion. For the human evaluation, a convenience sample of orthodontic practitioners ranked photo images of the dental casts from the most tapered to the least tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between the human evaluation and the mathematical objective analyses were evaluated.

    RESULTS: Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable.

    CONCLUSION: The use of fourth-order polynomial equation may be facilitative in obtaining a smooth curve, which can produce a template for individual arch that represents all potential tooth positions for the dental arch.

    Matched MeSH terms: Data Interpretation, Statistical*
  18. Sanagi MM, Ling SL, Nasir Z, Hermawan D, Ibrahim WA, Abu Naim A
    J AOAC Int, 2010 2 20;92(6):1833-8.
    PMID: 20166602
    LOD and LOQ are two important performance characteristics in method validation. This work compares three methods based on the International Conference on Harmonization and EURACHEM guidelines, namely, signal-to-noise, blank determination, and linear regression, to estimate the LOD and LOQ for volatile organic compounds (VOCs) by experimental methodology using GC. Five VOCs, toluene, ethylbenzene, isopropylbenzene, n-propylbenzene, and styrene, were chosen for the experimental study. The results indicated that the estimated LODs and LOQs were not equivalent and could vary by a factor of 5 to 6 for the different methods. It is, therefore, essential to have a clearly described procedure for estimating the LOD and LOQ during method validation to allow interlaboratory comparisons.
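The linear-regression method compared above is commonly implemented with the ICH formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the calibration slope and sigma the residual standard deviation of the fit. A sketch with hypothetical calibration data (not the paper's GC measurements):

```python
import numpy as np

def lod_loq(conc, response):
    """ICH-style LOD/LOQ estimates from a calibration line:
    LOD = 3.3*sigma/S, LOQ = 10*sigma/S, with S the slope and sigma
    the residual standard deviation of the regression."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    resid = response - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # n-2 dof
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# hypothetical calibration data (concentration vs. detector response)
lod, loq = lod_loq([1, 2, 5, 10, 20], [10.2, 19.8, 50.5, 99.1, 201.0])
```

By construction LOQ/LOD is fixed at 10/3.3, which is one reason estimates from this method can differ by the factor of 5 to 6 from signal-to-noise or blank-determination estimates, as the abstract reports.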
    Matched MeSH terms: Data Interpretation, Statistical
  19. Wahab AA, Salim MI, Ahamat MA, Manaf NA, Yunus J, Lai KW
    Med Biol Eng Comput, 2016 Sep;54(9):1363-73.
    PMID: 26463520 DOI: 10.1007/s11517-015-1403-7
    Breast cancer is the most common cancer among women globally, and the number of young women diagnosed with this disease is gradually increasing over the years. Mammography is the current gold-standard technique, although it is known to be less sensitive in detecting tumors in women with dense breast tissue. Detecting an early-stage tumor in young women is crucial for better survival chances and treatment. The thermography technique has the capability to complement mammography with additional functional information on physiological changes by describing the thermal and vascular properties of the tissues. Studies on breast thermography have been carried out to improve the accuracy of the thermography technique from various perspectives. However, the difficulty of gathering women affected by cancer across different age groups necessitated this comprehensive study, which aimed to investigate the effect of different density levels on the surface temperature distribution profiles of breast models. These models, namely extremely dense (ED), heterogeneously dense (HD), scattered fibroglandular (SF), and predominantly fatty (PF), with embedded tumors, were developed using the finite element method. A conventional Pennes' bioheat model was used to perform the numerical simulation on different case studies, and the results obtained were then compared, using a hypothesis statistical analysis method, to the reference breast model developed previously. The results show that the ED, SF, and PF breast models had significant mean differences in surface temperature profile with a p value <0.025, while the HD breast model data pair agreed with the null hypothesis formulated, owing to its tissue composition percentage being comparable to the reference model. The findings suggest that breast density level should be considered a contributing factor to alteration of the surface thermal distribution profile in both breast cancer detection and analysis when using the thermography technique.
    Matched MeSH terms: Data Interpretation, Statistical
  20. Abdulrauf Sharifai G, Zainol Z
    Genes (Basel), 2020 06 27;11(7).
    PMID: 32605144 DOI: 10.3390/genes11070717
    Training a machine learning algorithm on an imbalanced data set is an inherently challenging task. It becomes more demanding with limited samples but a massive number of features (high dimensionality). High dimensional and imbalanced data sets pose severe challenges in many real-world applications, such as biomedical data sets. Numerous researchers have investigated either imbalanced classes or high dimensional data sets and come up with various methods. Nonetheless, few approaches reported in the literature have addressed the intersection of the high dimensionality and class imbalance problems, due to their complicated interactions. Lately, feature selection has become a well-known technique used to overcome this problem by selecting discriminative features that represent the minority and majority classes. This paper proposes a new method called Robust Correlation Based Redundancy and Binary Grasshopper Optimization Algorithm (rCBR-BGOA); rCBR-BGOA employs an ensemble of multi-filters coupled with the Correlation-Based Redundancy method to select optimal feature subsets. A binary Grasshopper Optimisation Algorithm (BGOA) is used to cast the feature selection process as an optimisation problem and select the best (near-optimal) combination of features from the majority and minority classes. The obtained results, supported by proper statistical analysis, indicate that rCBR-BGOA can improve classification performance for high dimensional and imbalanced data sets in terms of the G-mean and Area Under the Curve (AUC) performance metrics.
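The G-mean metric cited in the results is the geometric mean of sensitivity and specificity, a standard measure for imbalanced classification because it penalizes a classifier that ignores the minority class. A minimal reference implementation (not the authors' code):

```python
import math

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity (recall on the positive class)
    and specificity (recall on the negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(sens * spec)
```

A classifier that predicts only the majority class scores a G-mean of 0 regardless of overall accuracy, which is exactly the failure mode plain accuracy hides.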
    Matched MeSH terms: Data Interpretation, Statistical